question_id (int64, 59.5M–79.6M) | creation_date (date, 2020-01-01 to 2025-05-14) | link (string, 60–163 chars) | question (string, 53–28.9k chars) | accepted_answer (string, 26–29.3k chars) | question_vote (int64, 1–410) | answer_vote (int64, −9–482) |
---|---|---|---|---|---|---|
71,271,759 | 2022-2-25 | https://stackoverflow.com/questions/71271759/how-to-change-markupsafe-version-in-virtual-environment | I am trying to make an application using python and gRPC as shown in this article - link I am able to run the app successfully on my terminal but to run with a frontend I need to run it as a flask app, codebase. And I am doing all this in a virtual environment. when I run my flask command FLASK_APP=marketplace.py flask run This is the error I get ImportError: cannot import name 'soft_unicode' from 'markupsafe' (/Users/alex/Desktop/coding/virt/lib/python3.8/site-packages/markupsafe/__init__.py) On researching about this error I found this link - it basically tells us that currently I am using a higher version of MarkUpSafe library than required. So I did pip freeze --local inside the virtualenv and got MarkUpSafe version to be MarkupSafe==2.1.0 I think if I change the version of this library from 2.1.0 to 2.0.1 then the flask app might run. How can I change this library's version from the terminal? PS: If you think changing the version of the library won't help in running the flask app, please let me know what else can I try in this. | If downgrading will solve the issue for you try the following code inside your virtual environment. pip install MarkupSafe==2.0.1 | 11 | 20 |
71,272,151 | 2022-2-25 | https://stackoverflow.com/questions/71272151/return-generator-instead-of-list-from-df-to-dict | I am working on a large Pandas DataFrame which needs to be converted into dictionaries before being processed by another API. The required dictionaries can be generated by calling the .to_dict(orient='records') method. As stated in the docs, the returned value depends on the orient option: Returns: dict, list or collections.abc.Mapping Return a collections.abc.Mapping object representing the DataFrame. The resulting transformation depends on the orient parameter. For my case, passing orient='records', a list of dictionaries is returned. When dealing with lists, the complete memory required to store the list items, is reserved/allocated. As my dataframe can get rather large, this might lead to memory issues especially as the code might be executed on lower spec target systems. I could certainly circumvent this issue by processing the dataframe chunk-wise and generate the list of dictionaries for each chunk which is then passed to the API. Furthermore, calling iter(df.to_dict(orient='records')) would return the desired generator, but would not reduce the required memory footprint as the list is created intermediately. Is there a way to directly return a generator expression from df.to_dict(orient='records') instead of a list in order to reduce the memory footprint? | There is not a way to get a generator directly from to_dict(orient='records'). However, it is possible to modify the to_dict source code to be a generator instead of returning a list comprehension: from pandas.core.common import standardize_mapping from pandas.core.dtypes.cast import maybe_box_native def dataframe_records_gen(df_): columns = df_.columns.tolist() into_c = standardize_mapping(dict) for row in df_.itertuples(index=False, name=None): yield into_c( (k, maybe_box_native(v)) for k, v in dict(zip(columns, row)).items() ) Sample Code: import pandas as pd df = pd.DataFrame({ 'A': [1, 2], 'B': [3, 4] }) # Using Generator for row in dataframe_records_gen(df): print(row) # For Comparison with to_dict function print("to_dict", df.to_dict(orient='records')) Output: {'A': 1, 'B': 3} {'A': 2, 'B': 4} to_dict [{'A': 1, 'B': 3}, {'A': 2, 'B': 4}] For more natural syntax, it's also possible to register a custom accessor: import pandas as pd from pandas.core.common import standardize_mapping from pandas.core.dtypes.cast import maybe_box_native @pd.api.extensions.register_dataframe_accessor("gen") class GenAccessor: def __init__(self, pandas_obj): self._obj = pandas_obj def records(self): columns = self._obj.columns.tolist() into_c = standardize_mapping(dict) for row in self._obj.itertuples(index=False, name=None): yield into_c( (k, maybe_box_native(v)) for k, v in dict(zip(columns, row)).items() ) Which makes this generator accessible via the gen accessor in this case: df = pd.DataFrame({ 'A': [1, 2], 'B': [3, 4] }) # Using Generator through registered custom accessor for row in df.gen.records(): print(row) # For Comparison with to_dict function print("to_dict", df.to_dict(orient='records')) Output: {'A': 1, 'B': 3} {'A': 2, 'B': 4} to_dict [{'A': 1, 'B': 3}, {'A': 2, 'B': 4}] | 5 | 4 |
71,271,825 | 2022-2-25 | https://stackoverflow.com/questions/71271825/how-to-get-setup-cfg-metadata-at-the-command-line-python | When you have a setup.py file, you can get the name of the package via the command: C:\some\dir>python setup.py --name And this would print the name of the package to the command line. In an attempt to adhere to best practice, I'm trying to migrate away from setup.py by putting everything in setup.cfg since everything that was previously in setup.py was static content. But our build pipeline depends on being able to call python setup.py --name. I'm looking to rewrite the pipeline in such a way that I don't need to create a setup.py file. Is there way to get the name of the package when you have a setup.cfg but not a setup.py file? | Maybe using the ConfigParser Python module ? python -c "from configparser import ConfigParser; cf = ConfigParser(); cf.read('setup.cfg'); print(cf['metadata']['name'])" | 10 | 4 |
71,265,214 | 2022-2-25 | https://stackoverflow.com/questions/71265214/github-actions-issue-error-process-completed-with-exit-code-2 | I have following piece of code for github actions: name: Python application on: push: branches: [ main ] jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - name: Install dependencies run: / python -m pip install --upgrade pip python -m pip install numpy pytest if [ -f requirements.txt ]; then pip install -r requirements.txt; fi - name: Test Selenium run: / python -m pytest -v -m devs But when I commit, and actions start running, I get this error: Run / python -m pip install --upgrade pip python -m pip install numpy pytest if [ -f requirements.txt ]; then pip install -r requirements.txt; fi / python -m pip install --upgrade pip python -m pip install numpy pytest if [ -f requirements.txt ]; then pip install -r requirements.txt; fi shell: /usr/bin/bash -e {0} /home/runner/work/_temp/317511f5-0874-43d8-a0ae-2601804ff811.sh: line 1: syntax error near unexpected token `then' Error: Process completed with exit code 2. And this is just under my Install dependencies. Am I missing something super obvious? Thank you in advance for the help. | Yes - you have a simple mistake there :) To have multiple commands under run you have to use: run: | not \ \ is used later on to have one bash command split into multiple lines name: Python application on: push: branches: [ main ] jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - name: Install dependencies run: | python -m pip install --upgrade pip python -m pip install numpy pytest if [ -f requirements.txt ]; then pip install -r requirements.txt; fi - name: Test Selenium run: | python -m pytest -v -m devs | 5 | 6 |
71,253,495 | 2022-2-24 | https://stackoverflow.com/questions/71253495/how-to-annotate-the-type-of-arguments-forwarded-to-another-function | Let's say we have a trivial function that calls open() but with a fixed argument: def open_for_writing(*args, **kwargs): kwargs['mode'] = 'w' return open(*args, **kwargs) If I now try to call open_for_writing(some_fake_arg = 123), no type checker (e.g. mypy) can tell that this is an incorrect invocation: it's missing the required file argument, and is adding another argument that isn't part of the open signature. How can I tell the type checker that *args and **kwargs must be a subset of the open parameter spec? I realise Python 3.10 has the new ParamSpec type, but it doesn't seem to apply here because you can't get the ParamSpec of a concrete function like open. | I think out of the box this is not possible. However, you could write a decorator that takes the function that contains the arguments you want to get checked for (open in your case) as an input and returns the decorated function, i.e. open_for_writing in your case. This of course only works with python 3.10 or using typing_extensions as it makes use of ParamSpec from typing import TypeVar, ParamSpec, Callable, Optional T = TypeVar('T') P = ParamSpec('P') def take_annotation_from(this: Callable[P, Optional[T]]) -> Callable[[Callable], Callable[P, Optional[T]]]: def decorator(real_function: Callable) -> Callable[P, Optional[T]]: def new_function(*args: P.args, **kwargs: P.kwargs) -> Optional[T]: return real_function(*args, **kwargs) return new_function return decorator @take_annotation_from(open) def open_for_writing(*args, **kwargs): kwargs['mode'] = 'w' return open(*args, **kwargs) open_for_writing(some_fake_arg=123) open_for_writing(file='') As shown here, mypy complains now about getting an unknown argument. | 19 | 11 |
71,261,347 | 2022-2-25 | https://stackoverflow.com/questions/71261347/runtimeerror-dataloader-worker-exited-unexpectedly | I am new to PyTorch and Machine Learning so I try to follow the tutorial from here: https://medium.com/@nutanbhogendrasharma/pytorch-convolutional-neural-network-with-mnist-dataset-4e8a4265e118 By copying the code step by step I got the following error for no reason. I tried the program on another computer and it gives syntax error. However, my IDE didn't warn my anything about syntax. I am really confused how I can fix the issue. Any help is appreciated. RuntimeError: DataLoader worker exited unexpectedly Here is the code. import torch from torchvision import datasets from torchvision.transforms import ToTensor import torch.nn as nn import matplotlib.pyplot as plt from torch.utils.data import DataLoader from torch import optim from torch.autograd import Variable train_data = datasets.MNIST( root='data', train=True, transform=ToTensor(), download=True, ) test_data = datasets.MNIST( root='data', train=False, transform=ToTensor() ) print(train_data) print(test_data) print(train_data.data.size()) print(train_data.targets.size()) plt.imshow(train_data.data[0], cmap='gray') plt.title('%i' % train_data.targets[0]) plt.show() figure = plt.figure(figsize=(10, 8)) cols, rows = 5, 5 for i in range(1, cols * rows + 1): sample_idx = torch.randint(len(train_data), size=(1,)).item() img, label = train_data[sample_idx] figure.add_subplot(rows, cols, i) plt.title(label) plt.axis("off") plt.imshow(img.squeeze(), cmap="gray") plt.show() loaders = { 'train': DataLoader(train_data, batch_size=100, shuffle=True, num_workers=1), 'test': DataLoader(test_data, batch_size=100, shuffle=True, num_workers=1), } class CNN(nn.Module): def __init__(self): super(CNN, self).__init__() self.conv1 = nn.Sequential( nn.Conv2d( in_channels=1, out_channels=16, kernel_size=5, stride=1, padding=2, ), nn.ReLU(), nn.MaxPool2d(kernel_size=2), ) self.conv2 = nn.Sequential( nn.Conv2d(16, 32, 5, 1, 2), nn.ReLU(), nn.MaxPool2d(2), ) # fully connected layer, output 10 classes self.out = nn.Linear(32 * 7 * 7, 10) def forward(self, x): x = self.conv1(x) x = self.conv2(x) # flatten the output of conv2 to (batch_size, 32 * 7 * 7) x = x.view(x.size(0), -1) output = self.out(x) return output, x # return x for visualization cnn = CNN() print(cnn) loss_func = nn.CrossEntropyLoss() print(loss_func) optimizer = optim.Adam(cnn.parameters(), lr=0.01) print(optimizer) num_epochs = 10 def train(num_epochs, cnn, loaders): cnn.train() # Train the model total_step = len(loaders['train']) for epoch in range(num_epochs): for i, (images, labels) in enumerate(loaders['train']): # gives batch data, normalize x when iterate train_loader b_x = Variable(images) # batch x b_y = Variable(labels) # batch y output = cnn(b_x)[0] loss = loss_func(output, b_y) # clear gradients for this training step optimizer.zero_grad() # backpropagation, compute gradients loss.backward() # apply gradients optimizer.step() if (i + 1) % 100 == 0: print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}' .format(epoch + 1, num_epochs, i + 1, total_step, loss.item())) pass pass pass train(num_epochs, cnn, loaders) def evalFunc(): # Test the model cnn.eval() with torch.no_grad(): correct = 0 total = 0 for images, labels in loaders['test']: test_output, last_layer = cnn(images) pred_y = torch.max(test_output, 1)[1].data.squeeze() accuracy = (pred_y == labels).sum().item() / float(labels.size(0)) pass print('Test Accuracy of the model on the 10000 test images: %.2f' % 
accuracy) pass evalFunc() sample = next(iter(loaders['test'])) imgs, lbls = sample actual_number = lbls[:10].numpy() test_output, last_layer = cnn(imgs[:10]) pred_y = torch.max(test_output, 1)[1].data.numpy().squeeze() print(f'Prediction number: {pred_y}') print(f'Actual number: {actual_number}') | If you are working on jupyter notebook. The problem is more likely to be num_worker. You should set num_worker=0. You can find here some solutions to follow. Because unfortunately, jupyter notebook has some issues with running multiprocessing. | 6 | 6 |
71,261,860 | 2022-2-25 | https://stackoverflow.com/questions/71261860/what-does-q-means-in-pip-install-q-package-name | In the below example, I am trying to install gradio package but I have seen -q flag is used in some tutorials to install packages.What does '-q' flag means in pip install -q ? pip install -q gradio | Running pip3 --help gives you this: -q, --quiet Give less output. Option is additive, and can be used up to 3 times (corresponding to WARNING, ERROR, and CRITICAL logging levels). So the -q option reduces the output produced by pip. It does not affect the installation process. It is a general option. | 7 | 10 |
71,257,947 | 2022-2-24 | https://stackoverflow.com/questions/71257947/what-is-the-scope-of-the-as-binding-in-an-except-statement-or-context-manage | I know that in general python only makes new scopes for classes, functions etc., but I'm confused by the as statement in a try/except block or context manager. Variables assigned inside the block are accessible outside it, which makes sense, but the variable bound with as itself is not. So this fails: try: raise RuntimeError() except RuntimeError as error: pass print(repr(error)) but this succeeds: try: raise RuntimeError() except RuntimeError as e: error = e print(repr(error)) What's going on with the variable bound with as, and why don't normal python scoping rules apply? The PEP indicates that it's just a normally bound python variable, but that doesn't seem to be the case. | As explained in PEP 3110, as well as current documentation, variables bound with as in an except block are explicitly and specially cleared at the end of the block, even though they share the same local scope. This improves the immediacy of garbage collection. The as syntax was originally not available for exceptions in 2.x; it was backported for 2.6, but the old semantics were preserved. The same does not apply to with blocks: >>> from contextlib import contextmanager >>> @contextmanager ... def test(): ... yield ... >>> with test() as a: ... pass ... >>> a # contains None; does not raise NameError >>> >>> def func(): # similarly within a function ... with test() as a: ... pass ... return a ... >>> func() >>> The behaviour is specific to the except block, not to the as keyword. | 7 | 8 |
71,242,328 | 2022-2-23 | https://stackoverflow.com/questions/71242328/renv-venv-jupyterlab-irkernel-will-it-blend | Short version What is the simple and elegant way to use renv, venv and jupyterlab with IRkernel together? In particular, how to automatically activate renv from jupyter notebook that is not in the root directory? Long version I'm embracing a "polyglot" data science style, which means using both python and R in tandem. Now venv is awesome, and renv is awesome, and jupyterlab is awesome, so I'm trying to figure out what is the neat way to use them all together. I almost have it, so probably a few hints would be enough to finish this setup. Here's where I'm at. System Start with a clean OS, and install system level requirements: R + renv and Python + venv. For example on Ubuntu it would be approximatelly like that: # R sudo apt install r-base sudo R -e "install.packages('renv')" # Python sudo apt install python3.8 sudo apt install python3.8-venv Project Now create a bare bones project jupyrenv with two files: jupyrenv/ ├── DESCRIPTION └── requirements.txt DESCRIPTION contains R dependencies: Suggests: IRkernel, fortunes requirements.txt contains python dependencies: jupyterlab Create virtual environments and install dependencies (order matters, R has to follow python): # Python python3.8 -m venv venv source venv/bin/activate pip install -r requirements.txt # R R -e "renv::init(bare=TRUE)" R -e "renv::install()" R -e "IRkernel::installspec()" Very neat so far! Jupyter launch jupyter from the command line and rejoice, it works! jupyter-lab What's not to like? Unfortunatelly, if I create a folder (say notebooks) and launch an R notebook there, it does not work :( [I 2022-02-23 19:07:24.628 ServerApp] Creating new directory in [I 2022-02-23 19:07:31.159 ServerApp] Creating new notebook in /notebooks [I 2022-02-23 19:07:31.416 ServerApp] Kernel started: 0aa2c276-18dc-4511-b308-e78234fa71d4 Error in loadNamespace(name) : there is no package called ‘IRkernel’ Calls: :: ... loadNamespace -> withRestarts -> withOneRestart -> doWithOneRestart Execution halted Attempt to fix It seems that renv is not used from a subfolder, so we need to hint the R process to use it. I tried to add an extra .Rprofile file the notebooks subfolder: jupyrenv/ ├── DESCRIPTION ├── requirements.txt ├── renv ├── venv ├── notebooks │ ├── .Rprofile │ └── Untitled.ipynb ├── .Rprofile └── Untitled.ipynb With the following contents: .Rprofile: source("../renv/activate.R") And it kind of works, but not really. First, when trying to create an R notebook in the notebooks directory, it creates a new renv: [I 2022-02-23 19:22:28.986 ServerApp] Creating new notebook in /notebooks [I 2022-02-23 19:22:29.298 ServerApp] Kernel started: b40a88b3-b0bb-4839-af45-85811ec3073c # Bootstrapping renv 0.15.2 -------------------------------------------------- * Downloading renv 0.15.2 ... OK (downloaded source) * Installing renv 0.15.2 ... Done! * Successfully installed and loaded renv 0.15.2. Then that instance of jupyter works, and I can use it, but if I restart, it stops working and get's back to the missing IRkernel error: [I 2022-02-23 19:24:58.912 ServerApp] Kernel started: 822d9372-47fd-43f5-8ac7-77895ef124dc Error in loadNamespace(name) : there is no package called ‘IRkernel’ Calls: :: ... loadNamespace -> withRestarts -> withOneRestart -> doWithOneRestart What am I missing? | I opened this question as an issue in the renv github repo, and maintainers kindly provided a workaround. 
The contents of the notebooks/.Rprofile should be as follows: owd <- setwd(".."); source("renv/activate.R"); setwd(owd) It blends! 🎉 | 5 | 8 |
71,245,281 | 2022-2-23 | https://stackoverflow.com/questions/71245281/sqlalchemy-how-to-escape-a-bind-parameter-inside-of-text | How can I escape a : inside of a string passed to text() to prevent SQLAlchemy from treating it like a bindparameter? conn.execute(text("select 'My favorite emoticon is :p' from dual")).fetchone() Will result in: sqlalchemy.exc.StatementError: (sqlalchemy.exc.InvalidRequestError) A value is required for bind parameter 'p' (Background on this error at: http://sqlalche.me/e/14/cd3x) ' It's a bit confusing because from the context of selecting a string from the database select 'foo :bar baz' a bindparameter doesn't make much sense here. It looks like I can use a \ to escape this, but it says it is deprecated: >>> conn.execute(text("select 'My favorite emoticon is \:p' from dual")).fetchone() <stdin>:1: DeprecationWarning: invalid escape sequence \: ('My favorite emoticon is :p',) | As mentioned in the docs: For SQL statements where a colon is required verbatim, as within an inline string, use a backslash to escape But remember that the backslash is also the escape character in Python string literals, so text("select 'My favorite emoticon is \:p' from dual") is incorrect because Python will want to interpret \: as an escape character. We need to use either a "raw string" (r"") text(r"select 'My favorite emoticon is \:p' from dual") or escape the backslash itself text("select 'My favorite emoticon is \\:p' from dual") | 9 | 5 |
71,242,919 | 2022-2-23 | https://stackoverflow.com/questions/71242919/pip-install-results-in-this-error-cl-exe-failed-with-exit-code-2 | I've read all of the other questions on this error and frustratingly enough, none give a solution that works. If I run pip install sentencepiece in the cmd line, it gives me the following output. src/sentencepiece/sentencepiece_wrap.cxx(2809): fatal error C1083: Cannot open include file: 'sentencepiece_processor.h': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\BuildTools\\VC\\Tools\\MSVC\\14.16.27023\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 [end of output] I'm running python 3.10.1 and pip 22.0.3 . *I have the following Microsoft Visual C++ programs on my windows machine,which I've just done a fresh install of as it was complaining of not having a particular C++ program. MS VC++ I've even added the .exe file to my PATH variables but still I get the same error. Am I missing a particular Microsoft program on my pc? | I haven't seen this problem in Windows, but for Linux, I would normally reinstall Python after installing the dependencies (such as the MSVC thing). In that case this is especially helpful because I'm often rebuilding (compiling and other related steps) Python/Pip. Could also just be an error specific to the module and Python version combo you're trying. From a discussion in the comments: I have the pyenv-win version manager, so I was able to create venvs and test this for you. With Python 3.10.2, it fails; with Python 3.8.10, it's successful. So, yes, reinstalling does seem to be worthy of your time. | 9 | 4 |
71,244,250 | 2022-2-23 | https://stackoverflow.com/questions/71244250/why-is-numpy-cartesian-product-slower-than-pure-python-version | Input import numpy as np import itertools a = np.array([ 1, 6, 7, 8, 10, 11, 13, 14, 15, 19, 20, 23, 24, 26, 28, 29, 33, 34, 41, 42, 43, 44, 45, 46, 47, 52, 54, 58, 60, 61, 65, 70, 75]).astype(np.uint8) b = np.array([ 2, 3, 4, 10, 12, 14, 16, 20, 22, 26, 28, 29, 30, 31, 34, 36, 37, 38, 39, 40, 41, 46, 48, 49, 50, 52, 53, 55, 56, 57, 59, 60, 63, 66, 67, 68, 69, 70, 71, 74]).astype(np.uint8) c = np.array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75]).astype(np.uint8) I would like to get the Cartesian product of the 3 arrays but I do not want any duplicate elements in one row [1, 2, 1] would not be valid and only one of these two would be valid [10, 14, 0] or [14, 10, 0] since 10 and 14 are both in a and b. Python only def no_numpy(): combos = {tuple(set(i)): i for i in itertools.product(a, b, c)} combos = [val for key, val in combos.items() if len(key) == 3] %timeit no_numpy() # 32.5 ms ± 508 µs per loop Numpy # Solution from (https://stackoverflow.com/a/11146645/18158000) def cartesian_product(*arrays): broadcastable = np.ix_(*arrays) broadcasted = np.broadcast_arrays(*broadcastable) rows, cols = np.prod(broadcasted[0].shape), len(broadcasted) dtype = np.result_type(*arrays) out = np.empty(rows * cols, dtype=dtype) start, end = 0, rows for a in broadcasted: out[start:end] = a.reshape(-1) start, end = end, end + rows return out.reshape(cols, rows).T def numpy(): combos = {tuple(set(i)): i for i in cartesian_product(*[a, b, c])} combos = [val for key, val in combos.items() if len(key) == 3] %timeit numpy() # 96.2 ms ± 136 µs per loop My guess is in the numpy version converting the np.array to a set is why it is much slower but when comparing strictly getting the initial products cartesian_product is much faster than itertools.product. Can the numpy version be modified in anyway to outperform the pure python solution or is there another solution that outperforms both? | Why current implementations are slow While the first solution is faster than the second one, it is quite inefficient since it creates a lot of temporary CPython objects (at least 6 per item of itertools.product). Creating a lot of objects is expensive because they are dynamically allocated and reference-counted by CPython. The Numpy function cartesian_product is pretty fast but the iteration over the resulting array is very slow because it creates a lot of Numpy views and operates on numpy.uint8 instead of CPython int. Numpy types and functions introduce a huge overhead for very small arrays. Numpy can be used to speed up this operation as shown by @AlainT but this is not trivial to do and Numpy does not shine to solve such problems. How to improve performance One solution is to use Numba to do the job yourself more efficiently and let the Numba's JIT compiler optimizes loops. You can use 3 nested loops to efficiently generate the value of the Cartesian product and filter items. A dictionary can be used to track already seen values. The tuple of 3 items can be packed into one integer so to reduce the memory footprint and improve performance (so the dictionary can better fit in CPU caches and avoid the creation of slow tuple objects). 
Here is the resulting code: import numba as nb # Signature of the function (parameter types) # Note: `::1` means the array is contiguous @nb.njit('(uint8[::1], uint8[::1], uint8[::1])') def with_numba(a, b, c): seen = dict() for va in a: for vb in b: for vc in c: # If the 3 values are different if va != vb and vc != vb and vc != va: # Sort the 3 values using a fast sorting network v1, v2, v3 = va, vb, vc if v1 > v2: v1, v2 = v2, v1 if v2 > v3: v2, v3 = v3, v2 if v1 > v2: v1, v2 = v2, v1 # Compact the 3 values into one 32-bit integer packedKey = (np.uint32(v1) << 16) | (np.uint32(v2) << 8) | np.uint32(v3) # Is the sorted tuple (v1,v2,v3) already seen? if packedKey not in seen: # Add the value and remember the ordered tuple (va,vb,vc) packedValue = (np.uint32(va) << 16) | (np.uint32(vb) << 8) | np.uint32(vc) seen[packedKey] = packedValue res = np.empty((len(seen), 3), dtype=np.uint8) for i, packed in enumerate(seen.values()): res[i, 0] = np.uint8(packed >> 16) res[i, 1] = np.uint8(packed >> 8) res[i, 2] = np.uint8(packed) return res with_numba(a, b, c) Benchmark Here are results on my i5-9600KF processor: numpy: 122.1 ms (x 1.0) no_numpy: 49.6 ms (x 2.5) AlainT's solution: 49.0 ms (x 2.5) mathfux's solution 34.2 ms (x 3.5) mathfux's optimized solution 7.5 ms (x16.2) with_numba: 4.9 ms (x24.9) The provided solution is about 25 times faster than the slowest implementation and about 1.5 time faster than the fastest provided implementation so far. The current Numba code is bounded by the speed of the Numba dictionary operations. The code can be optimized using more low-level tricks. On solution is to replace the dictionary by a packed boolean array (1 item = 1 bit) of size 256**3/8 to track the values already seen (by checking the packedKeyth bit). The packed values can be directly added in res if the fetched bit is not set. This requires res to be preallocated to the maximum size or to implement an exponentially growing array (like list in Python or std::vector in C++). Another optimization is to sort the list and use a tiling strategy so to improve cache locality. Such optimization are far from being easy to implement but I expect them to drastically speed up the execution. If you plan to use more arrays, then the hash-map can become a bottleneck and a bit-array can be quite big. While using tiling certainly help to reduce the memory footprint, you can speed up the implementation by a large margin using Bloom filters. This probabilist data structure can speed up the execution by skipping many duplicates without causing any cache misses and with a low memory footprint. You can remove most of the duplicates and then sort the array so to then remove the duplicates. Regarding your problem, a radix sort may be faster than usual sorting algorithms. | 5 | 3 |
71,244,472 | 2022-2-23 | https://stackoverflow.com/questions/71244472/keep-getting-cors-policy-no-access-control-allow-origin-even-with-fastapi-cor | I am working on a project that has a FastAPI back end with a React Frontend. When calling the back end via fetch I sometimes get the following: Access to fetch at 'http://localhost:8000/get-main-query-data' from origin 'http://localhost:3000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled. This happens every so often, I can call one endpoint then the error gets thrown. Sometimes the error gets thrown for all endpoints I have set up Middleware in my main.py like so: (also at this line) # allows cross-origin requests from React origins = [ "http://localhost", "http://localhost:3000", ] app.add_middleware( CORSMiddleware, allow_origins=origins, allow_credentials=True, allow_methods=["*"], allow_headers=["*"], ) Could this be an issue with fetch it's self? I am worried at when I get to host this ill be getting CORS errors and my prototype won't be working :( The whole main.py is like so: Backend """ API to allow for data retrieval and manipulation. """ from typing import Optional from fastapi import FastAPI, HTTPException, status from fastapi.middleware.cors import CORSMiddleware from pydantic import BaseModel import models from db import Session app = FastAPI() """ Pydantic BaseModels for the API. """ class SignalJourneyAudiences(BaseModel): """SignalJourneyAudiences BaseModel.""" audienceId: Optional[int] # PK segment: str enabled: bool class SignalJourneyAudienceConstraints(BaseModel): """SignalJourneyAudienceConstraints BaseModel.""" uid: Optional[int] # PK constraintId: int audienceId: int # FK - SignalJourneyAudiences -> audienceId sourceId: int # FK - SignalJourneySources -> sourceId constraintTypeId: int # FK - SignalJourneyConstraintType -> constraintTypeId constraintValue: str targeting: bool frequency: int period: int class SignalJourneyAudienceConstraintRelations(BaseModel): """SignalJourneyAudienceConstraintRelations BaseModel.""" uid: Optional[int] # PK audienceId: int relation: str constraintIds: str class SignalJourneyConstraintType(BaseModel): """SignalJourneyConstraintType BaseModel.""" constraintTypeId: Optional[int] # PK constraintType: str class SingalJourneySources(BaseModel): """SignalJourneySources BaseModel.""" sourceId: Optional[int] # PK source: str # allows cross-origin requests from React origins = [ "http://localhost", "http://localhost:3000", ] app.add_middleware( CORSMiddleware, allow_origins=origins, allow_credentials=True, allow_methods=["*"], allow_headers=["*"], ) # database instance db = Session() @app.get("/") def index(): """Root endpoint.""" return { "messagee": "Welcome to Signal Journey API. Please use the API documentation to learn more." 
} @app.get("/audiences", status_code=status.HTTP_200_OK) def get_audiences(): """Get all audience data from the database.""" return db.query(models.SignalJourneyAudiences).all() @app.get("/audience-constraints", status_code=status.HTTP_200_OK) def get_audience_constraints(): """Get all audience constraint data from the database.""" return db.query(models.SignalJourneyAudienceConstraints).all() @app.get("/audience-constraints-relations", status_code=status.HTTP_200_OK) def get_audience_constraints_relations(): """Get all audience constraint data from the database.""" return db.query(models.SignalJourneyAudienceConstraintRelations).all() @app.get("/get-constraint-types", status_code=status.HTTP_200_OK) def get_constraints_type(): """Get all audience constraint data from the database.""" return db.query(models.SignalJourneyConstraintType).all() @app.post("/add-constraint-type", status_code=status.HTTP_200_OK) def add_constraint_type(sjct: SignalJourneyConstraintType): """Add a constraint type to the database.""" constraint_type_query = ( db.query(models.SignalJourneyConstraintType) .filter( models.SignalJourneyConstraintType.constraintType == sjct.constraintType.upper() and models.SignalJourneyConstraintType.constraintTypeId == sjct.constraintTypeId ) .first() ) if constraint_type_query is not None: raise HTTPException( status_code=status.HTTP_400_BAD_REQUEST, detail="Constaint type already exists.", ) constraint_type = models.SignalJourneyConstraintType( constraintType=sjct.constraintType.upper(), ) db.add(constraint_type) db.commit() return { "message": f"Constraint type {sjct.constraintType.upper()} added successfully." } @app.get("/get-sources", status_code=status.HTTP_200_OK) def get_sources(): """Get all sources data from the database.""" return db.query(models.SingalJourneySources).all() @app.post("/add-source", status_code=status.HTTP_200_OK) def add_source_type(sjs: SingalJourneySources): """Add a new source type to the database.""" source_type_query = ( db.query(models.SingalJourneySources) .filter(models.SingalJourneySources.source == sjs.source.upper()) .first() ) if source_type_query is not None: raise HTTPException( status_code=status.HTTP_400_BAD_REQUEST, detail="Source already exists.", ) source_type = models.SingalJourneySources(source=sjs.source.upper()) db.add(source_type) db.commit() return {"message": f"Source {sjs.source.upper()} added successfully."} """ Endpoints for populating the UI with data. These need to consist of some joins. 
Query to be used in SQL SELECT constraintid, sja.segment, sjs.source, sjct.constrainttype, constraintvalue, targeting, frequency, period FROM signaljourneyaudienceconstraints JOIN signaljourneyaudiences sja ON sja.audienceid = signaljourneyaudienceconstraints.audienceid; JOIN signaljourneysources sjs ON sjs.sourceid = signaljourneyaudienceconstraints.sourceid JOIN signaljourneyconstrainttype sjct ON sjct.constrainttypeid = signaljourneyaudienceconstraints.constrainttypeid """ @app.get("/get-main-query-data", status_code=status.HTTP_200_OK) def get_main_query_data(): """Returns data for the main query.""" return ( db.query( models.SignalJourneyAudienceConstraints.constraintId, models.SignalJourneyAudiences.segment, models.SingalJourneySources.source, models.SignalJourneyConstraintType.constraintType, models.SignalJourneyAudienceConstraints.constraintValue, models.SignalJourneyAudienceConstraints.targeting, models.SignalJourneyAudienceConstraints.frequency, models.SignalJourneyAudienceConstraints.period, ) .join( models.SignalJourneyAudiences, models.SignalJourneyAudiences.audienceId == models.SignalJourneyAudienceConstraints.audienceId, ) .join( models.SingalJourneySources, models.SingalJourneySources.sourceId == models.SignalJourneyAudienceConstraints.sourceId, ) .join( models.SignalJourneyConstraintType, models.SignalJourneyConstraintType.constraintTypeId == models.SignalJourneyAudienceConstraints.constraintTypeId, ) .all() ) Frontend I am calling my API endpoints like so: //form.jsx // pulls segments name from signaljourneyaudiences useEffect(() => { fetch('http://localhost:8000/audiences') .then((res) => res.json()) .then((data) => setSegmentNames(data)) .catch((err) => console.log(err)); }, []); // pulls field names from signaljourneyaudiences useEffect(() => { fetch('http://localhost:8000/get-constraint-types') .then((res) => res.json()) .then((data) => setConstraints(data)) .catch((err) => console.log(err)); }, []); // table.jsx useEffect(() => { fetch('http://localhost:8000/get-main-query-data') .then((res) => res.json()) .then((data) => { setTableData(data); }) .catch((err) => console.log(err)); }, []); As you can see here the table has been populated by the endpoints but on the other hand, one of the dropdowns have not. HTTP 500 error description INFO: 127.0.0.1:62301 - "GET /get-constraint-types HTTP/1.1" 500 Internal Server Error 2022-02-24 09:26:44,234 INFO sqlalchemy.engine.Engine [cached since 2972s ago] () ERROR: Exception in ASGI application Traceback (most recent call last): File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1702, in _execute_context context = constructor( File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 1013, in _init_compiled self.cursor = self.create_cursor() File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 1361, in create_cursor return self.create_default_cursor() File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 1364, in create_default_cursor return self._dbapi_connection.cursor() File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 1083, in cursor return self.dbapi_connection.cursor(*args, **kwargs) sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. 
The object was created in thread id 6191820800 and this is thread id 6174994432. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 372, in run_asgi result = await app(self.scope, self.receive, self.send) File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 75, in __call__ return await self.app(scope, receive, send) File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/fastapi/applications.py", line 259, in __call__ await super().__call__(scope, receive, send) File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/starlette/applications.py", line 112, in __call__ await self.middleware_stack(scope, receive, send) File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/starlette/middleware/errors.py", line 181, in __call__ raise exc File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/starlette/middleware/errors.py", line 159, in __call__ await self.app(scope, receive, _send) File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/starlette/middleware/cors.py", line 92, in __call__ await self.simple_response(scope, receive, send, request_headers=headers) File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/starlette/middleware/cors.py", line 147, in simple_response await self.app(scope, receive, send) File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/starlette/exceptions.py", line 82, in __call__ raise exc File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/starlette/exceptions.py", line 71, in __call__ await self.app(scope, receive, sender) File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__ raise e File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__ await self.app(scope, receive, send) File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/starlette/routing.py", line 656, in __call__ await route.handle(scope, receive, send) File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/starlette/routing.py", line 259, in handle await self.app(scope, receive, send) File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/starlette/routing.py", line 61, in app response = await func(request) File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/fastapi/routing.py", line 227, in app raw_response = await run_endpoint_function( File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/fastapi/routing.py", line 162, in run_endpoint_function return await run_in_threadpool(dependant.call, **values) File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/starlette/concurrency.py", line 39, in run_in_threadpool return await anyio.to_thread.run_sync(func, *args) File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/anyio/to_thread.py", line 28, in run_sync return 
await get_asynclib().run_sync_in_worker_thread(func, *args, cancellable=cancellable, File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 818, in run_sync_in_worker_thread return await future File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 754, in run result = context.run(func, *args) File "/Users/paul/Developer/signal_journey/backend/./main.py", line 109, in get_constraints_type return db.query(models.SignalJourneyConstraintType).all() File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/sqlalchemy/orm/query.py", line 2759, in all return self._iter().all() File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/sqlalchemy/orm/query.py", line 2894, in _iter result = self.session.execute( File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 1692, in execute result = conn._execute_20(statement, params or {}, execution_options) File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1614, in _execute_20 return meth(self, args_10style, kwargs_10style, execution_options) File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 325, in _execute_on_connection return connection._execute_clauseelement( File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1481, in _execute_clauseelement ret = self._execute_context( File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1708, in _execute_context self._handle_dbapi_exception( File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 2026, in _handle_dbapi_exception util.raise_( File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_ raise exception File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1702, in _execute_context context = constructor( File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 1013, in _init_compiled self.cursor = self.create_cursor() File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 1361, in create_cursor return self.create_default_cursor() File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 1364, in create_default_cursor return self._dbapi_connection.cursor() File "/Users/paul/.local/share/virtualenvs/backend-CF5omcRU/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 1083, in cursor return self.dbapi_connection.cursor(*args, **kwargs) sqlalchemy.exc.ProgrammingError: (sqlite3.ProgrammingError) SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 6191820800 and this is thread id 6174994432. 
[SQL: SELECT "SignalJourneyConstraintType"."constraintTypeId" AS "SignalJourneyConstraintType_constraintTypeId", "SignalJourneyConstraintType"."constraintType" AS "SignalJourneyConstraintType_constraintType" FROM "SignalJourneyConstraintType"] [parameters: [{}]] (Background on this error at: https://sqlalche.me/e/14/f405) | When a server side error occurs (a response code of 5xx), the CORS middleware doesn't get to add their headers since the request is effectively terminated, making it impossible for the browser to read the response. For your second problem, you should use a separate session for each invocation of your API. The reference guide has an example of how to do this: SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine) ... # Dependency def get_db(): db = SessionLocal() try: yield db finally: db.close() ... @app.post("/users/{user_id}/items/", response_model=schemas.Item) def create_item_for_user(..., db: Session = Depends(get_db)): return crud.create_user_item(db=db, item=item, user_id=user_id) | 5 | 9 |
71,238,056 | 2022-2-23 | https://stackoverflow.com/questions/71238056/msedge-failed-to-start-crashed-chrome-not-reachable | I am a beginner to Selenium python. I have tried to invoke the Edge browser with an existing profile(Default) with the following code. But it is throwing the following exception as soon as the execution starts. Can someone please help me with this? Am I missing something? edge_options = webdriver.EdgeOptions() edge_options.add_argument("user-data-dir = C:/Users/XYZ/AppData/Local/Microsoft/Edge/User Data/Default") edge_browser = webdriver.Edge(executable_path = "C:/Users/XYZ/ABC/msedgedriver.exe",options = edge_options ) edge_browser.maximize_window() WebDriverException: unknown error: MSEdge failed to start: crashed. (chrome not reachable) (The process started from msedge location C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe is no longer running, so MSEdgeDriver is assuming that MSEdge has crashed.) Note: Edge browser is getting invoked and works properly when I run the code without the following line edge_options.add_argument("user-data-dir = C:/Users/XYZ/AppData/Local/Microsoft/Edge/User Data/Default") | I came across the issue before, that's because there're running Edge processes in the background. The solution is you can back up your User Data folder in the same path and use that folder in selenium: Back up your User Data folder in the same path. Here for example, I back up the User Data folder as User Data1: Use User Data1 in your code to specify using Default profile when run Edge with Selenium: from selenium import webdriver from selenium.webdriver.edge.service import Service edge_options = webdriver.EdgeOptions() #Here you set the path of the back up profile ending with User Data1 not the profile folder edge_options.add_argument("user-data-dir=C:\\Users\\XYZ\\AppData\\Local\\Microsoft\\Edge\\User Data1") ser = Service("C:\\Users\\XYZ\\ABC\\msedgedriver.exe") edge_browser = webdriver.Edge(options = edge_options, service=ser) edge_browser.maximize_window() | 6 | 8 |
71,241,494 | 2022-2-23 | https://stackoverflow.com/questions/71241494/is-it-safe-to-use-functions-from-numpy-core | To motivate the question: In recent NumPy versions there is a performance issue with numpy.clip and in the corresponding issue a suggested workaround is to use numpy.core.umath.clip. I couldn't really find anything about the purpose of the numpy.core.umath or more generally the numpy.core.* modules. pydoc shows documentation for them but they seem not to be included in the online documentation. Therefore I am a bit unsure if it is safe (in the sense of having a stable API) to use functions from these modules or if they should rather be seen as internals that may change between releases? | The documentation of numpy.core states: Please note that this module is private. All functions and objects are available in the main numpy namespace - use that instead. This submodule is apparently not meant to be used by end-users. This is also why there is almost no online documentation about it. The functions in this submodule are more likely to change in the future than the regular ones in provided in numpy. | 5 | 5 |
71,232,996 | 2022-2-23 | https://stackoverflow.com/questions/71232996/str-object-has-no-attribute-tag-error-in-django-tutorial | I am following the Django Tutorial to learn how to work with it, but I have encountered an error very early in it and I'm not sure how to fix it. It happened while creating the django project and doing the 'Write your first view' section: https://docs.djangoproject.com/en/dev/intro/tutorial01/#write-your-first-view After following those steps carefully, while executing python3 manage.py runserver the following error appears: AttributeError: 'str' object has no attribute 'tag' This is the full error trace: Exception in thread django-main-thread: Traceback (most recent call last): File "/usr/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/noctis/.local/lib/python3.9/site-packages/django/utils/autoreload.py", line 64, in wrapper fn(*args, **kwargs) File "/home/noctis/.local/lib/python3.9/site-packages/django/core/management/commands/runserver.py", line 124, in inner_run self.check(display_num_errors=True) File "/home/noctis/.local/lib/python3.9/site-packages/django/core/management/base.py", line 438, in check all_issues = checks.run_checks( File "/home/noctis/.local/lib/python3.9/site-packages/django/core/checks/registry.py", line 77, in run_checks new_errors = check(app_configs=app_configs, databases=databases) File "/home/noctis/.local/lib/python3.9/site-packages/django/core/checks/urls.py", line 13, in check_url_config return check_resolver(resolver) File "/home/noctis/.local/lib/python3.9/site-packages/django/core/checks/urls.py", line 23, in check_resolver return check_method() File "/home/noctis/.local/lib/python3.9/site-packages/django/urls/resolvers.py", line 448, in check for pattern in self.url_patterns: File "/home/noctis/.local/lib/python3.9/site-packages/django/utils/functional.py", line 48, in __get__ res = instance.__dict__[self.name] = self.func(instance) File "/home/noctis/.local/lib/python3.9/site-packages/django/urls/resolvers.py", line 634, in url_patterns patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module) File "/home/noctis/.local/lib/python3.9/site-packages/django/utils/functional.py", line 48, in __get__ res = instance.__dict__[self.name] = self.func(instance) File "/home/noctis/.local/lib/python3.9/site-packages/django/urls/resolvers.py", line 627, in urlconf_module return import_module(self.urlconf_name) File "/usr/lib/python3.9/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 850, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "/home/noctis/code/tests/django/mysite/mysite/urls.py", line 21, in <module> path('polls/', include('polls.urls')), File "/usr/lib/python3.9/xml/etree/ElementInclude.py", line 128, in include _include(elem, loader, base_url, max_depth, set()) File "/usr/lib/python3.9/xml/etree/ElementInclude.py", line 136, in _include if e.tag == XINCLUDE_INCLUDE: AttributeError: 'str' object has no attribute 'tag' Maybe there is some incompatibility 
with the python version and django version? I'm using Python 3.9.7 and Django 4.0.2 Thanks in advance | So, I've found the mistake I've made. In the tutorial there's a point where you add to mysite/urls.py this snippet: from django.contrib import admin from django.urls import include, path urlpatterns = [ path('polls/', include('polls.urls')), path('admin/', admin.site.urls), ] The autocomplete feature for python in vscode added a different include than the one found in django.urls. Hence the error. | 6 | 28 |
71,239,764 | 2022-2-23 | https://stackoverflow.com/questions/71239764/how-to-cache-poetry-install-for-gitlab-ci | Is there a way to cache poetry install command in Gitlab CI (.gitlab-ci.yml)? For example, in node yarn there is a way to cache yarn install (https://classic.yarnpkg.com/lang/en/docs/install-ci/ Gitlab section) this makes stages a lot faster. | GitLab can only cache things in the working directory and Poetry stores packages elsewhere by default: Directory where virtual environments will be created. Defaults to {cache-dir}/virtualenvs ({cache-dir}\virtualenvs on Windows). On my machine, cache-dir is /home/chris/.cache/pypoetry. You can use the virtualenvs.in-project option to change this behaviour: If set to true, the virtualenv wil be created and expected in a folder named .venv within the root directory of the project. So, something like this should work in your gitlab-ci.yml: before_script: - poetry config virtualenvs.in-project true cache: paths: - .venv | 15 | 21 |
71,238,822 | 2022-2-23 | https://stackoverflow.com/questions/71238822/why-is-setuptools-not-available-in-environment-ubuntu-docker-image-with-python | I'm trying to build a Ubuntu 18.04 Docker image running Python 3.7 for a machine learning project. When installing specific Python packages with pip from requirements.txt, I get the following error: Collecting sklearn==0.0 Downloading sklearn-0.0.tar.gz (1.1 kB) Preparing metadata (setup.py): started Preparing metadata (setup.py): finished with status 'error' error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─> [1 lines of output] ERROR: Can not execute `setup.py` since setuptools is not available in the build environment. [end of output] Although here the error arises in the context of sklearn, the issue is not specific to one library; when I remove that libraries and try to rebuild the image, the error arises with other libraries. Here is my Dockerfile: FROM ubuntu:18.04 # install python RUN apt-get update && \ apt-get install --no-install-recommends -y \ python3.7 python3-pip python3.7-dev # copy requirements WORKDIR /opt/program COPY requirements.txt requirements.txt # install requirements RUN python3.7 -m pip install --upgrade pip && \ python3.7 -m pip install -r requirements.txt # set up program in image COPY . /opt/program What I've tried: installing python-devtools, both instead of and alongside, python3.7-dev before installing requirements with pip; installing setuptools in requirements.txt before affected libraries are installed. In both cases the same error arose. Do you know how I can ensure setuptools is available in my environment when installing libraries like sklearn? | As mentioned in comment, install setuptools with pip before running pip install -r requirements.txt. It is different than putting setuptools higher in the requirements.txt because it forces the order while the requirements file collect all the packages and installs them after so you don't control the order. | 7 | 13 |
71,162,915 | 2022-2-17 | https://stackoverflow.com/questions/71162915/conditional-call-of-a-fastapi-model | I have a multilang FastAPI connected to MongoDB. My document in MongoDB is duplicated in the two languages available and structured this way (simplified example): { "_id": xxxxxxx, "en": { "title": "Drinking Water Composition", "description": "Drinking water composition expressed in... with pesticides.", "category": "Water", "tags": ["water","pesticides"] }, "fr": { "title": "Composition de l'eau de boisson", "description": "Composition de l'eau de boisson exprimée en... présence de pesticides....", "category": "Eau", "tags": ["eau","pesticides"] }, } I therefore implemented two models DatasetFR and DatasetEN, each one makes references with specific external Models (Enum) for category and tags in each lang. class DatasetFR(BaseModel): title:str description: str category: CategoryFR tags: Optional[List[TagsFR]] # same for DatasetEN chnaging the lang tag to EN In the routes definition I forced the language parameter to declare the corresponding Model and get the corresponding validation. @router.post("?lang=fr", response_description="Add a dataset") async def create_dataset(request:Request, dataset: DatasetFR = Body(...), lang:str="fr"): ... return JSONResponse(status_code=status.HTTP_201_CREATED, content=created_dataset) @router.post("?lang=en", response_description="Add a dataset") async def create_dataset(request:Request, dataset: DatasetEN = Body(...), lang:str="en"): ... return JSONResponse(status_code=status.HTTP_201_CREATED, content=created_dataset) But this seems to be in contradiction with the DRY principle. So, I wonder here if someone knows an elegant solution to: - given the lang parameter, dynamically call the corresponding model. Or, if we can create a Parent Model Dataset that takes the lang argument and retrieve the child model Dataset<LANG>. This would incredibly ease building my API routes and the call of my models and mathematically divide by two the writing. | Option 1 A solution would be the following: Define lang as Query paramter and add a regular expression that the parameter should match. In your case, that would be ^(fr|en)$, meaning that only fr or en would be valid inputs. Thus, if no match was found, the request would stop there and the client would receive a "string does not match regex..." error. Next, define the body parameter as a generic type of dict and declare it as Body field; thus, instructing FastAPI to expect a JSON body. Following, create a dictionary of your models that you can use to look up for a model using the lang attribute. Once you find the corresponding model, try to parse the JSON body using models[lang].parse_obj(body) (equivalent to using models[lang](**body)). If no ValidationError is raised, you know the resulting model instance is valid. Otherwise, return an HTTP_422_UNPROCESSABLE_ENTITY error, including the errors, which you can handle as desired. If you would also like FR and EN being valid lang values, adjust the regex to ignore case using ^(?i)(fr|en)$ instead, and make sure to convert lang to lower case when looking up for a model (i.e., models[lang.lower()].parse_obj(body)). 
import pydantic from fastapi import FastAPI, Response, status, Body, Query from fastapi.responses import JSONResponse from fastapi.encoders import jsonable_encoder models = {"fr": DatasetFR, "en": DatasetEN} @router.post("/", response_description="Add a dataset") async def create_dataset(body: dict = Body(...), lang: str = Query(..., regex="^(fr|en)$")): try: model = models[lang].parse_obj(body) except pydantic.ValidationError as e: return Response(content=e.json(), status_code=status.HTTP_422_UNPROCESSABLE_ENTITY, media_type="application/json") return JSONResponse(content=jsonable_encoder(dict(model)), status_code=status.HTTP_201_CREATED) Update Since the two models have identical attributes (i.e., title and description), you could define a parent model (e.g., Dataset) with those two attributes, and have DatasetFR and DatasetEN models inherit those. class Dataset(BaseModel): title:str description: str class DatasetFR(Dataset): category: CategoryFR tags: Optional[List[TagsFR]] class DatasetEN(Dataset): category: CategoryEN tags: Optional[List[TagsEN]] Additionally, it might be a better approach to move the logic from inside the route to a dependency function and have it return the model, if it passes the validation; otherwise, raise an HTTPException, as also demonstrated by @tiangolo. You can use jsonable_encoder, which is internally used by FastAPI, to encode the validation errors() (the same function can also be used when returning the JSONResponse). from fastapi.exceptions import HTTPException from fastapi import Depends models = {"fr": DatasetFR, "en": DatasetEN} async def checker(body: dict = Body(...), lang: str = Query(..., regex="^(fr|en)$")): try: model = models[lang].parse_obj(body) except pydantic.ValidationError as e: raise HTTPException(detail=jsonable_encoder(e.errors()), status_code=status.HTTP_422_UNPROCESSABLE_ENTITY) return model @router.post("/", response_description="Add a dataset") async def create_dataset(model: Dataset = Depends(checker)): return JSONResponse(content=jsonable_encoder(dict(model)), status_code=status.HTTP_201_CREATED) Option 2 A further approach would be to have a single Pydantic model (let's say Dataset) and customize the validators for category and tags fields. You can also define lang as part of Dataset, thus, no need to have it as query parameter. You can use a set, as described here, to keep the values of each Enum class, so that you can efficiently check if a value exists in the Enum; and have dictionaries to quickly look up for a set using the lang attribute. In the case of tags, to verify that every element in the list is valid, use set.issubset, as described here. If an attribute is not valid, you can raise ValueError, as shown in the documentation, "which will be caught and used to populate ValidationError" (see "Note" section here). Again, if you need the lang codes written in uppercase being valid inputs, adjust the regex pattern, as described earlier. P.S. You don't even need to use Enum with this approach. Instead, populate each set below with the permitted values. For instance, categories_FR = {"Eau"} categories_EN = {"Water"} tags_FR = {"eau", "pesticides"} tags_EN = {"water", "pesticides"}. Additionally, if you would like not to use regex, but rather have a custom validation error for lang attribute as well, you could add it in the same validator decorator and perform validation similar (and previous) to the other two fields. 
from pydantic import validator categories_FR = set(item.value for item in CategoryFR) categories_EN = set(item.value for item in CategoryEN) tags_FR = set(item.value for item in TagsFR) tags_EN = set(item.value for item in TagsEN) cats = {"fr": categories_FR, "en": categories_EN} tags = {"fr": tags_FR, "en": tags_EN} def raise_error(values): raise ValueError(f'value is not a valid enumeration member; permitted: {values}') class Dataset(BaseModel): lang: str = Body(..., regex="^(fr|en)$") title: str description: str category: str tags: List[str] @validator("category", "tags") def validate_atts(cls, v, values, field): lang = values.get('lang') if lang: if field.name == "category": if v not in cats[lang]: raise_error(cats[lang]) elif field.name == "tags": if not set(v).issubset(tags[lang]): raise_error(tags[lang]) return v @router.post("/", response_description="Add a dataset") async def create_dataset(model: Dataset): return JSONResponse(content=jsonable_encoder(dict(model)), status_code=status.HTTP_201_CREATED) Update Note that in Pydantic V2, @validator has been deprecated and was replaced by @field_validator. Please have a look at this answer for more details and examples. Option 3 Another approach would be to use Discriminated Unions, as described in this answer. As per the documentation: When Union is used with multiple submodels, you sometimes know exactly which submodel needs to be checked and validated and want to enforce this. To do that you can set the same field - let's call it my_discriminator - in each of the submodels with a discriminated value, which is one (or many) Literal value(s). For your Union, you can set the discriminator in its value: Field(discriminator='my_discriminator'). Setting a discriminated union has many benefits: validation is faster since it is only attempted against one model only one explicit error is raised in case of failure the generated JSON schema implements the associated OpenAPI specification | 8 | 4 |
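Option 3 (discriminated unions) is described above but not shown in code; here is a minimal sketch of what it could look like, assuming Pydantic ≥ 1.9 and a FastAPI version that supports discriminated unions in request bodies. The route path and the trimmed-down fields are illustrative only; wrapping the union in a parent model is the safe fallback if the direct body parameter is not honored by the installed versions.

```python
from typing import Literal, Union
from typing_extensions import Annotated  # "from typing import Annotated" on Python 3.9+

from fastapi import Body, FastAPI
from pydantic import BaseModel, Field

class DatasetFR(BaseModel):
    lang: Literal["fr"]
    title: str
    description: str
    # category / tags omitted for brevity

class DatasetEN(BaseModel):
    lang: Literal["en"]
    title: str
    description: str

# Pydantic picks the right submodel by looking only at the "lang" field
Dataset = Annotated[Union[DatasetFR, DatasetEN], Field(discriminator="lang")]

app = FastAPI()

@app.post("/datasets")  # illustrative path
async def create_dataset(dataset: Dataset = Body(...)):
    return dataset
```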
71,152,069 | 2022-2-17 | https://stackoverflow.com/questions/71152069/how-to-run-python-code-directly-on-a-webpage | My problem is as follows: I have written some Python code, and I need to run it on a web page. Basically, whatever appears on the console should be displayed on the page as it is. I have no experience in web development and similar libraries, and I need to get this done in a short time. Kindly tell me how I should proceed. Note: I might be plotting some graphs as well. It would be great if they could all be displayed (sequentially) on the website | https://brython.info/ https://skulpt.org/ https://pyodide.org/en/stable/ There are multiple Python implementations in the browser: some are WebAssembly (WASM) and some are JavaScript. Is it a good idea to run Python in the browser as a replacement for JavaScript in 2022? No, it is not; learn JavaScript. No in-browser Python implementation can match JavaScript and its performance as of today, and most probably ever. | 7 | 6 |
71,196,737 | 2022-2-20 | https://stackoverflow.com/questions/71196737/how-to-filter-a-polars-dataframe-by-date | df.filter(pl.col("MyDate") >= "2020-01-01") does not work like it does in pandas. I found a workaround df.filter(pl.col("MyDate") >= pl.datetime(2020,1,1)) but this does not solve a problem if I need to use string variables. | You can turn the string into a date type e.g. with .str.to_date() Building on the example above: import polars as pl from datetime import datetime df = pl.DataFrame({ "dates": [datetime(2021, 1, 1), datetime(2021, 1, 2), datetime(2021, 1, 3)], "vals": range(3) }) df.filter(pl.col('dates') >= pl.lit(my_date_str).str.to_date()) shape: (2, 2) ┌─────────────────────┬──────┐ │ dates ┆ vals │ │ --- ┆ --- │ │ datetime[μs] ┆ i64 │ ╞═════════════════════╪══════╡ │ 2021-01-02 00:00:00 ┆ 1 │ │ 2021-01-03 00:00:00 ┆ 2 │ └─────────────────────┴──────┘ | 10 | 10 |
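When the cutoff date lives in a string variable, one option (a sketch, not the only way) is to parse it into a Python datetime first, since Polars can compare a datetime column against a Python datetime directly; the variable names here are illustrative:

```python
from datetime import datetime
import polars as pl

df = pl.DataFrame({
    "dates": [datetime(2021, 1, 1), datetime(2021, 1, 2), datetime(2021, 1, 3)],
    "vals": range(3),
})

my_date_str = "2021-01-02"
cutoff = datetime.strptime(my_date_str, "%Y-%m-%d")  # parse the string once

filtered = df.filter(pl.col("dates") >= cutoff)
print(filtered)
```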
71,172,212 | 2022-2-18 | https://stackoverflow.com/questions/71172212/find-enum-value-by-enum-name-in-string-python | I'm struggling with Python enums. I created an enum class containing various fields: class Animal(Enum): DOG = "doggy" CAT = "cute cat" I know that I can access this enum by value, i.e. by passing Animal("doggy") I will get Animal.DOG. However, I would like to achieve the same the other way around: let's say I have "DOG" and I use it to find "doggy". Example: str_to_animal_enum("DOG") = Animal.DOG I thought that I could rewrite the init function of the Enum class to check whether the string name we pass as a parameter has a counterpart among the Enum member names; if yes, then we have a match. Something like this: def __init__(cls, name): for enum in all_enums: if enum.name == name: return enum But this solution looks very heavy in computation. Let's say that I have 100 names to find; for each name we have to make N iterations in the worst case, where N is the number of members in the enum class. I think there should be something better, but I just don't know what. Do you have any ideas how it could be implemented? Thanks :) | As per the manual: >>> Animal.DOG.value 'doggy' >>> Animal.DOG.name 'DOG' >>> # For more programmatic access, with just the enum member name as a string: >>> Animal['DOG'].value 'doggy' | 16 | 26 |
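Since the bracket lookup Animal['LION'] raises KeyError for unknown names, here is a small sketch of a safe helper built on the same enum, using the __members__ mapping (an O(1) dictionary lookup rather than a scan):

```python
from enum import Enum
from typing import Optional

class Animal(Enum):
    DOG = "doggy"
    CAT = "cute cat"

def str_to_animal_enum(name: str) -> Optional[Animal]:
    # __members__ maps member names to members; .get avoids raising KeyError
    return Animal.__members__.get(name)

print(str_to_animal_enum("DOG"))         # Animal.DOG
print(str_to_animal_enum("DOG").value)   # 'doggy'
print(str_to_animal_enum("LION"))        # None for unknown names
```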
71,195,740 | 2022-2-20 | https://stackoverflow.com/questions/71195740/vs-code-text-output-unreadable-format-in-new-window | I was using a jupyter notebook inside VSCode and used ?? on the object to look the source code. The output showed : Output exceeds the size limit. Open the full output data in a text editor But when I click on it it opens the output in another window but everything is illegible. What's going on here? What are those strange characters like esc[031m? How can I get rid of them when viewing the full output data? | Those are ANSI escape codes- particularly ones for colouring. If ANSI color support in edit buffer #38834 gets implemented, then this problem will sort of "go away" by default (though I imagine it could lead to different kinds of confusion). The IPython configuration docs have a section on terminal colours: InteractiveShell.colors sets the colour of tracebacks and object info (the output from e.g. zip?). It may also affect other things if the option below is set to 'legacy'. It has four case-insensitive values: 'nocolor', 'neutral', 'linux', 'lightbg'. The default is neutral, which should be legible on either dark or light terminal backgrounds. linux is optimised for dark backgrounds and lightbg for light ones. See the rest of those docs for more info. Ideally, the VS Code extension for IPython would strip out those ANSI escape sequences when showing the full output data in a text editor, but for now, you may be able to work around that by manual configuration by setting InteractiveShell.colors to 'nocolor', or putting the following code cell at the start of your notebook: %colors nocolor (see related docs here). This problem has been brought up on the microsoft/vscode-jupyter GitHub repo at least twice: Truncated stacktraces illegible in text editor #10467 (closed pointing to another issue ticket: Output truncated with no way to view all output in the notebook itself #7096) Error traces opened in text editor are not readable #11279 (closed as a duplicate of #10467) | 11 | 6 |
71,140,633 | 2022-2-16 | https://stackoverflow.com/questions/71140633/how-to-save-the-best-estimator-in-gridsearchcv | When faced with a large dataset, I need to spend a day using GridSearchCV() to train an SVM with the best parameters. How can I save the best estimator so that I can use this trained estimator directly when I start my computer next time? | By default, GridSearchCV does not expose or store the best model instance it only returns the parameter set that led to the highest score. If you want the best predictor, you have to specify refit=True, or if you are using multiple metrics refit=name-of-your-decider-metric. This will run a final training step using the full dataset and the best parameters found. To find the optimal parameters, GridSearchCV obviously does not use the entire dataset for training, as they have to split out the hold-out validation set. Now, when you do that, you can get the model via the best_estimator_ attribute. Having this, you can pickel that model using joblib and reload it the next day to do your prediction. In a mix of pseudo and real code, that would read like: from sklearn.model_selection import GridSearchCV from sklearn.svm import SVC from joblib import dump, load svc = SVC() # Probably not what you are using, but just as an example gcv = GridSearchCV(svc, parameters, refit=True) gcv.fit(X, y) estimator = gcv.best_estimator_ dump(estimator, "your-model.joblib") # Somewhere else estimator = load("your-model.joblib") | 5 | 12 |
71,146,731 | 2022-2-16 | https://stackoverflow.com/questions/71146731/using-loc-in-pandas-without-discarding-the-outer-levels | I have a dataframe like df = pd.DataFrame({ 'level0': [0,1,2], 'level1': ['a', 'b', 'b'], 'level2':['x', 'x', 'x'], 'data': [0.12, 0.34, 0.45]} ).set_index(['level0', 'level1', 'level2']) level0 level1 level 2 data 0 a x 0.12 1 b x 0.34 2 b x 0.56 If level0, level1, and level2 are the index levels, I want to access the data at (2, b) but keep the first two levels of labels. If I do df.loc[(2, 'b')] the output is level2 data x 0.56 but my desired output is level0 level1 level 2 data 2 b x 0.56 How do I keep the levels 0 and 1 while using loc? I could add these levels back afterwards, but this is slightly annoying, and I'm doing this frequently enough to wonder if there's a one step solution. | You can use the output of MultiIndex.get_locs in iloc: >>> df.iloc[df.index.get_locs((2, 'b'))] data level0 level1 level2 2 b x 0.45 | 6 | 5 |
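Two related ways to keep all index levels, sketched against the same DataFrame as in the question: df.xs with drop_level=False, or an IndexSlice that slices the remaining level:

```python
import pandas as pd

df = pd.DataFrame({
    'level0': [0, 1, 2], 'level1': ['a', 'b', 'b'],
    'level2': ['x', 'x', 'x'], 'data': [0.12, 0.34, 0.45],
}).set_index(['level0', 'level1', 'level2'])

# Option A: cross-section that keeps the matched levels in the result
out_a = df.xs((2, 'b'), level=['level0', 'level1'], drop_level=False)

# Option B: a slice for the remaining level also keeps all three levels
# (may need df.sort_index() first if the MultiIndex is not lexsorted)
idx = pd.IndexSlice
out_b = df.loc[idx[2, 'b', :], :]

print(out_a)
print(out_b)
```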
71,192,894 | 2022-2-20 | https://stackoverflow.com/questions/71192894/python-multiprocessing-terminate-other-processes-after-one-process-finished | I have some programm in which multiple processes try to finish some function. My aim now is to stop all the other processes after one process has successfully finished the function. The python program shown below unfortunately waits until all the processes successfully solved the question given in find function. How can I fix my problem? import multiprocessing import random FIND = 50 MAX_COUNT = 100000 INTERVAL = range(10) def find(process, initial, return_dict): succ = False while succ == False: start=initial while(start <= MAX_COUNT): if(FIND == start): return_dict[process] = f"Found: {process}, start: {initial}" succ = True break; i = random.choice(INTERVAL) start = start + i print(start) processes = [] manager = multiprocessing.Manager() return_code = manager.dict() for i in range(5): process = multiprocessing.Process(target=find, args=(f'computer_{i}', i, return_code)) processes.append(process) process.start() for process in processes: process.join() print(return_code.values()) output can be for example: ['Found: computer_0, start: 0', 'Found: computer_4, start: 4', 'Found: computer_2, start: 2', 'Found: computer_1, start: 1', 'Found: computer_3, start: 3'] But this output shows me the program is waiting until all processes are finished ... | Use an Event to govern if the processes should keep running. Basically, it replaces succ with something that works over all processes. import multiprocessing import random FIND = 50 MAX_COUNT = 1000 def find(process, initial, return_dict, run): while run.is_set(): start = initial while start <= MAX_COUNT: if FIND == start: return_dict[process] = f"Found: {process}, start: {initial}" run.clear() # Stop running. break start += random.randrange(0, 10) print(start) if __name__ == "__main__": processes = [] manager = multiprocessing.Manager() return_code = manager.dict() run = manager.Event() run.set() # We should keep running. for i in range(5): process = multiprocessing.Process( target=find, args=(f"computer_{i}", i, return_code, run) ) processes.append(process) process.start() for process in processes: process.join() print(return_code.values()) Note that using __name__ is mandatory for multiprocessing to work properly when the process-creation method is set to 'spawn' which is the default on ms-windows and macOS but also available on linux. On those systems, the main module is imported into newly created Python processes. This needs to happen without side effects such as starting a process, and the __name__ mechanism ensures that. | 6 | 6 |
71,220,697 | 2022-2-22 | https://stackoverflow.com/questions/71220697/python-dash-plotly-websockets | I'm a n00b with Dash and I'm trying to update a DashTable from websocket feeds. The code appears to work when there aren't too many feeds, but once there are, Chrome starts spamming my server with fetch requests (from dash_update_component) Is there any way to make this more performant ? import dash_bootstrap_components as dbc import dash_core_components as dcc import dash_html_components as html import json import pandas as pd from dash import callback, Dash, dash_table from dash.dependencies import Input, Output, State from dash_extensions import WebSocket symbols = ["BTCUSDT"] columns = ["symbol", "bid_volume", "bid_price", "ask_volume", "ask_price"] def create_data(): data = {} for col in columns: if col == "symbol": data[col] = symbols else: data[col] = [None] * len(symbols) return data df = pd.DataFrame(data=create_data()) # Create example app. app = Dash(prevent_initial_callbacks=True) app.layout = html.Div([ dash_table.DataTable(df.to_dict('records'), [{"name": i, "id": i} for i in df.columns], id='tbl', editable=True), dcc.Input(id="input", autoComplete="off"), html.Div(id="message"), WebSocket(url="wss://fstream.binance.com/ws/", id="ws") ]) # Write to websocket. @app.callback(Output("ws", "send"), [Input("input", "value")]) def send(value): sub_msg = { "method": "SUBSCRIBE", "params": [], "id": 1 } for ins in symbols: sub_msg["params"] += ([f"{ins.lower()}@bookTicker"]) return json.dumps(sub_msg, indent=0) # Read from websocket. @app.callback(Output('tbl', 'data'), [Input("ws", "message")]) def on_feed(message): if "data" not in message: return dash.no_update else: data = json.loads(message["data"]) print(data) symbol = data["s"] row_idx = df.index[df['symbol'] == symbol].tolist()[0] df.loc[row_idx, columns] = [symbol, data["B"], data["b"], data["a"], data["A"]] return df.to_dict('records') if __name__ == '__main__': app.run_server(debug=True) | One thing you can do to improve performance is to convert your callbacks into clientside callbacks: symbols = ["ETHBUSD", "BNBUSDT", "BTCUSDT"] # ... app.clientside_callback( """ function(value) { const symbols = ['ETHBUSD', 'BTCUSDT', 'BNBUSDT'] const subMsg = { 'method': 'SUBSCRIBE', 'params': [], 'id': 1 }; for (const ins of symbols) { subMsg.params.push(`${ins.toLowerCase()}@bookTicker`); } return JSON.stringify(subMsg); } """, Output("ws", "send"), Input("input", "value"), prevent_initial_call=True ) app.clientside_callback( """ function(message, tableData) { if (message === undefined) { return window.dash_clientside.no_update; } data = JSON.parse(message['data']); if (data === undefined) { return window.dash_clientside.no_update; } const newTableData = tableData; row = newTableData.find(row => row.symbol === data['s']); if (row !== undefined) { row['bid_volume'] = data['B']; row['bid_price'] = data['b']; row['ask_volume'] = data['a']; row['ask_price'] = data['A']; } return tableData; } """, Output("tbl", "data"), Input("ws", "message"), State("tbl", "data"), prevent_initial_call=True ) Clientside callbacks execute your code in the client in JavaScript rather than on the server in Python. https://dash.plotly.com/performance If you look at the network tab you can see that no websocket related requests are made anymore now. In this case we're still updating the table for every valid message coming in. 
You might also want to limit this by using a dcc.Interval: dcc.Interval(id="interval", interval=500) # interval fires every 500 ms # ... app.clientside_callback( """ function(nIntervals, message, tableData) { if (message === undefined) { return window.dash_clientside.no_update; } data = JSON.parse(message['data']); if (data === undefined) { return window.dash_clientside.no_update; } const newTableData = tableData; row = newTableData.find(row => row.symbol === data['s']); if (row !== undefined) { row['bid_volume'] = data['B']; row['bid_price'] = data['b']; row['ask_volume'] = data['a']; row['ask_price'] = data['A']; } return tableData; } """, Output("tbl", "data"), Input("interval", "n_intervals"), State("ws", "message"), State("tbl", "data"), prevent_initial_call=True ) You can experiment with adjusting the interval. A lower interval makes the results more "real time"/accurate, but a higher interval means fewer updates need to be performed. | 6 | 0 |
71,203,579 | 2022-2-21 | https://stackoverflow.com/questions/71203579/how-to-return-a-csv-file-pandas-dataframe-in-json-format-using-fastapi | I have a .csv file that I would like to render in a FastAPI app. I only managed to render the .csv file in JSON format as follows: def transform_question_format(csv_file_name): json_file_name = f"{csv_file_name[:-4]}.json" # transforms the csv file into json file pd.read_csv(csv_file_name ,sep=",").to_json(json_file_name) with open(json_file_name, "r") as f: json_data = json.load(f) return json_data @app.get("/questions") def load_questions(): question_json = transform_question_format(question_csv_filename) return question_json When I tried returning directly pd.read_csv(csv_file_name ,sep=",").to_json(json_file_name), it works, as it returns a string. How should I proceed? I believe this is not the good way to do it. | The below shows four different ways of returning the data stored in a .csv file/Pandas DataFrame (for solutions without using Pandas DataFrame, have a look here). Related answers on how to efficiently return a large dataframe can be found here and here as well. Option 1 The first option is to convert the file data into JSON and then parse it into a dict. You can optionally change the orientation of the data using the orient parameter in the .to_json() method. Note: Better not to use this option. See Updates below. from fastapi import FastAPI import pandas as pd import json app = FastAPI() df = pd.read_csv("file.csv") def parse_csv(df): res = df.to_json(orient="records") parsed = json.loads(res) return parsed @app.get("/questions") def load_questions(): return parse_csv(df) Update 1: Using .to_dict() method would be a better option, as it would return a dict directly, instead of converting the DataFrame into JSON (using df.to_json()) and then that JSON string into dict (using json.loads()), as described earlier. Example: @app.get("/questions") def load_questions(): return df.to_dict(orient="records") Update 2: When using .to_dict() method and returning the dict, FastAPI, behind the scenes, automatically converts that return value into JSON using the Python standard json.dumps(), after converting it into JSON-compatible data first, using the jsonable_encoder, and then putting that JSON-compatible data inside of a JSONResponse (see this answer for more details). Thus, to avoid that extra processing, you could still use the .to_json() method, but this time, put the JSON string in a custom Response and return it directly, as shown below: from fastapi import Response @app.get("/questions") def load_questions(): return Response(df.to_json(orient="records"), media_type="application/json") Option 2 Another option is to return the data in string format, using .to_string() method. @app.get("/questions") def load_questions(): return df.to_string() Option 3 You could also return the data as an HTML table, using .to_html() method. from fastapi.responses import HTMLResponse @app.get("/questions") def load_questions(): return HTMLResponse(content=df.to_html(), status_code=200) Option 4 Finally, you can always return the file as is using FastAPI's FileResponse. from fastapi.responses import FileResponse @app.get("/questions") def load_questions(): return FileResponse(path="file.csv", filename="file.csv") | 13 | 13 |
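One more option worth sketching for larger files: send the CSV as a streamed attachment with StreamingResponse. Note that this sketch still builds the CSV text in memory first; the path and endpoint name are illustrative:

```python
import io
import pandas as pd
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()
df = pd.read_csv("file.csv")  # same file as in the question

@app.get("/questions/stream")
def stream_questions():
    buf = io.StringIO()
    df.to_csv(buf, index=False)
    buf.seek(0)  # rewind so the response starts at the beginning
    return StreamingResponse(
        buf,
        media_type="text/csv",
        headers={"Content-Disposition": "attachment; filename=file.csv"},
    )
```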
71,193,095 | 2022-2-20 | https://stackoverflow.com/questions/71193095/questions-on-pyproject-toml-vs-setup-py | Reading up on pyproject.toml, python -m pip install, poetry, flit, etc - I have several questions regarding replacing setup.py with pyproject.toml. My biggest question was - how does a toml file replace a setup.py. Meaning, a toml file can't do everything a py file can. Reading into it, poetry and flit completely replace setup.py with pyproject.toml. While pip uses the pyproject.toml to specify the build tools, but then still uses the setup.py for everything else. A good example is, pip currently doesn't have a way to do entry points for console script directly in a toml file, but poetry and flit do. https://flit.readthedocs.io/en/latest/pyproject_toml.html#scripts-section https://python-poetry.org/docs/pyproject/#scripts My main question right now is; The point of pyproject.toml is to provide build system requirement. It is a metadata file. So wouldn't the ideal solution to be to use this file only to specify the build system requirements and still leverage the setup.py for everything else. I am confused because I feel like we're losing a lot to over come a fairly simple problem. By entirely doing way with the setup.py and replacing it with pyproject.toml, we lose a lot of helpful things we can do in a setup.py. We can't use a __version.py__, and we lose the ability to automatically create a universal wheel and sdist and upload our packages to PyPi using Twine. which we can currently do in the setup.py file. I'm just having a had time wrapping my head around why we would want to completely replace the setup.py with a metadata only file. It seems like using them together is the best of both worlds. We solve the chicken and the egg build system issue, and we get to retain a lot of useful things the setup.py can do. Wouldn't we need a setup.py to install in Dev mode anyway? Or maybe that is just a pip problem? | Currently I am investigating this feature too. I found this experimental feature explanation of setuptools which should just refer to the pyproject.toml without any need of setup.py in the end. Regarding dynamic behavior of setup.py, I figured out that you can set a dynamic behavior for fields under the [project] metadata dynamic = ["version"] [tool.setuptools.dynamic] version = {attr = "my_package.__version__"} whereat the corresponding version in this example is set in, e.g. my_package.__init__.py __version__ = "0.1.0" __all__ = ["__version__"] In the end, I guess that setuptools will cover the missing setup.py execution and places the necessary egg-links for the development mode. | 22 | 13 |
71,168,274 | 2022-2-18 | https://stackoverflow.com/questions/71168274/create-custom-data-type-in-python | Hopefully the title isn't too misleading, I'm not sure the best way to phrase my question. I'm trying to create a (X, Y) coordinate data type in Python. Is there a way to create a "custom data type" so that I have an object with a value, but also some supporting attributes? So far I've made this simple class: class Point: def __init__(self, x, y): self.x = x self.y = y self.tuple = (x, y) Ideally, I'd like to be able to do something like this: >>> p = Point(4, 5) >>> >>> my_x = p.x # can access the `x` attribute with "dot syntax" >>> >>> my_tuple = p # or can access the tuple value directly # without needing to do `.tuple`, as if the `tuple` # attribute is the "default" attribute for the object NOTE I'm not trying to simply display the tuple, I know I can do that with the __repr__ method In a way, I'm trying to create a very simplified numpy.ndarray, because the ndarrays are a datatype that have their own attributes. I tried looking thru the numpy source to see how this is done, but it was way over my head, haha. Any tips would be appreciated! | I am not sure what you want to do with the tuple. p will always be an instance of Point. What you intend to do there won't work. If you just don't want to use the dot notation, you could use a namedtuple or a dataclass instead of a class. Then cast their instances to a tuple using tuple() and astuple(). Using a namedtuple and tuple(): from collections import namedtuple Point = namedtuple("Point", ["x", "y"]) p = Point(4, 5) x = p.x y = p.y xy = p # xy = tuple(p) not necessary since namedtuple is already a tuple Note: namedtuple is immutable, i.e. you can't change x and y. Using a dataclasses.dataclass and dataclasses.astuple(): from dataclasses import dataclass, astuple @dataclass class Point: x: int y: int p = Point(4, 5) x = p.x y = p.y xy = astuple(p) | 8 | 7 |
71,200,479 | 2022-2-21 | https://stackoverflow.com/questions/71200479/plotly-dash-zmqerror-address-already-in-use | I am testing Plotly Dash as a possible dashboarding tool. I am trying to run one of the charts found in the documentation: https://plotly.com/python/bar-charts/ import dash from dash import dcc from dash import html from dash.dependencies import Input, Output import plotly.express as px df = px.data.tips() days = df.day.unique() app = dash.Dash(__name__) app.layout = html.Div([ dcc.Dropdown( id="dropdown", options=[{"label": x, "value": x} for x in days], value=days[0], clearable=False, ), dcc.Graph(id="bar-chart"), ]) @app.callback( Output("bar-chart", "figure"), [Input("dropdown", "value")]) def update_bar_chart(day): mask = df["day"] == day fig = px.bar(df[mask], x="sex", y="total_bill", color="smoker", barmode="group") return fig app.run_server(debug=True, port=8049) When I run this I get an error. Here is the end of the trace callback: File "zmq/backend/cython/checkrc.pxd", line 28, in zmq.backend.cython.checkrc._check_rc zmq.error.ZMQError: Address already in use As you can see from my example, I have already tried altering the port to avoid this error. I have tried many ports around 8050, but they all seem to be "already in use." My guess is that Dash reserves the port then tries to use it but sees that it's already reserved, not knowing that it was reserved for the process it was about to execute. Does anyone know how to fix this error? | If you are running it from jupyter-notebook or jupyter-lab, you should run the app server as: app.run_server(debug=True, port=8049, use_reloader=False) | 5 | 5 |
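If nearby ports keep turning out to be taken, one workaround (separate from the use_reloader fix above) is to let the OS pick a free port; a hedged sketch — note the small race window between closing the probe socket and starting the server:

```python
import socket

def find_free_port() -> int:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))   # port 0 asks the OS for any free port
        return s.getsockname()[1]

port = find_free_port()
print(f"Serving dashboard on port {port}")
app.run_server(debug=True, port=port, use_reloader=False)
```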
71,168,930 | 2022-2-18 | https://stackoverflow.com/questions/71168930/change-pystray-tray-notification-title | I have an issue with finding a way to change the pystray tray notification title. It appears that it's taking a default value of "Python" from somewhere. See the image below: In the documentation, there are no additional parameters to change the notification icon title. How can I find a way to change the title value to something that I want? Here is a working code example: from tkinter import * from pystray import MenuItem as item from PIL import Image, ImageTk from res import * #here is my base64 encoded icon. Variable icon_base64. from base64 import b64decode import pystray import base64 pic=ImageTk.BytesIO(icon_base64) #transfering base64 to bytes def run_icon(): #image = Image.open("icon.ico") #uncomment this to use a standard image, isntead of base64. title="Tray title" image=Image.open(pic) #comment this if using standard way of image menu = (item('test1', lambda: show(),default = True), item('Exit', lambda: exit())) global icon icon = pystray.Icon("name", image, title, menu) icon.run() def show_notification(text): icon.notify(text,"My test notification sub title") def show(): print("show") def show(): print("exit") run_icon() sleep(3) show_notification("test") Update: An idea just came to my head - perhaps this "Python" is being taken from the project name or program name, etc. Should I search or add code related to naming parameters (on a Win10 OS)? | Python is an interpreted language, which means that it executes code line by line rather than compiling the entire program into a standalone executable. This means that your program does not have a standalone existence until you compile it. In a Windows environment, the commands you have written are executed by python.exe. To answer your question, in Windows, the title of each notification comes from the value of the File description property. In your case, it is "Python" as shown below:" Given this, you need to turn your code into a standalone executable file and fill in some property values. This can be done in two steps: STEP 1 Create a VSVersionInfo file (e.g.: version_info.rs), with the following indicative content: VSVersionInfo( ffi=FixedFileInfo( OS=0x4, fileType=0x1, ), kids=[ StringFileInfo( [ StringTable( u'040904B0', [ StringStruct(u'FileDescription', u'Tray Application'), StringStruct(u'InternalName', u'trayapplication'), StringStruct(u'LegalCopyright', u'Copyright (c) Andreas Violaris'), StringStruct(u'OriginalFilename', u'trayapplication.exe'), StringStruct(u'ProductName', u'trayapplication'), StringStruct(u'ProductVersion', u'1.0')]) ] ), VarFileInfo([VarStruct(u'Translation', [1033, 1200])]) ] ) TL;DR: The VSVersionInfo structure is used to store version information for a Windows executable file. The structure consists of two parts. The "ffi" part is a FixedFileInfo structure, which stores general information about the file, such as the file type, operating system version, and other attributes. The "kids" part is a list of sub-structures that store more specific version information. The "ffi" part of the VSVersionInfo structure contains a FixedFileInfo structure. The "OS" property specifies the operating system version for which the file was designed. The value 0x4 corresponds to the Windows NT operating system. The "fileType" property specifies the type of file. The value 0x1 corresponds to an application. 
The "kids" part of the VSVersionInfo structure contains a list with two elements: a StringFileInfo structure and a VarFileInfo structure. The StringFileInfo structure contains a list of StringStruct structures that are self-explanatory. The VarFileInfo structure is used to store information about the language and character set of the file. It consists of a single VarStruct structure with the property "Translation" and the value [1033, 1200], which corresponds to the English (US) language and the Unicode character set. STEP 2 Turn your program into a standalone executable using a tool like PyInstaller. To use PyInstaller, you first need to install it using a package installer like pip: pip install pyinstaller Then, you can use the following PyInstaller command to package your program into an executable and set its version information using the version_info.rs file of the first step: pyinstaller --onefile main.py --version-file version_info.rs RESULT After running the executable (located in the dist directory), you will find that the notification title now has the value you assigned to the FileDescription property in the first step. | 6 | 7 |
71,175,293 | 2022-2-18 | https://stackoverflow.com/questions/71175293/make-built-in-lru-cache-skip-caching-when-function-returns-none | Here's a simplified function for which I'm trying to add a lru_cache for - from functools import lru_cache, wraps @lru_cache(maxsize=1000) def validate_token(token): if token % 3: return None return True for x in range(1000): validate_token(x) print(validate_token.cache_info()) outputs - CacheInfo(hits=0, misses=1000, maxsize=1000, currsize=1000) As we can see, it would also cache args and returned values for the None returns as well. In above example, I want the cache_size to be 334, where we are returning non-None values. In my case, my function having large no. of args might return a different value if previous value was None. So I want to avoid caching the None values. I want to avoid reinventing the wheel and implementing a lru_cache again from scratch. Is there any good way to do this? Here are some of my attempts - 1. Trying to implement own cache (which is non-lru here) - from functools import wraps # global cache object MY_CACHE = {} def get_func_hash(func): # generates unique key for a function. TODO: fix what if function gets redefined? return func.__module__ + '|' + func.__name__ def my_lru_cache(func): name = get_func_hash(func) if not name in MY_CACHE: MY_CACHE[name] = {} @wraps(func) def function_wrapper(*args, **kwargs): if tuple(args) in MY_CACHE[name]: return MY_CACHE[name][tuple(args)] value = func(*args, **kwargs) if value is not None: MY_CACHE[name][tuple(args)] = value return value return function_wrapper @my_lru_cache def validate_token(token): if token % 3: return None return True for x in range(1000): validate_token(x) print(get_func_hash(validate_token)) print(len(MY_CACHE[get_func_hash(validate_token)])) outputs - __main__|validate_token 334 2. I realised that the lru_cache doesn't do caching when an exception is raised within the wrapped function - from functools import wraps, lru_cache def my_lru_cache(func): @wraps(func) @lru_cache(maxsize=1000) def function_wrapper(*args, **kwargs): value = func(*args, **kwargs) if value is None: # TODO: change this to a custom exception raise KeyError return value return function_wrapper def handle_exception(func): @wraps(func) def function_wrapper(*args, **kwargs): try: value = func(*args, **kwargs) return value except KeyError: return None return function_wrapper @handle_exception @my_lru_cache def validate_token(token): if token % 3: return None return True for x in range(1000): validate_token(x) print(validate_token.__wrapped__.cache_info()) outputs - CacheInfo(hits=0, misses=334, maxsize=1000, currsize=334) Above correctly caches only the 334 values, but needs wrapping the function twice and accessing the cache_info in a weird manner func.__wrapped__.cache_info(). How do I better achieve the behaviour of not caching when None(or specific) values are returned using built-in lru_cache decorator in a pythonic way? 
| You are missing the two lines marked here: def handle_exception(func): @wraps(func) def function_wrapper(*args, **kwargs): try: value = func(*args, **kwargs) return value except KeyError: return None function_wrapper.cache_info = func.cache_info # Add this function_wrapper.cache_clear = func.cache_clear # Add this return function_wrapper You can do both wrappers in one function: def my_lru_cache(maxsize=128, typed=False): class CustomException(Exception): pass def decorator(func): @lru_cache(maxsize=maxsize, typed=typed) def raise_exception_wrapper(*args, **kwargs): value = func(*args, **kwargs) if value is None: raise CustomException return value @wraps(func) def handle_exception_wrapper(*args, **kwargs): try: return raise_exception_wrapper(*args, **kwargs) except CustomException: return None handle_exception_wrapper.cache_info = raise_exception_wrapper.cache_info handle_exception_wrapper.cache_clear = raise_exception_wrapper.cache_clear return handle_exception_wrapper if callable(maxsize): user_function, maxsize = maxsize, 128 return decorator(user_function) return decorator | 9 | 5 |
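A short usage sketch of the combined my_lru_cache decorator defined in the answer above (not self-contained: it assumes that decorator is already in scope), checking that only non-None results are cached and that cache_info() is reachable directly:

```python
# Reuses the my_lru_cache decorator defined in the answer above.
@my_lru_cache(maxsize=1000)
def validate_token(token):
    if token % 3:
        return None
    return True

for x in range(1000):
    validate_token(x)

# Expect roughly misses=1000 and currsize=334: only non-None results are kept,
# and cache_info()/cache_clear() are now attached to the wrapped function.
print(validate_token.cache_info())
validate_token.cache_clear()
```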
71,196,661 | 2022-2-20 | https://stackoverflow.com/questions/71196661/what-is-the-equivalent-of-dataframe-drop-duplicates-from-pandas-in-polars | What is the equivalent of drop_duplicates() from pandas in polars? import polars as pl df = pl.DataFrame({"a":[1,1,2], "b":[2,2,3], "c":[1,2,3]}) df Output: shape: (3, 3) ┌─────┬─────┬─────┐ │ a ┆ b ┆ c │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╡ │ 1 ┆ 2 ┆ 1 │ │ 1 ┆ 2 ┆ 2 │ │ 2 ┆ 3 ┆ 3 │ └─────┴─────┴─────┘ Code: df.drop_duplicates(["a", "b"]) Delivers the following error: # AttributeError: 'DataFrame' object has no attribute 'drop_duplicates' | The right function name is .unique() import polars as pl df = pl.DataFrame({"a":[1,1,2], "b":[2,2,3], "c":[1,2,3]}) df.unique(subset=["a","b"]) And this delivers the right output: shape: (2, 3) ┌─────┬─────┬─────┐ │ a ┆ b ┆ c │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╡ │ 1 ┆ 2 ┆ 1 │ ├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┤ │ 2 ┆ 3 ┆ 3 │ └─────┴─────┴─────┘ | 41 | 53 |
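For closer parity with pandas' drop_duplicates(keep=...), unique also accepts keep and maintain_order arguments in recent Polars versions (a sketch; exact argument availability depends on the installed version):

```python
import polars as pl

df = pl.DataFrame({"a": [1, 1, 2], "b": [2, 2, 3], "c": [1, 2, 3]})

# keep the first row of each (a, b) group and preserve the input row order
deduped = df.unique(subset=["a", "b"], keep="first", maintain_order=True)
print(deduped)
```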
71,205,417 | 2022-2-21 | https://stackoverflow.com/questions/71205417/what-to-do-when-pip-dependency-resolver-wants-to-use-conflicting-django-plotly-d | So I'm trying to integrate plotly with my django app however I'm having an issue rendering a chart. I was using VSCode which did not pick up the dependency conflict. However when i started to use Pycharm. It said my Dash was version 1.11 which satisfies the django-plotly-dash but did not satisfy the dash_bootstrap_components which required 2.0.0 I have now installed Dash version 1.10 which conflicts with both apps just to show the error message below: Relevant error code ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following de pendency conflicts. django-plotly-dash 1.6.6 requires dash<1.21.0,>=1.11, but you have dash 1.10.0 which is incompatible. dash-bootstrap-components 1.0.3 requires dash>=2.0.0, but you have dash 1.10.0 which is incompatible. Any help is appreciated Answer As django-plotly-dash is on the latest version, i've decided to install dash 1.20 and downgrade by dash-bootstrap-components to 0.13.0 (https://github.com/facultyai/dash-bootstrap-components/releases?page=2) This has worked like a charm.. weirdly - Pycharm has a reference error for the imports but visual studio code does not show any error and my program/script works perfectly. The pycharm import issue may be due to a setting in pycharm? idk | As django-plotly-dash is on the latest version, i've decided to install dash 1.20 and downgrade by dash-bootstrap-components to 0.13.0 (https://github.com/facultyai/dash-bootstrap-components/releases?page=2) This has worked like a charm.. weirdly - Pycharm has a reference error for the imports but visual studio code does not show any error and my program/script works perfectly. The pycharm import issue may be due to a setting in pycharm? idk | 10 | 1 |
71,187,944 | 2022-2-19 | https://stackoverflow.com/questions/71187944/dlopen-libcrypt-so-1-cannot-open-shared-object-file-no-such-file-or-directory | I use EndeavourOS and updated my system on February 17, 2022 using sudo pacman -Syu Ever since, when I run docker-compose, I get this error message: [4221] Error loading Python lib '/tmp/_MEIgGJQGW/libpython3.7m.so.1.0': dlopen: libcrypt.so.1: cannot open shared object file: No such file or directory Some forum threads suggested reinstalling docker-compose, which I did. I also tried the following solution, but without success: Python3.7: error while loading shared libraries: libpython3.7m.so.1.0 How can I resolve this issue? | The underlying issue here is that you use docker-compose instead of docker compose, which are two different binaries. docker-compose is also known as V1, and has been deprecated since April 26, 2022. Since then, it does not receive updates or patches, other than high-severity security patches. So, to fix your issue, use docker compose instead of docker-compose. If you compare docker compose version and docker-compose version, you will see that the former uses the newer docker compose and runs without an issue. | 32 | 3 |
71,220,825 | 2022-2-22 | https://stackoverflow.com/questions/71220825/what-is-the-difference-between-subprocess-run-subprocess-check-output | I am trying to send two simple commands using subprocess.run and to store the results in a variable, then print them, but for one arg the output comes back from subprocess.run and for the other it is empty. The args are "help" and "adb devices", both commands that return output. result = subprocess.run("help", capture_output=True, text=True, universal_newlines=True) print(result.stdout) but the same call with the other arg does not return anything result = subprocess.run("adb devices", capture_output=True, text=True, universal_newlines=True) print(result.stdout) If I try the same command with subprocess.check_output, it returns the output. Can anyone explain what exactly is going on here? Are there specific usage scenarios for these commands, i.e. when to use which one? c = subprocess.check_output( "adb devices", shell=True, stderr=subprocess.STDOUT) print(c) output - b'List of devices attached\r\n\r\n' | It is because, per the Python documentation of the run method, run accepts its first parameter as a sequence of arguments and not as a single string. So you can try passing the arguments in a list as: result = subprocess.run(['adb', 'devices'], capture_output=True, text=True, universal_newlines=True) Also, check_output accepts args too, but in your snippet it was given the parameter "shell = True"; therefore, it works for the multi-word string. If you want to use the run method without a list, add shell=True to the run call as well. (I tried it for the "man ls" command and it worked.) | 13 | 8 |
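To make the comparison concrete, here is a small sketch of the two equivalent ways to run a multi-word command with subprocess.run (it assumes adb is on PATH):

```python
import subprocess

# Preferred: pass the program and its arguments as a list (no shell involved)
result = subprocess.run(
    ["adb", "devices"], capture_output=True, text=True, check=True
)
print(result.stdout)

# Equivalent: pass a single string, but then a shell must parse it
result = subprocess.run(
    "adb devices", shell=True, capture_output=True, text=True, check=True
)
print(result.stdout)
```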
71,183,960 | 2022-2-19 | https://stackoverflow.com/questions/71183960/short-way-to-get-all-field-names-of-a-pydantic-class | Minimal example of the class: from pydantic import BaseModel class AdaptedModel(BaseModel): def get_all_fields(self, alias=False): return list(self.schema(by_alias=alias).get("properties").keys()) class TestClass(AdaptedModel): test: str The way it works: dm.TestClass.get_all_fields(dm.TestClass) Is there a way to make it work without giving the class again? Desired way to get all field names: dm.TestClass.get_all_fields() It would also work if the field names are assigned to an attribute. Just any way to make it make it more readable | What about just using __fields__: from pydantic import BaseModel class AdaptedModel(BaseModel): parent_attr: str class TestClass(AdaptedModel): child_attr: str TestClass.__fields__ Output: {'parent_attr': ModelField(name='parent_attr', type=str, required=True), 'child_attr': ModelField(name='child_attr', type=str, required=True)} This is just a dict and you could get only the field names simply by: TestClass.__fields__.keys() See model properties: https://pydantic-docs.helpmanual.io/usage/models/#model-properties | 38 | 60 |
71,162,459 | 2022-2-17 | https://stackoverflow.com/questions/71162459/why-does-anaconda-install-pytorch-cpuonly-when-i-install-cuda | I have created a Python 3.7 conda virtual environment and installed the following packages using this command: conda install pytorch torchvision torchaudio cudatoolkit=11.3 matplotlib scipy opencv -c pytorch They install fine, but then when I come to run my program I get the following error which suggests that a CUDA enabled device is not found: raise RuntimeError('Attempting to deserialize object on a CUDA ' RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU. I have an NVIDIA RTX 3060ti GPU, which as far as I am aware is cuda enabled, but whenever I go into the Python interactive shell within my conda environment I get False when evaluating torch.cuda.is_available() suggesting that perhaps CUDA is not installed properly or is not found. When I then perform a conda list to view my installed packages: # packages in environment at /home/user/anaconda3/envs/FGVC: # # Name Version Build Channel _libgcc_mutex 0.1 main _openmp_mutex 4.5 1_gnu blas 1.0 mkl brotli 1.0.9 he6710b0_2 bzip2 1.0.8 h7b6447c_0 ca-certificates 2021.10.26 h06a4308_2 cairo 1.16.0 hf32fb01_1 certifi 2021.10.8 py37h06a4308_2 cpuonly 1.0 0 pytorch cudatoolkit 11.3.1 h2bc3f7f_2 cycler 0.11.0 pyhd3eb1b0_0 dbus 1.13.18 hb2f20db_0 expat 2.4.4 h295c915_0 ffmpeg 4.0 hcdf2ecd_0 fontconfig 2.13.1 h6c09931_0 fonttools 4.25.0 pyhd3eb1b0_0 freeglut 3.0.0 hf484d3e_5 freetype 2.11.0 h70c0345_0 giflib 5.2.1 h7b6447c_0 glib 2.69.1 h4ff587b_1 graphite2 1.3.14 h23475e2_0 gst-plugins-base 1.14.0 h8213a91_2 gstreamer 1.14.0 h28cd5cc_2 harfbuzz 1.8.8 hffaf4a1_0 hdf5 1.10.2 hba1933b_1 icu 58.2 he6710b0_3 imageio 2.16.0 pypi_0 pypi imageio-ffmpeg 0.4.5 pypi_0 pypi imutils 0.5.4 pypi_0 pypi intel-openmp 2021.4.0 h06a4308_3561 jasper 2.0.14 hd8c5072_2 jpeg 9d h7f8727e_0 kiwisolver 1.3.2 py37h295c915_0 lcms2 2.12 h3be6417_0 ld_impl_linux-64 2.35.1 h7274673_9 libffi 3.3 he6710b0_2 libgcc-ng 9.3.0 h5101ec6_17 libgfortran-ng 7.5.0 ha8ba4b0_17 libgfortran4 7.5.0 ha8ba4b0_17 libglu 9.0.0 hf484d3e_1 libgomp 9.3.0 h5101ec6_17 libopencv 3.4.2 hb342d67_1 libopus 1.3.1 h7b6447c_0 libpng 1.6.37 hbc83047_0 libstdcxx-ng 9.3.0 hd4cf53a_17 libtiff 4.2.0 h85742a9_0 libuuid 1.0.3 h7f8727e_2 libuv 1.40.0 h7b6447c_0 libvpx 1.7.0 h439df22_0 libwebp 1.2.0 h89dd481_0 libwebp-base 1.2.0 h27cfd23_0 libxcb 1.14 h7b6447c_0 libxml2 2.9.12 h03d6c58_0 lz4-c 1.9.3 h295c915_1 matplotlib 3.5.0 py37h06a4308_0 matplotlib-base 3.5.0 py37h3ed280b_0 mkl 2021.4.0 h06a4308_640 mkl-service 2.4.0 py37h7f8727e_0 mkl_fft 1.3.1 py37hd3c417c_0 mkl_random 1.2.2 py37h51133e4_0 munkres 1.1.4 py_0 ncurses 6.3 h7f8727e_2 networkx 2.6.3 pypi_0 pypi ninja 1.10.2 py37hd09550d_3 numpy 1.21.2 py37h20f2e39_0 numpy-base 1.21.2 py37h79a1101_0 olefile 0.46 py37_0 opencv 3.4.2 py37h6fd60c2_1 openssl 1.1.1m h7f8727e_0 packaging 21.3 pyhd3eb1b0_0 pcre 8.45 h295c915_0 pillow 8.4.0 py37h5aabda8_0 pip 21.2.2 py37h06a4308_0 pixman 0.40.0 h7f8727e_1 py-opencv 3.4.2 py37hb342d67_1 pyparsing 3.0.4 pyhd3eb1b0_0 pyqt 5.9.2 py37h05f1152_2 python 3.7.11 h12debd9_0 python-dateutil 2.8.2 pyhd3eb1b0_0 pytorch 1.7.0 py3.7_cpu_0 [cpuonly] pytorch pywavelets 1.2.0 pypi_0 pypi qt 5.9.7 h5867ecd_1 readline 8.1.2 h7f8727e_1 scikit-image 0.19.1 pypi_0 pypi scipy 1.7.3 py37hc147768_0 setuptools 58.0.4 py37h06a4308_0 
sip 4.19.8 py37hf484d3e_0 six 1.16.0 pyhd3eb1b0_1 sqlite 3.37.2 hc218d9a_0 tifffile 2021.11.2 pypi_0 pypi tk 8.6.11 h1ccaba5_0 torchaudio 0.7.0 py37 pytorch torchvision 0.8.1 py37_cpu [cpuonly] pytorch tornado 6.1 py37h27cfd23_0 typing_extensions 3.10.0.2 pyh06a4308_0 wheel 0.37.1 pyhd3eb1b0_0 xz 5.2.5 h7b6447c_0 zlib 1.2.11 h7f8727e_4 zstd 1.4.9 haebb681_0 There seem to be a lot of entries saying cpuonly, but I am not sure how they came about, since I did not install them. I am running Ubuntu version 20.04.4 LTS | I believe I had the following things wrong that prevented me from using CUDA. Despite having CUDA installed, the nvcc --version command indicated that CUDA was not installed, so what I did was add it to the path using this answer. Despite doing that and deleting my original conda environment and using the conda install pytorch torchvision torchaudio cudatoolkit=11.3 matplotlib scipy opencv -c pytorch command again, I still got False when evaluating torch.cuda.is_available(). I then used this command conda install pytorch torchvision torchaudio cudatoolkit=10.2 matplotlib scipy opencv -c pytorch changing cudatoolkit from version 11.3 to version 10.2, and then it worked! Now torch.cuda.is_available() evaluates to True Unfortunately, CUDA version 10.2 was incompatible with my RTX 3060 GPU (and I'm assuming it is not compatible with the other RTX 3000 cards either). CUDA version 11.0 was giving me errors and CUDA version 11.3 only installs the CPU-only versions for some reason. CUDA version 11.1 worked perfectly though! This is the command I used to get it to work in the end: pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html | 11 | 5 |
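A quick sanity-check snippet that helps in situations like the one above: it shows whether the installed torch build was compiled with CUDA at all (torch.version.cuda is None for CPU-only builds) before checking runtime availability:

```python
import torch

print("torch version:", torch.__version__)
print("built with CUDA:", torch.version.cuda)       # None means a CPU-only build
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```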
71,144,242 | 2022-2-16 | https://stackoverflow.com/questions/71144242/which-arguments-is-futurewarning-use-of-kwargs-is-deprecated-use-engine-kwa | I got the following waring from that code: file = r'.\changed_activities.xlsx' with pd.ExcelWriter(file, engine='openpyxl', mode='a', if_sheet_exists='new') as writer: df.to_excel(writer, sheet_name=activity[0:30]) FutureWarning: Use of **kwargs is deprecated, use engine_kwargs instead. with pd.ExcelWriter(file, What I can't seem to read from the documentation is which of my arguments I need to replace? the examples are e.g. this, but I don't see how that translates to my code. with pd.ExcelWriter( "path_to_file.xlsx", engine="openpyxl", mode="a", engine_kwargs={"keep_vba": True} ) as writer: df.to_excel(writer, sheet_name="Sheet2") And maybe in a more open question: how/where can I understand something like that that in a general case? | Use pd.ExcelWriter('out.xlsx', engine='xlsxwriter', engine_kwargs={'options':{'strings_to_urls': False}}) Instead of pd.ExcelWriter('out.xlsx', engine='xlsxwriter', options={'strings_to_urls': False}}) | 6 | 11 |
71,194,918 | 2022-2-20 | https://stackoverflow.com/questions/71194918/when-i-use-docker-compose-to-install-a-fastapi-project-i-got-assertionerror | when I use docker-compose to install a fastapi project, I got AssertionError: jinja2 must be installed to use Jinja2Templates but when I use env to install it, that will be run well. my OS: Ubuntu18.04STL my requirements.txt: fastapi~=0.68.2 starlette==0.14.2 pydantic~=1.8.1 uvicorn~=0.12.3 SQLAlchemy~=1.4.23 # WSGI Werkzeug==1.0.1 pyjwt~=1.7.0 # async-exit-stack~=1.0.1 # async-generator~=1.10 jinja2~=2.11.2 # assert aiofiles is not None, "'aiofiles' must be installed to use FileResponse" aiofiles~=0.6.0 python-multipart~=0.0.5 requests~=2.25.0 pyyaml~=5.3.1 # html-builder==0.0.6 loguru~=0.5.3 apscheduler==3.7.0 pytest~=6.1.2 html2text==2020.1.16 mkdocs==1.2.1 Dockerfile FROM python:3.8 ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1 WORKDIR /server COPY requirements.txt /server/ RUN pip install -r requirements.txt COPY . /server/ docker-compose.yml version: '3.7' services: figbox_api: build: context: . dockerfile: Dockerfile command: uvicorn app.main:app --port 8773 --host 0.0.0.0 --reload volumes: - .:/server ports: - 8773:8773 Do I need to provide some other information? Thanks | I had a same problem on heroku, the error comes from Jinja2 version 2.11.x and it run locally but not in Heroku. Just install latest version of jinja2 it will work fine in your case too. pip install Jinja2==3.1.2 or pip install Jinja2 --upgrade | 9 | 3 |
71,164,259 | 2022-2-17 | https://stackoverflow.com/questions/71164259/tensorflow-augmentation-layers-not-working-after-importing-from-tf-keras-applica | I am currently using a model from tf.keras.applications for training, along with a data augmentation layer. Weirdly, after I import the model from applications, the augmentation layer does not work. The augmentation layer does work before I import it. What is going on? Also, this only started happening recently, after the new version TF 2.8.0 was released. Before, it was working fine. The code for the augmentation layer is data_augmentation = tf.keras.Sequential([ tf.keras.layers.RandomFlip("horizontal_and_vertical"), tf.keras.layers.RandomRotation(0.5), ]) And I am importing the model using base_model = tf.keras.applications.MobileNetV3Small( input_shape=(75, 50, 3), alpha=1.0, weights='imagenet', pooling='avg', include_top=False, dropout_rate=0.1, include_preprocessing=False) Please help me understand what is going on. You can reproduce the code here on this notebook https://colab.research.google.com/drive/13Jd3l2CxbvIWQv3Y7CtryOdrv2IdKNxD?usp=sharing | I noticed the same issue with TF 2.8. It can be solved by adding training=True when you test the augmentation layer: aug = data_augmentation(image, training=True) The reason is that the augmentation layer behaves differently during training and prediction (inference), i.e. it applies augmentation in training but does nothing in prediction. Ideally, the layer should set the training= argument smartly according to the situation. Apparently, it is not smart in the above code: it does not know your intention is to test the layer. But I think you should still leave the training argument at its default when you build the full model, letting the augmentation layer do its job. | 6 | 12 |
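Following the answer's advice to leave training at its default inside the full model, here is a hedged sketch of wiring the augmentation layer in front of the imported base model; the frozen base, sigmoid head and shapes are illustrative assumptions, not part of the original question:

```python
import tensorflow as tf

data_augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.5),
])

base_model = tf.keras.applications.MobileNetV3Small(
    input_shape=(75, 50, 3), include_top=False, pooling="avg",
    weights="imagenet", include_preprocessing=False)
base_model.trainable = False  # illustrative: freeze for transfer learning

inputs = tf.keras.Input(shape=(75, 50, 3))
x = data_augmentation(inputs)        # active only while the model is training
x = base_model(x, training=False)    # keep batch-norm statistics frozen
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # illustrative head
model = tf.keras.Model(inputs, outputs)
model.summary()
```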
71,226,654 | 2022-2-22 | https://stackoverflow.com/questions/71226654/how-can-i-use-groupby-with-multiple-values-in-a-column-in-pandas | I've a dataframe like as follows, import pandas as pd data = { 'brand': ['Mercedes', 'Renault', 'Ford', 'Mercedes', 'Mercedes', 'Mercedes', 'Renault'], 'model': ['X', 'Y', 'Z', 'X', 'X', 'X', 'Q'], 'year': [2011, 2010, 2009, 2010, 2012, 2020, 2011], 'price': [None, 1000.4, 2000.3, 1000.0, 1100.3, 3000.5, None] } df = pd.DataFrame(data) print(df) brand model year price 0 Mercedes X 2011 NaN 1 Renault Y 2010 1000.4 2 Ford Z 2009 2000.3 3 Mercedes X 2010 1000.0 4 Mercedes X 2012 1100.3 5 Mercedes X 2020 3000.5 6 Renault Q 2011 NaN And here is the another case to test your solution, data = { 'brand': ['Mercedes', 'Mercedes', 'Mercedes', 'Mercedes', 'Mercedes'], 'model': ['X', 'X', 'X', 'X', 'X'], 'year': [2017, 2018, 2018, 2019, 2019], 'price': [None, None, None, 1000.0, 1200.50] } Expected output, brand model year price 0 Mercedes X 2017 NaN 1 Mercedes X 2018 1100.25 2 Mercedes X 2018 1100.25 3 Mercedes X 2019 1000.00 4 Mercedes X 2019 1200.50 I want to fill the missing values with the average of the observations containing year-1, year and year+1 and also same brand and model. For instance, Mercedes X model has a null price in 2011. When I look at the data, 2011 - 1 = 2010 2011 + 1 = 2012 The 4th observation -> Mercedes,X,2010,1000.0 The 5th observation -> Mercedes,X,2012,1100.3 The mean -> (1000.0 + 1100.3) / 2 = 1050.15 I've tried something as follows, for c_key, _ in df.groupby(['brand', 'model', 'year']): fc = ( (df['brand'] == c_key[0]) & (df['model'] == c_key[1]) & (df['year'].isin([c_key[2] + 1, c_key[2], c_key[2] - 1])) ) sc = ( (df['brand'] == c_key[0]) & (df['model'] == c_key[1]) & (df['year'] == c_key[2]) & (df['price'].isnull()) ) mean_val = df[fc]['price'].mean() df.loc[sc, 'price'] = mean_val print(df) brand model year price 0 Mercedes X 2011 1050.15 1 Renault Y 2010 1000.40 2 Ford Z 2009 2000.30 3 Mercedes X 2010 1000.00 4 Mercedes X 2012 1100.30 5 Mercedes X 2020 3000.50 6 Renault Q 2011 NaN But this solution takes a long time for 90,000 rows and 27 columns so, is there a more effective solution? For instance, can I use groupby for the values year-1, year, year+1, brand and model? Thanks in advance. | def fill_it(x): return df[(df.brand==df.iat[x,0])&(df.model==df.iat[x,1])&((df.year==df.iat[x,2]-1)|(df.year==df.iat[x,2]+1))].price.mean() df = df.apply(lambda x: x.fillna(fill_it(x.name)), axis=1) df Output 1: brand model year price 0 Mercedes X 2011 1050.15 1 Renault Y 2010 1000.40 2 Ford Z 2009 2000.30 3 Mercedes X 2010 1000.00 4 Mercedes X 2012 1100.30 5 Mercedes X 2020 3000.50 6 Renault Q 2011 NaN Output 2: brand model year price 0 Mercedes X 2017 NaN 1 Mercedes X 2018 1100.25 2 Mercedes X 2018 1100.25 3 Mercedes X 2019 1000.00 4 Mercedes X 2019 1200.50 This is 3x Faster df.loc[df.price.isna(), 'price'] = df[df.price.isna()].apply(lambda x: x.fillna(fill_it(x.name)), axis=1) I tried with another approach, using pd.rolling and it is way faster (on the dataframe with 70k rows runs in 200ms). The outputs are still as you wanted them. df.year = pd.to_datetime(df.year, format='%Y') df.sort_values('year', inplace=True) df.groupby(['brand', 'model']).apply(lambda x: x.fillna(x.rolling('1095D',on='year', center=True).mean())).sort_index() | 5 | 1 |
71,221,412 | 2022-2-22 | https://stackoverflow.com/questions/71221412/dag-run-not-found-when-unit-testing-a-custom-operator-in-airflow | I've written a custom operator (DataCleaningOperator), which corrects JSON data based on a provided schema. The unit tests previously worked when I didn't have to instatiate a TaskInstance and provide the operator with a context. However, I've updated the operator recently to take in a context (so that it can use xcom_push). Here is an example of one of the tests: DEFAULT_DATE = datetime.today() class TestDataCleaningOperator(unittest.TestCase): """ Class to execute unit tests for the operator 'DataCleaningOperator'. """ def setUp(self) -> None: super().setUp() self.dag = DAG( dag_id="test_dag_data_cleaning", schedule_interval=None, default_args={ "owner": "airflow", "start_date": DEFAULT_DATE, "output_to_xcom": True, }, ) self._initialise_test_data() def _initialize_test_data() -> None: # Test data set here as class variables such as self.test_data_correct ... def test_operator_cleans_dataset_which_matches_schema(self) -> None: """ Test: Attempt to clean a dataset which matches the provided schema. Verification: Returns the original dataset, unchanged. """ task = DataCleaningOperator( task_id="test_operator_cleans_dataset_which_matches_schema", schema_fields=self.test_schema_nest, data_file_object=deepcopy(self.test_data_correct), dag=self.dag, ) ti = TaskInstance(task=task, execution_date=DEFAULT_DATE) result: List[dict] = task.execute(ti.get_template_context()) self.assertEqual(result, self.test_data_correct) However, when the tests are run, the following error is raised: airflow.exceptions.DagRunNotFound: DagRun for 'test_dag_data_cleaning' with date 2022-02-22 12:09:51.538954+00:00 not found This is related to the line in which a task instance is instantiated in test_operator_cleans_dataset_which_matches_schema. Why can't Airflow locate the test_dag_data_cleaning DAG? Is there a specific configuration I've missed? Do I need to also create a DAG run instance or add the DAG to the dag bag manually if this test dag is outide of my standard DAG directory? All normal (non-test) dags in my dag dir run correctly. In case it helps, my current Airflow version is 2.2.3 and the structure of my project is: airflow ├─ dags ├─ plugins | ├─ ... | └─ operators | ├─ ... | └─ data_cleaning_operator.py | └─ tests ├─ ... └─ operators └─ test_data_cleaning_operator.py | The code have written is using Airflow 2.0 format of unit test. So when you upgraded to Airflow 2.2.3, the unit test requires you to create a dagrun before you create a test run. 
Below is the sample code which worked for me: import unittest import pendulum from airflow import DAG from airflow.utils.state import DagRunState from airflow.utils.types import DagRunType from operators.test_operator import EvenNumberCheckOperator DEFAULT_DATE = pendulum.datetime(2022, 3, 4, tz='America/Toronto') TEST_DAG_ID = "my_custom_operator_dag" TEST_TASK_ID = "my_custom_operator_task" class TestEvenNumberCheckOperator(unittest.TestCase): def setUp(self): super().setUp() self.dag = DAG('test_dag4', default_args={'owner': 'airflow', 'start_date': DEFAULT_DATE}) self.even = 10 self.odd = 11 EvenNumberCheckOperator( task_id=TEST_TASK_ID, my_operator_param=self.even, dag=self.dag ) def test_even(self): """Tests that the EvenNumberCheckOperator returns True for 10.""" dagrun = self.dag.create_dagrun(state=DagRunState.RUNNING, execution_date=DEFAULT_DATE, #data_interval=DEFAULT_DATE, start_date=DEFAULT_DATE, run_type=DagRunType.MANUAL) ti = dagrun.get_task_instance(task_id=TEST_TASK_ID) ti.task = self.dag.get_task(task_id=TEST_TASK_ID) result = ti.task.execute(ti.get_template_context()) assert result is True | 7 | 7 |
71,178,416 | 2022-2-18 | https://stackoverflow.com/questions/71178416/can-you-safely-change-a-python-objects-type-in-a-c-extension | Question Suppose that I have implemented two Python types using the C extension API and that the types are identical (same data layouts/C struct) with the exception of their names and a few methods. Assuming that all methods respect the data layout, can you safely change the type of an object from one of these types into the other in a C function? Notably, as of Python 3.9, there appears to be a function Py_SET_TYPE, but the documentation is not clear as to whether/when this is safe to do. I'm interested in knowing both how to use this function safely and whether types can be safely changed prior to version 3.9. Motivation I'm writing a Python C extension to implement a Persistent Hash Array Mapped Trie (PHAMT); in case it's useful, the source code is here (as of writing, it is at this commit). A feature I would like to add is the ability to create a Transient Hash Array Mapped Trie (THAMT) from a PHAMT. THAMTs can be created from PHAMTs in O(1) time and can be mutated in-place efficiently. Critically, THAMTs have the exact same underlying C data-structure as PHAMTs—the only real difference between a PHAMT and a THAMT is a few methods encapsulated by their Python types. This common structure allows one to very efficiently turn a THAMT back into a PHAMT once one has finished performing a set of edits. (This pattern typically reduces the number of memory allocations when performing a large number of updates to a PHAMT). A very convenient way to implement the conversion from THAMT to PHAMT would be to simply change the type pointers of the THAMT objects from the THAMT type to the PHAMT type. I am confident that I can write code that safely navigates this change, but I can imagine that doing so might, for example, break the Python garbage collector. (To be clear: the motivation is just context as to how the question arose. I'm not looking for help implementing the structures described in the Motivation, I'm looking for an answer to the Question, above.) | The supported way It is officially possible to change an object's type in Python, as long as the memory layouts are compatible... but this is mostly limited to types not implemented in C. With some restrictions, it is possible to do # Python attribute assignment, not C struct member assignment obj.__class__ = some_new_class to change an object's class, with one of the restrictions being that both the old and new classes must be "heap types", which all classes implemented in Python are and most classes implemented in C are not. (types.ModuleType and subclasses of that type are also specifically permitted, despite types.ModuleType not being a heap type. See the source for exact restrictions.) If you want to create a heap type from C, you can, but the interface is pretty different from the normal way of defining Python types from C. Plus, for __class__ assignment to work, you have to not set the Py_TPFLAGS_IMMUTABLETYPE flag, and that means that people will be able to monkey-patch your classes in ways you might not like (or maybe you see that as an upside). If you want to go that route, I suggest looking at the CPython 3.10 _functools module source code for an example. (They set the Py_TPFLAGS_IMMUTABLETYPE flag, which you'll have to make sure not to do.) The unsupported way There was an attempt at one point to allow __class__ assignment for non-heap types, as long as the memory layouts worked. 
It got abandoned because it caused problems with some built-in immutable types, where the interpreter likes to reuse instances. For example, allowing (1).__class__ = SomethingElse would have caused a lot of problems. You can read more in the big comment in the source code for the __class__ setter. (The comment is slightly out of date, particularly regarding the Py_TPFLAGS_IMMUTABLETYPE flag, which was added after the comment was written.) As far as I know, this was the only problem, and I don't think any more problems have been added since then. The interpreter isn't going to aggressively reuse instances of your classes, so as long as you're not doing anything like that, and the memory layouts are compatible, I think changing the type of your objects should work for now, even for non-heap-types. However, it is not officially supported, so even if I'm right about this working for now, there's no guarantee it'll keep working. Py_SET_TYPE only sets an object's type pointer. It doesn't do any refcount fixing that might be needed. It's a very low-level operation. If neither the old class nor the new class are heap types, no extra refcount fixing is needed, but if the old class is a heap type, you will have to decref the old class, and if the new class is a heap type, you will have to incref the new class. If you need to decref the old class, make sure to do it after changing the object's class and possibly incref'ing the new class. | 7 | 5 |
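A tiny pure-Python sketch (not from the original post) of the supported route described above — `__class__` assignment between two heap types with identical layouts; the class names are made up and only stand in for the PHAMT/THAMT pair from the question:

```python
class PHAMTLike:                 # hypothetical stand-in for the persistent type
    def describe(self):
        return "persistent"

class THAMTLike(PHAMTLike):      # same data layout, different methods
    def describe(self):
        return "transient"

obj = THAMTLike()
print(obj.describe())            # transient

# Both classes are Python-level heap types with compatible layouts,
# so reassigning __class__ is allowed and O(1).
obj.__class__ = PHAMTLike
print(obj.describe())            # persistent
```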
71,150,313 | 2022-2-16 | https://stackoverflow.com/questions/71150313/python-docx-adding-bold-and-non-bold-strings-to-same-cell-in-table | I'm using python-docx to create a document with a table I want to populate from textual data. My text looks like this: 01:02:10.3 a: Lorem ipsum dolor sit amet, b: consectetur adipiscing elit. a: Mauris a turpis erat. 01:02:20.4 a: Vivamus dignissim aliquam b: Nam ultricies (etc.) I need to organize it in a table like this (using ASCII for visualization): +---+--------------------+---------------------------------+ | | A | B | +---+--------------------+---------------------------------+ | 1 | 01:02:10.3 | a: Lorem ipsum dolor sit amet, | | 2 | | b: consectetur adipiscing elit. | | 3 | | a: Mauris a turpis erat. | | 4 | ------------------ | ------------------------------- | | 5 | 01:02:20.4 | a: Vivamus dignissim aliqua | | 6 | | b: Nam ultricies | +---+--------------------+---------------------------------+ however, I need to make it so everything after "a: " is bold, and everything after "b: " isn't, while they both occupy the same cell. It's pretty easy to iterate and organize this the way I want, but I'm really unsure about how to make only some of the lines bold: IS_BOLD = { 'a': True 'b': False } row_cells = table.add_row().cells for line in lines: if is_timestamp(line): # function that uses regex to discern between columns if row_cells[1]: row_cells = table.add_row().cells row_cells[0].text = line else row_cells[1].text += line if IS_BOLD[ line.split(":")[0] ]: # make only this line within the cell bold, somehow. (this is sort of pseudo-code, I'm doing some more textual processing but that's kinda irrelevant here). I found one probably relevant question where someone uses something called run but I'm finding it hard to understand how to apply it to my case. Any help? Thanks. | You need to add run in the cell's paragraph. This way you can control the specific text you wish to bold Full example: from docx import Document from docx.shared import Inches import os import re def is_timestamp(line): # it's flaky, I saw you have your own method and probably you did a better job parsing this. return re.match(r'^\d{2}:\d{2}:\d{2}', line) is not None def parse_raw_script(raw_script): current_timestamp = '' current_content = '' for line in raw_script.splitlines(): line = line.strip() if is_timestamp(line): if current_timestamp: yield { 'timestamp': current_timestamp, 'content': current_content } current_timestamp = line current_content = '' continue if current_content: current_content += '\n' current_content += line if current_timestamp: yield { 'timestamp': current_timestamp, 'content': current_content } def should_bold(line): # i leave it to you to replace with your logic return line.startswith('a:') def load_raw_script(): # I placed here the example from your question. read from file instead I presume return '''01:02:10.3 a: Lorem ipsum dolor sit amet, b: consectetur adipiscing elit. a: Mauris a turpis erat. 
01:02:20.4 a: Vivamus dignissim aliquam b: Nam ultricies''' def convert_raw_script_to_docx(raw_script, output_file_path): document = Document() table = document.add_table(rows=1, cols=3, style="Table Grid") # add header row header_row = table.rows[0] header_row.cells[0].text = '' header_row.cells[1].text = 'A' header_row.cells[2].text = 'B' # parse the raw script into something iterable script_rows = parse_raw_script(raw_script) # create a row for each timestamp row for script_row in script_rows: timestamp = script_row['timestamp'] content = script_row['content'] row = table.add_row() timestamp_cell = row.cells[1] timestamp_cell.text = timestamp content_cell = row.cells[2] content_paragraph = content_cell.paragraphs[0] # using the cell's default paragraph here instead of creating one for line in content.splitlines(): run = content_paragraph.add_run(line) if should_bold(line): run.bold = True run.add_break() # resize table columns (optional) for row in table.rows: row.cells[0].width = Inches(0.2) row.cells[1].width = Inches(1.9) row.cells[2].width = Inches(3.9) document.save(output_file_path) def main(): script_dir = os.path.dirname(__file__) dist_dir = os.path.join(script_dir, 'dist') if not os.path.isdir(dist_dir): os.makedirs(dist_dir) output_file_path = os.path.join(dist_dir, 'so-template.docx') raw_script = load_raw_script() convert_raw_script_to_docx(raw_script, output_file_path) if __name__ == '__main__': main() Result (file should be in ./dist/so-template.docx): BTW - if you prefer sticking with your own example, this is what needs to be changed: IS_BOLD = { 'a': True, 'b': False } row_cells = table.add_row().cells for line in lines: if is_timestamp(line): if row_cells[1]: row_cells = table.add_row().cells row_cells[0].text = line else: run = row_cells[1].paragraphs[0].add_run(line) if IS_BOLD[line.split(":")[0]]: run.bold = True run.add_break() | 7 | 5 |
71,145,982 | 2022-2-16 | https://stackoverflow.com/questions/71145982/django-form-doesnt-display | I'm trying to develop a simple Django app of a contact form and a thanks page. I'm not using Django 'admin' at all; no database, either. Django 3.2.12. I'm working on localhost using python manage.py runserver I can't get the actual form to display at http://127.0.0.1:8000/contact/contact; all I see is the submit button from /contact/contactform/templates/contact.html: Static files load OK: http://127.0.0.1:8000/static/css/bootstrap.css The thanks.html page loads OK: http://127.0.0.1:8000/contact/thanks This is the directory structure: /contact/contact/settings.py import os from pathlib import Path from dotenv import load_dotenv load_dotenv() DEBUG=True BASE_DIR = Path(__file__).resolve().parent.parent ALLOWED_HOSTS = ['127.0.0.1'] + os.getenv('REMOTE_HOST').split(',') SECRET_KEY = os.getenv('SECRET_KEY') EMAIL_USE_TLS = os.getenv('EMAIL_USE_TLS') EMAIL_HOST = os.getenv('EMAIL_HOST') EMAIL_PORT = os.getenv('EMAIL_PORT') EMAIL_HOST_USER = os.getenv('EMAIL_HOST_USER') EMAIL_HOST_PASSWORD = os.getenv('EMAIL_HOST_PASSWORD') INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'contactform.apps.ContactformConfig', ] MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] ROOT_URLCONF = 'contact.urls' TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] WSGI_APPLICATION = 'contact.wsgi.application' AUTH_PASSWORD_VALIDATORS = [ { 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', }, { 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', }, { 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', }, { 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', }, ] LANGUAGE_CODE = 'en-us' TIME_ZONE = 'UTC' USE_I18N = True USE_L10N = True USE_TZ = True STATIC_URL = '/static/' STATIC_ROOT = BASE_DIR / 'static/' DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField' /contact/contact/urls.py from django.contrib import admin from django.urls import path urlpatterns = [ path('admin/', admin.site.urls), ] from django.urls import include urlpatterns += [ path('contact/', include('contactform.urls')), ] from django.conf import settings from django.conf.urls.static import static urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT) /contact/contactform/urls.py from django.urls import path from . 
import views app_name = 'contactform' urlpatterns = [ path('thanks/', views.thanks, name='thanks'), path('contact/', views.contact, name='contact'), ] /contact/contactform/views.py import smtplib from email.mime.multipart import MIMEMultipart from email.mime.text import MIMEText from django.http import HttpResponseRedirect from django.shortcuts import render, get_object_or_404 from contactform.forms import ContactForm from contact.settings import EMAIL_HOST_USER, EMAIL_PORT, EMAIL_HOST_PASSWORD, EMAIL_HOST def thanks(request): return render(request, 'thanks.html', {}) def contact(request): if request.method == 'POST': form = ContactForm(request.POST) if form.is_valid(): form_data = form.cleaned_data msg = MIMEMultipart() msg['From'] = EMAIL_HOST_USER msg['To'] = EMAIL_HOST_USER msg['Subject'] = f'Personal site: {form_data["subject"]}' message = f'Name: {form_data["name"]}\n' \ f'Email address: {form_data["email_address"]}\n\n' \ f'{form_data["message"]}' msg.attach(MIMEText(message)) with smtplib.SMTP(EMAIL_HOST, EMAIL_PORT) as server: server.ehlo() server.starttls() server.login(EMAIL_HOST_USER, EMAIL_HOST_PASSWORD) server.sendmail(EMAIL_HOST_USER, EMAIL_HOST_USER, msg.as_string()) return HttpResponseRedirect('/thanks') else: form = ContactForm() return render(request, 'contact.html') /contact/contactform/models.py from django.urls import reverse /contact/contactform/apps.py from django.apps import AppConfig class ContactformConfig(AppConfig): default_auto_field = 'django.db.models.BigAutoField' name = 'contactform' /contact/contactform/forms.py from django import forms class ContactForm(forms.Form): name = forms.CharField(required=True, widget=forms.TextInput( attrs={'class': 'form-control', 'maxlength': '100'} )) email_address = forms.EmailField(required=True, widget=forms.EmailInput( attrs={'class': 'form-control', 'maxlength': '100'} )) subject = forms.CharField(required=True, widget=forms.TextInput( attrs={'class': 'form-control', 'maxlength': '100'} )) message = forms.CharField(required=True, widget=forms.Textarea( attrs={'class': 'form-control', 'maxlength': '1000', 'rows': 8} )) /contact/contactform/templates/contact.html <h2>Form</h2> <form action="/contact/" method="post"> {% csrf_token %} {{ form.as_p }} <button type="submit">Send</button> </form> Update 2/20/22 This views.py now works and shows the contact form; the remaining issuse is when the form is completed, the redirect to the thanks page throws a 404. 
import smtplib from email.mime.multipart import MIMEMultipart from email.mime.text import MIMEText from django.shortcuts import redirect from django.shortcuts import render, get_object_or_404 from contactform.forms import ContactForm from contact.settings import EMAIL_HOST_USER, EMAIL_PORT, EMAIL_HOST_PASSWORD, EMAIL_HOST def thanks(request): return render(request, 'thanks.html', {}) def contact(request): if request.method == 'POST': form = ContactForm(request.POST) if form.is_valid(): form_data = form.cleaned_data msg = MIMEMultipart() msg['From'] = EMAIL_HOST_USER msg['To'] = EMAIL_HOST_USER msg['Subject'] = f'Site Email' message = f'Name: {form_data["name"]}\n' \ f'Email address: {form_data["email_address"]}\n\n' \ f'{form_data["message"]}' msg.attach(MIMEText(message)) with smtplib.SMTP(EMAIL_HOST, EMAIL_PORT) as server: server.ehlo() server.starttls() server.login(EMAIL_HOST_USER, EMAIL_HOST_PASSWORD) server.sendmail(EMAIL_HOST_USER, EMAIL_HOST_USER, msg.as_string()) return redirect('contactform:thanks') else: form = ContactForm() return render(request, 'contact.html', { "form": form }) Error screen: | Your form action needs to point to <form action="/contact/contact/".... or better <form action="{% url 'contactform:contact' %}" ...) | 5 | 1 |
71,229,685 | 2022-2-22 | https://stackoverflow.com/questions/71229685/packages-installed-with-poetry-fail-to-import | Having a simple yet confusing issue: a package I added with poetry fails to import when I try to use it in a module. Steps taken: poetry add sendgrid In a module, import sendgrid Error: Import "sendgrid" could not be resolved PylancereportMissingImports Troubleshooting I've tried: I checked my project's poetry venv dir, and sendgrid is there: 'C:\\Users\\xyz123\\AppData\\Local\\pypoetry\\Cache\\virtualenvs\\nameofproject-py3.10\\lib\\site-packages' Also checked sys.path(); the path to that site-packages dir is listed Running poetry install gives me the response No dependencies to install or update both the pyproject.toml and the poetry.lock files list sendgrid What is going on? | Well, it turns out it's a matter of VSCode not playing nice and failing to recognize Poetry's virtual environment. I had to run the Python: Select Interpreter command and change the venv directory to the one my project is using, then it was able to recognize the installed packages. See here for more details on how to do that. | 8 | 19 |
71,228,643 | 2022-2-22 | https://stackoverflow.com/questions/71228643/mwaa-airflow-2-2-2-dag-object-has-no-attribute-update-relative | So I was upgrading DAGs from airflow version 1.12.15 to 2.2.2 and DOWNGRADING python from 3.8 to 3.7 (since MWAA doesn't support python 3.8). The DAG is working fine on the previous setup but shows this error on the MWAA setup: Broken DAG: [/usr/local/airflow/dags/google_analytics_import.py] Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/airflow/models/baseoperator.py", line 1474, in set_downstream self._set_relatives(task_or_task_list, upstream=False, edge_modifier=edge_modifier) File "/usr/local/lib/python3.7/site-packages/airflow/models/baseoperator.py", line 1412, in _set_relatives task_object.update_relative(self, not upstream) AttributeError: 'DAG' object has no attribute 'update_relative' This is the built-in function that seems to be failing: def set_downstream( self, task_or_task_list: Union[TaskMixin, Sequence[TaskMixin]], edge_modifier: Optional[EdgeModifier] = None, ) -> None: """ Set a task or a task list to be directly downstream from the current task. Required by TaskMixin. """ self._set_relatives(task_or_task_list, upstream=False, edge_modifier=edge_modifier) There is the code we are trying to run in the DAG: for report in reports: dag << PythonOperator( task_id=f"task_{report}", python_callable=process, op_kwargs={ "conn": "snowflake_production", "table": report, }, provide_context=True, ) I am thinking this transition from Python 3.8 to 3.7 is causing this issue but I am not sure. Did anyone run across a similar issue ? | For Airflow>=2.0.0 Assigning task to a DAG using bitwise shift (bit-shift) operators are no longer supported. Trying to do: dag = DAG("my_dag") dummy = DummyOperator(task_id="dummy") dag >> dummy Will not work. Dependencies should be set only between operators. You should use context manager: with DAG("my_dag") as dag: dummy = DummyOperator(task_id="dummy") It already handles the relations of operator to DAG object. If you prefer not to, then use the dag parameter in the operator constructor as: DummyOperator(task_id="dummy", dag=dag) | 8 | 4 |
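A sketch (not from the original post) of how the report loop from the question could look under Airflow 2.x, combining the answer's two options — a context manager and no bit-shifting against the DAG object; `reports` and `process` are placeholders standing in for the question's objects:

```python
import pendulum
from airflow import DAG
from airflow.operators.python import PythonOperator

def process(conn, table, **context):   # placeholder callable
    print(conn, table)

reports = ["report_a", "report_b"]     # placeholder report names

with DAG(
    dag_id="google_analytics_import",
    start_date=pendulum.datetime(2022, 1, 1, tz="UTC"),
    schedule_interval=None,
) as dag:
    for report in reports:
        PythonOperator(
            task_id=f"task_{report}",
            python_callable=process,
            op_kwargs={"conn": "snowflake_production", "table": report},
            # note: provide_context is gone in Airflow 2; matching kwargs are passed automatically
        )
```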
71,166,789 | 2022-2-17 | https://stackoverflow.com/questions/71166789/huggingface-valueerror-expected-sequence-of-length-165-at-dim-1-got-128 | I am trying to fine-tune the BERT language model on my own data. I've gone through their docs, but their tasks seem to be not quite what I need, since my end goal is embedding text. Here's my code: from datasets import load_dataset from transformers import BertTokenizerFast, AutoModel, TrainingArguments, Trainer import glob import os base_path = '../data/' model_name = 'bert-base-uncased' max_length = 512 checkpoints_dir = 'checkpoints' tokenizer = BertTokenizerFast.from_pretrained(model_name, do_lower_case=True) def tokenize_function(examples): return tokenizer(examples['text'], padding=True, truncation=True, max_length=max_length) dataset = load_dataset('text', data_files={ 'train': f'{base_path}train.txt', 'test': f'{base_path}test.txt', 'validation': f'{base_path}valid.txt' } ) print('Tokenizing data. This may take a while...') tokenized_dataset = dataset.map(tokenize_function, batched=True) train_dataset = tokenized_dataset['train'] eval_dataset = tokenized_dataset['test'] model = AutoModel.from_pretrained(model_name) training_args = TrainingArguments(checkpoints_dir) print('Training the model...') trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset) trainer.train() I get the following error: File "train_lm_hf.py", line 44, in <module> trainer.train() ... File "/opt/conda/lib/python3.7/site-packages/transformers/data/data_collator.py", line 130, in torch_default_data_collator batch[k] = torch.tensor([f[k] for f in features]) ValueError: expected sequence of length 165 at dim 1 (got 128) What am I doing wrong? | I fixed this solution by changing the tokenize function to: def tokenize_function(examples): return tokenizer(examples['text'], padding='max_length', truncation=True, max_length=max_length) (note the padding argument). Also, I used a data collator like so: data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=0.15 ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset=eval_dataset ) | 13 | 19 |
71,217,530 | 2022-2-22 | https://stackoverflow.com/questions/71217530/i-want-to-get-the-address-from-mnemonic-with-the-proper-derivation-path | I am very new to blockchain programming and programming in general. I want to generate my SOL address using the mnemonic seed phrase with the derivation path "m/44'/501'/0'/0". I can't find a proper BIP44 module for python where you can specify the derivation path. | After a long search through the internet, I have finally found a way of solving my problem that I want to share with you. from bip_utils import * MNEMONIC = "...12 words phrase..." seed_bytes = Bip39SeedGenerator(MNEMONIC).Generate("") bip44_mst_ctx = Bip44.FromSeed(seed_bytes, Bip44Coins.SOLANA) bip44_acc_ctx = bip44_mst_ctx.Purpose().Coin().Account(0) bip44_chg_ctx = bip44_acc_ctx.Change(Bip44Changes.CHAIN_EXT) print(bip44_chg_ctx.PublicKey().ToAddress()) This code outputs the first address of your mnemonic. This is only for Sollet and Phantom wallet! If you are using Solflare you can cut the line bip44_chg_ctx = bip44_acc_ctx.Change(Bip44Changes.CHAIN_EXT) out! | 5 | 6 |
71,225,872 | 2022-2-22 | https://stackoverflow.com/questions/71225872/why-does-numpy-viewbool-makes-numpy-logical-and-significantly-faster | When passing a numpy.ndarray of uint8 to numpy.logical_and, it runs significantly faster if I apply numpy.view(bool) to its inputs. a = np.random.randint(0, 255, 1000 * 1000 * 100, dtype=np.uint8) b = np.random.randint(0, 255, 1000 * 1000 * 100, dtype=np.uint8) %timeit np.logical_and(a, b) 126 ms ± 1.17 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) %timeit np.logical_and(a.view(bool), b.view(bool)) 20.9 ms ± 110 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) Can someone explain why this is happening? Furthermore, why numpy.logical_and doesn't automatically apply view(bool) to an array of uint8? (Is there any situation where we shouldn't use view(bool)?) EDIT: It seems that this is an issue with Windows environment. I just tried the same thing in the official python docker container (which is debian) and found no difference between them. My environment: OS: Windows 10 Pro 21H2 CPU: AMD Ryzen 9 5900X Python: Python 3.10.2 (tags/v3.10.2:a58ebcc, Jan 17 2022, 14:12:15) [MSC v.1929 64 bit (AMD64)] on win32 numpy: 1.22.2 | This is a performance issue of the current Numpy implementation. I can also reproduce this problem on Windows (using an Intel Skylake Xeon processor with Numpy 1.20.3). np.logical_and(a, b) executes a very-inefficient scalar assembly code based on slow conditional jumps while np.logical_and(a.view(bool), b.view(bool)) executes relatively-fast SIMD instructions. Currently, Numpy uses a specific implementation for bool-types. Regarding the compiler used, the general-purpose implementation can be significantly slower if the compiler used to build Numpy failed to automatically vectorize the code which is apparently the case on Windows (and explain why this is not the case on other platforms since the compiler is likely not exactly the same). The Numpy code can be improved for non-bool types. Note that the vectorization of Numpy is an ongoing work and we plan optimize this soon. Deeper analysis Here is the assembly code executed by np.logical_and(a, b): Block 24: cmp byte ptr [r8], 0x0 ; Read a[i] jz <Block 27> ; Jump to block 27 if a[i]!=0 Block 25: cmp byte ptr [r9], 0x0 ; Read b[i] jz <Block 27> ; Jump to block 27 if b[i]!=0 Block 26: mov al, 0x1 ; al = 1 jmp <Block 28> ; Skip the next instruction Block 27: xor al, al ; al = 0 Block 28: mov byte ptr [rdx], al ; result[i] = al inc r8 ; i += 1 inc rdx inc r9 sub rcx, 0x1 jnz <Block 24> ; Loop again while i<a.shape[0] As you can see, the loop use several data-dependent conditional jumps to write per item of a and b read. This is very inefficient here since the branch taken cannot be predicted by the processor with random values. As a result the processor stall for few cycles (typically about 10 cycles on modern x86 processors). Here is the assembly code executed by np.logical_and(a.view(bool), b.view(bool)): Block 15: movdqu xmm1, xmmword ptr [r10] ; xmm1 = a[i:i+16] movdqu xmm0, xmmword ptr [rbx+r10*1] ; xmm0 = b[i:i+16] lea r10, ptr [r10+0x10] ; i += 16 pcmpeqb xmm1, xmm2 ; \ pandn xmm1, xmm0 ; | Complex sequence to just do: pcmpeqb xmm1, xmm2 ; | xmm1 &= xmm0 pandn xmm1, xmm3 ; / movdqu xmmword ptr [r14+r10*1-0x10], xmm1 ; result[i:i+16] = xmm1 sub rcx, 0x1 jnz <Block 15> ; Loop again while i!=a.shape[0]//16 This code use the SIMD instruction set called SSE which is able to work on 128-bit wide registers. There is no conditional jumps. 
This code is far more efficient as it operates on 16 items at once per iteration, and each iteration should be much faster. Note that this last code is not optimal either, as most modern x86 processors (like your AMD one) support the 256-bit AVX-2 instruction set (twice as fast). Moreover, the compiler generates an inefficient sequence of SIMD instructions to perform the logical-and that could be optimized: it seems to assume the booleans can hold values other than 0 or 1. That being said, the input arrays are too big to fit in your CPU cache, so the code is bound by the throughput of your RAM, unlike the first version. This is why the SIMD-friendly code is not drastically faster. The difference between the two versions would certainly be much bigger with arrays of less than 1 MiB on your processor (as on almost all other modern processors). | 7 | 5 |
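A small sketch (not from the original post, with a made-up array size) of why the `.view(bool)` trick is safe here: `uint8` and `bool` share a 1-byte itemsize, so the view reinterprets the existing buffer without copying, and the results should match the plain `uint8` call:

```python
import numpy as np

a = np.random.randint(0, 255, 1_000_000, dtype=np.uint8)
b = np.random.randint(0, 255, 1_000_000, dtype=np.uint8)

av, bv = a.view(bool), b.view(bool)
assert av.dtype == np.bool_ and av.base is a      # no copy, just a reinterpretation

slow = np.logical_and(a, b)        # scalar fallback path on some builds
fast = np.logical_and(av, bv)      # SIMD bool path
assert np.array_equal(slow, fast)  # same answer, only the speed differs
```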
71,225,952 | 2022-2-22 | https://stackoverflow.com/questions/71225952/try-each-function-of-a-class-with-functools-wraps-decorator | I'm trying to define a decorator in order to execute a class method, try it first and, if an error is detected, raise it mentioning the method in which failed, so as to the user could see in which method is the error. Here I show a MRE (Minimal, Reproducible Example) of my code. from functools import wraps def trier(func): """Decorator for trying A-class methods""" @wraps(func) def inner_func(self, name, *args): try: func(self, *args) except: print(f"An error apeared while {name}") return inner_func class A: def __init__(self): self._animals = 2 self._humans = 5 @trier('getting animals') def animals(self, num): return self._animals + num @trier('getting humans') def humans(self): return self._humans A().animals Many errors are raising, like: TypeError: inner_func() missing 1 required positional argument: 'name' or misunderstanding self class with self function. | As an alternative to Stefan's answer, the following simply uses @trier without any parameters to decorate functions, and then when printing out the error message we can get the name with func.__name__. from functools import wraps def trier(func): """Decorator for trying A-class methods""" @wraps(func) def inner_func(self, *args, **kwargs): try: return func(self, *args, **kwargs) except: print(f"An error apeared in {func.__name__}") return inner_func class A: def __init__(self): self._animals = 2 self._humans = 5 @trier def animals(self, num): return self._animals + num @trier def humans(self): return self._humans print(A().animals(1)) I also fixed a couple of bugs in the code: In trier's try and except the result of calling func was never returned, and you need to include **kwargs in addition to *args so you can use named parameters. I.e. A().animals(num=1) only works when you handle kwargs. | 6 | 4 |
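If the per-method label from the question's original attempt is still wanted, a decorator factory keeps that idea while fixing the signature problem — a minimal sketch (not from the original post):

```python
from functools import wraps

def trier(label):
    """Decorator factory: trier('getting animals') returns the real decorator."""
    def decorator(func):
        @wraps(func)
        def inner_func(self, *args, **kwargs):
            try:
                return func(self, *args, **kwargs)
            except Exception:
                print(f"An error appeared while {label}")
        return inner_func
    return decorator

class A:
    def __init__(self):
        self._animals = 2

    @trier("getting animals")
    def animals(self, num):
        return self._animals + num

print(A().animals(1))  # 3
```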
71,189,819 | 2022-2-19 | https://stackoverflow.com/questions/71189819/importerror-cannot-import-name-json-from-itsdangerous | I am trying to get a Flask and Docker application to work but when I try and run it using my docker-compose up command in my Visual Studio terminal, it gives me an ImportError called ImportError: cannot import name 'json' from itsdangerous. I have tried to look for possible solutions to this problem but as of right now there are not many on here or anywhere else. The only two solutions I could find are to change the current installation of MarkupSafe and itsdangerous to a higher version: https://serverfault.com/questions/1094062/from-itsdangerous-import-json-as-json-importerror-cannot-import-name-json-fr and another one on GitHub that tells me to essentially change the MarkUpSafe and itsdangerous installation again https://github.com/aws/aws-sam-cli/issues/3661, I have also tried to make a virtual environment named veganetworkscriptenv to install the packages but that has also failed as well. I am currently using Flask 2.0.0 and Docker 5.0.0 and the error occurs on line eight in vegamain.py. Here is the full ImportError that I get when I try and run the program: veganetworkscript-backend-1 | Traceback (most recent call last): veganetworkscript-backend-1 | File "/app/vegamain.py", line 8, in <module> veganetworkscript-backend-1 | from flask import Flask veganetworkscript-backend-1 | File "/usr/local/lib/python3.9/site-packages/flask/__init__.py", line 19, in <module> veganetworkscript-backend-1 | from . import json veganetworkscript-backend-1 | File "/usr/local/lib/python3.9/site-packages/flask/json/__init__.py", line 15, in <module> veganetworkscript-backend-1 | from itsdangerous import json as _json veganetworkscript-backend-1 | ImportError: cannot import name 'json' from 'itsdangerous' (/usr/local/lib/python3.9/site-packages/itsdangerous/__init__.py) veganetworkscript-backend-1 exited with code 1 Here are my requirements.txt, vegamain.py, Dockerfile, and docker-compose.yml files: requirements.txt: Flask==2.0.0 Flask-SQLAlchemy==2.4.4 SQLAlchemy==1.3.20 Flask-Migrate==2.5.3 Flask-Script==2.0.6 Flask-Cors==3.0.9 requests==2.25.0 mysqlclient==2.0.1 pika==1.1.0 wolframalpha==4.3.0 vegamain.py: # Veganetwork (C) TetraSystemSolutions 2022 # all rights are reserved. # # Author: Trevor R. Blanchard Feb-19-2022-Jul-30-2022 # # get our imports in order first from flask import Flask # <-- error occurs here!!! # start the application through flask. app = Flask(__name__) # if set to true will return only a "Hello World" string. Debug = True # start a route to the index part of the app in flask. @app.route('/') def index(): if (Debug == True): return 'Hello World!' else: pass # start the flask app here ---> if __name__ == '__main__': app.run(debug=True, host='0.0.0.0') Dockerfile: FROM python:3.9 ENV PYTHONUNBUFFERED 1 WORKDIR /app COPY requirements.txt /app/requirements.txt RUN pip install -r requirements.txt COPY . /app docker-compose.yml: version: '3.8' services: backend: build: context: . dockerfile: Dockerfile command: 'python vegamain.py' ports: - 8004:5000 volumes: - .:/app depends_on: - db # queue: # build: # context: . # dockerfile: Dockerfile # command: 'python -u consumer.py' # depends_on: # - db db: image: mysql:5.7.22 restart: always environment: MYSQL_DATABASE: admin MYSQL_USER: root MYSQL_PASSWORD: root MYSQL_ROOT_PASSWORD: root volumes: - .dbdata:/var/lib/mysql ports: - 33069:3306 How exactly can I fix this code? thank you! 
| I just put itsdangerous==2.0.1 in my requirements.txt. Then I updated my virtualenv using pip install -r requirements.txt and ran docker-compose up --build. Now everything works fine for me. I did not upgrade the Flask version. | 71 | 34 |
71,211,053 | 2022-2-21 | https://stackoverflow.com/questions/71211053/what-tensorflows-flat-map-window-batch-does-to-a-dataset-array | I'm following one of the online courses about time series predictions using Tensorflow. The function used to convert Numpy array (TS) into a Tensorflow dataset used is LSTM-based model is already given (with my comment lines): def windowed_dataset(series, window_size, batch_size, shuffle_buffer): # creating a tensor from an array dataset = tf.data.Dataset.from_tensor_slices(series) # cutting the tensor into fixed-size windows dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True) # joining windows into a batch? dataset = dataset.flat_map(lambda window: window.batch(window_size + 1)) # separating row into features/label dataset = dataset.shuffle(shuffle_buffer).map(lambda window: (window[:-1], window[-1])) dataset = dataset.batch(batch_size).prefetch(1) return dataset This code work fine but I want to understand it better to modify/adapt it for my needs. If I remove dataset.flat_map(lambda window: window.batch(window_size + 1)) operation, I receive the TypeError: '_VariantDataset' object is not subscriptable pointing to the line: lambda window: (window[:-1], window[-1])) I managed to rewrite part of this code (skipping shuffling) to Numpy-based one: def windowed_dataset_np(series, window_size): values = sliding_window_view(series, window_size) X = values[:, :-1] X = tf.convert_to_tensor(np.expand_dims(X, axis=-1)) y = values[:,-1] return X, y Syntax of fitting of the model looks a bit differently but it works fine. My two questions are: What does dataset.flat_map(lambda window: window.batch(window_size + 1)) achieves? Is the second code really equivalent to the three first operations in the original function? | I would break down the operations into smaller parts to really understand what is happening, since applying window to a dataset actually creates a dataset of windowed datasets containing tensor sequences: import tensorflow as tf window_size = 2 dataset = tf.data.Dataset.range(7) dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True) for i, window in enumerate(dataset): print('{}. windowed dataset'.format(i + 1)) for w in window: print(w) 1. windowed dataset tf.Tensor(0, shape=(), dtype=int64) tf.Tensor(1, shape=(), dtype=int64) tf.Tensor(2, shape=(), dtype=int64) 2. windowed dataset tf.Tensor(1, shape=(), dtype=int64) tf.Tensor(2, shape=(), dtype=int64) tf.Tensor(3, shape=(), dtype=int64) 3. windowed dataset tf.Tensor(2, shape=(), dtype=int64) tf.Tensor(3, shape=(), dtype=int64) tf.Tensor(4, shape=(), dtype=int64) 4. windowed dataset tf.Tensor(3, shape=(), dtype=int64) tf.Tensor(4, shape=(), dtype=int64) tf.Tensor(5, shape=(), dtype=int64) 5. windowed dataset tf.Tensor(4, shape=(), dtype=int64) tf.Tensor(5, shape=(), dtype=int64) tf.Tensor(6, shape=(), dtype=int64) Notice how the window is always shifted by one position due to the parameter shift=1. 
Now, the operation flat_map is used here to flatten the dataset of datasets into a dataset of elements; however, you still want to keep the windowed sequences you created so you divide the dataset according to the window parameters using dataset.batch: dataset = dataset.flat_map(lambda window: window.batch(window_size + 1)) for w in dataset: print(w) tf.Tensor([0 1 2], shape=(3,), dtype=int64) tf.Tensor([1 2 3], shape=(3,), dtype=int64) tf.Tensor([2 3 4], shape=(3,), dtype=int64) tf.Tensor([3 4 5], shape=(3,), dtype=int64) tf.Tensor([4 5 6], shape=(3,), dtype=int64) You could also first flatten the dataset of datasets and then apply batch if you want to create the windowed sequences: dataset = dataset.flat_map(lambda window: window).batch(window_size + 1) Or only flatten the dataset of datasets: dataset = dataset.flat_map(lambda window: window) for w in dataset: print(w) tf.Tensor(0, shape=(), dtype=int64) tf.Tensor(1, shape=(), dtype=int64) tf.Tensor(2, shape=(), dtype=int64) tf.Tensor(1, shape=(), dtype=int64) tf.Tensor(2, shape=(), dtype=int64) tf.Tensor(3, shape=(), dtype=int64) tf.Tensor(2, shape=(), dtype=int64) tf.Tensor(3, shape=(), dtype=int64) tf.Tensor(4, shape=(), dtype=int64) tf.Tensor(3, shape=(), dtype=int64) tf.Tensor(4, shape=(), dtype=int64) tf.Tensor(5, shape=(), dtype=int64) tf.Tensor(4, shape=(), dtype=int64) tf.Tensor(5, shape=(), dtype=int64) tf.Tensor(6, shape=(), dtype=int64) But that is probably not what you want. Regarding this line in your question: dataset = dataset.shuffle(shuffle_buffer).map(lambda window: (window[:-1], window[-1])), it is pretty trivial. It simply splits the data into sequences and labels, using the last element of each sequence as the label: dataset = dataset.shuffle(2).map(lambda window: (window[:-1], window[-1])) for w in dataset: print(w) (<tf.Tensor: shape=(2,), dtype=int64, numpy=array([1, 2])>, <tf.Tensor: shape=(), dtype=int64, numpy=3>) (<tf.Tensor: shape=(2,), dtype=int64, numpy=array([2, 3])>, <tf.Tensor: shape=(), dtype=int64, numpy=4>) (<tf.Tensor: shape=(2,), dtype=int64, numpy=array([3, 4])>, <tf.Tensor: shape=(), dtype=int64, numpy=5>) (<tf.Tensor: shape=(2,), dtype=int64, numpy=array([4, 5])>, <tf.Tensor: shape=(), dtype=int64, numpy=6>) (<tf.Tensor: shape=(2,), dtype=int64, numpy=array([0, 1])>, <tf.Tensor: shape=(), dtype=int64, numpy=2>) | 6 | 9 |
71,209,619 | 2022-2-21 | https://stackoverflow.com/questions/71209619/pandas-groupby-and-apply-aggregate-function-across-rows | I'm having difficulties applying customs functions to a groupby operation in pandas. Let's suppose that I have the following DataFrame to work with: import pandas as pd df = pd.DataFrame( { "id": [1, 1, 2, 2], "flag": ["A", "A", "B", "B"], "value1": [520, 250, 180, 360], "value2": [11, 5, 7, 2], } ) print(df) id flag value1 value2 0 1 A 520 11 1 1 A 250 5 2 2 B 180 7 3 2 B 360 2 I need to apply 4 aggregate functions to the above DataFrame grouped by id and flag. Specifically, for each id and flag: Calculate the mean of value1; Calculate the sum of value2; Calculate the mean of (value1 * value2) / 12; Calculate the sum of (value1 / value2). I don't have any issues with the first two. This is what I did to calculate them: df.groupby(["id", "flag"]).agg({"value1": ["mean"], "value2": ["sum"]}) value1 value2 mean sum id flag 1 A 385.0 16 2 B 270.0 9 My problems are related to the last two aggregates. I search here for similar problems and I think I need to create two custom functions and apply them to the groupby object. Unfortunately, all my attempts failed and I wasn't able to work this out. Also, if possible, I want all results to be in a single DataFrame like below (hopefully, I've calculated the numbers correctly): value1 value2 mean sum func1 func2 id flag 1 A 385.0 16 290.42 97.27 2 B 270.0 9 82.5 205.71 | groupby().agg. only takes in values of one columns. With custom functions involving several columns, I would do something like this: groupby = df.groupby(['id','flag']) out = pd.DataFrame({ 'value1': groupby['value1'].mean(), 'value2': groupby['value2'].sum(), 'value3': groupby.apply(lambda x: (x['value1'] * x['value2']).mean()/12), 'value4': groupby.apply(lambda x: (x['value1']/x['value2']).sum()) }) Output: value1 value2 value3 value4 id flag 1 A 385 16 290.416667 97.272727 2 B 270 9 82.500000 205.714286 | 5 | 7 |
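For completeness, the same four aggregates can also be computed in a single pass with one function that returns a Series per group — a sketch (not from the original post; the custom column names are made up):

```python
import pandas as pd

df = pd.DataFrame({
    "id": [1, 1, 2, 2],
    "flag": ["A", "A", "B", "B"],
    "value1": [520, 250, 180, 360],
    "value2": [11, 5, 7, 2],
})

# One apply per group returning a Series, so all four aggregates
# come back as columns of a single frame.
def aggregates(g):
    return pd.Series({
        "value1_mean": g["value1"].mean(),
        "value2_sum": g["value2"].sum(),
        "func1": (g["value1"] * g["value2"]).mean() / 12,
        "func2": (g["value1"] / g["value2"]).sum(),
    })

out = df.groupby(["id", "flag"]).apply(aggregates)
print(out)
```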
71,198,478 | 2022-2-20 | https://stackoverflow.com/questions/71198478/counting-all-combinations-of-values-in-multiple-columns | The following is an example of items rated by 1,2 or 3 stars. I am trying to count all combinations of item ratings (stars) per month. In the following example, item 10 was rated in month 1 and has two ratings equal 1, one rating equal 2 and one rating equal 3. inp = pd.DataFrame({'month':[1,1,1,1,1,2,2,2], 'item':[10,10,10,10,20,20,20,20], 'star':[1,2,1,3,3,2,2,3]} ) month item star 0 1 10 1 1 1 10 2 2 1 10 1 3 1 10 3 4 1 20 3 5 2 20 2 6 2 20 2 7 2 20 3 For the given above input frame output should be: month item star_1_cnt star_2_cnt star_3_cnt 0 1 10 2 1 1 1 1 20 0 0 1 2 2 20 0 2 1 I am trying to solve the problem starting with the following code, which result still needs to be converted to the desired format of the output frame and which gives the wrong answers: 1 20 3 (1, 1) 2 20 3 (1, 1) Anyway, there should be a better way to create the output table, then finalizing this one: months = [1,2] items = [10,20] stars = [1,2,3] d = {'month': [], 'item': [], 'star': [], 'star_cnts': [] } for month in months: for star in stars: for item in items: star_cnts=dict(inp[(inp['item']==item) & (inp['star']==star)].value_counts()).values() d['month'].append(month) d['item'].append(item) d['star'].append(star) d['star_cnts'].append(star_cnts) pd.DataFrame(d) month item star star_cnts 0 1 10 1 (2) 1 1 20 1 () 2 1 10 2 (1) 3 1 20 2 (2) 4 1 10 3 (1) 5 1 20 3 (1, 1) 6 2 10 1 (2) 7 2 20 1 () 8 2 10 2 (1) 9 2 20 2 (2) 10 2 10 3 (1) 11 2 20 3 (1, 1) | This seems like a nice problem for pd.get_dummies: new_df = ( pd.concat([df, pd.get_dummies(df['star'])], axis=1) .groupby(['month', 'item'], as_index=False) [df['star'].unique()] .sum() ) Output: >>> new_df month item 1 2 3 0 1 10 2 1 1 1 1 20 0 0 1 2 2 20 0 2 1 Renaming, too: u = df['star'].unique() new_df = ( pd.concat([df, pd.get_dummies(df['star'])], axis=1) .groupby(['month', 'item'], as_index=False) [u] .sum() .rename({k: f'star_{k}_cnt' for k in df['star'].unique()}, axis=1) ) Output: >>> new_df month item star_1_cnt star_2_cnt star_3_cnt 0 1 10 2 1 1 1 1 20 0 0 1 2 2 20 0 2 1 Obligatory one- (or two-) liners: # Renames the columns u = df['star'].unique() new_df = pd.concat([df, pd.get_dummies(df['star'])], axis=1).groupby(['month', 'item'], as_index=False)[u].sum().rename({k: f'star_{k}_cnt' for k in df['star'].unique()}, axis=1) | 9 | 1 |
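An alternative sketch (not from the original post) using pd.crosstab, which tabulates the (month, item) × star counts directly and fills missing combinations with 0:

```python
import pandas as pd

inp = pd.DataFrame({'month': [1, 1, 1, 1, 1, 2, 2, 2],
                    'item':  [10, 10, 10, 10, 20, 20, 20, 20],
                    'star':  [1, 2, 1, 3, 3, 2, 2, 3]})

# Count every (month, item, star) combination in one call.
out = pd.crosstab([inp['month'], inp['item']], inp['star'])
out.columns = [f'star_{c}_cnt' for c in out.columns]
out = out.reset_index()
print(out)
```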
71,184,699 | 2022-2-19 | https://stackoverflow.com/questions/71184699/filter-a-dictionary-of-lists | I have a dictionary of the form: {"level": [1, 2, 3], "conf": [-1, 1, 2], "text": ["here", "hel", "llo"]} I want to filter the lists to remove every item at index i where an index in the value "conf" is not >0. So for the above dict, the output should be this: {"level": [2, 3], "conf": [1, 2], "text": ["hel", "llo"]} As the first value of conf was not > 0. I have tried something like this: new_dict = {i: [a for a in j if a >= min_conf] for i, j in my_dict.items()} But that would work just for one key. | I solved it with this: from typing import Dict, List, Any, Set d = {"level":[1,2,3], "conf":[-1,1,2], "text":["-1", "hel", "llo"]} # First, we create a set that stores the indices which should be kept. # I chose a set instead of a list because it has a O(1) lookup time. # We only want to keep the items on indices where the value in d["conf"] is greater than 0 filtered_indexes = {i for i, value in enumerate(d.get('conf', [])) if value > 0} def filter_dictionary(d: Dict[str, List[Any]], filtered_indexes: Set[int]) -> Dict[str, List[Any]]: filtered_dictionary = d.copy() # We'll return a modified copy of the original dictionary for key, list_values in d.items(): # In the next line the actual filtering for each key/value pair takes place. # The original lists get overwritten with the filtered lists. filtered_dictionary[key] = [value for i, value in enumerate(list_values) if i in filtered_indexes] return filtered_dictionary print(filter_dictionary(d, filtered_indexes)) Output: {'level': [2, 3], 'conf': [1, 2], 'text': ['hel', 'llo']} | 21 | 4 |
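The same result can be written more compactly by first collecting the indices to keep and then rebuilding every list in a single dict comprehension — a sketch (not from the original post):

```python
d = {"level": [1, 2, 3], "conf": [-1, 1, 2], "text": ["here", "hel", "llo"]}

# Keep index i only when conf[i] > 0, then filter every list with those indices.
keep = [i for i, c in enumerate(d["conf"]) if c > 0]
filtered = {k: [v[i] for i in keep] for k, v in d.items()}
print(filtered)  # {'level': [2, 3], 'conf': [1, 2], 'text': ['hel', 'llo']}
```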
71,195,208 | 2022-2-20 | https://stackoverflow.com/questions/71195208/creating-a-unique-id-in-a-python-dataclass | I need a unique (unsigned int) id for my python data class. This is very similar to this so post, but without explicit ctors. import attr from attrs import field from itertools import count @attr.s(auto_attribs=True) class Person: #: each Person has a unique id _counter: count[int] = field(init=False, default=count()) _unique_id: int = field(init=False) @_unique_id.default def _initialize_unique_id(self) -> int: return next(self._counter) Is there any more-"pythonic" solution? | Use a default factory instead of just a default. This allows to define a call to get the next id on each instantiation. A simple means to get a callable that counts up is to use count().__next__, the equivalent of calling next(...) on a count instance.1 The common "no explicit ctor" libraries attr and dataclasses both support this: from itertools import count from dataclasses import dataclass, field @dataclass class C: identifier: int = field(default_factory=count().__next__) import attr @attr.s class C: identifier: int = attr.field(factory=count().__next__) To always use the automatically generated value and prevent passing one in as a parameter, use init=False. @dataclass class C: identifier: int = field(default_factory=count().__next__, init=False) 1 If one wants to avoid explicitly addressing magic methods, one can use a closure over a count. For example, factory=lambda counter=count(): next(counter). | 9 | 17 |
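A quick usage sketch (not from the original post) of the default_factory approach, showing that instances pick up sequential ids (counting from 0 here) and that init=False keeps the field out of the constructor:

```python
from dataclasses import dataclass, field
from itertools import count

@dataclass
class Person:
    unique_id: int = field(default_factory=count().__next__, init=False)

people = [Person() for _ in range(3)]
print([p.unique_id for p in people])  # [0, 1, 2]
# Person(unique_id=5) would raise TypeError, because init=False removes the parameter.
```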
71,193,740 | 2022-2-20 | https://stackoverflow.com/questions/71193740/typeerror-encoders-require-their-input-to-be-uniformly-strings-or-numbers-got | I already referred the posts here, here and here. Don't mark it as duplicate. I am working on a binary classification problem where my dataset has categorical and numerical columns. However, some of the categorical columns has a mix of numeric and string values. Nontheless, they only indicate the category name. For instance, I have a column called biz_category which has values like A,B,C,4,5 etc. I guess the below error is thrown due to values like 4 and 5. Therefore, I tried the belowm to convert them into category datatype. (but still it doesn't work) cols=X_train.select_dtypes(exclude='int').columns.to_list() X_train[cols]=X_train[cols].astype('category') And my data info looks like below <class 'pandas.core.frame.DataFrame'> Int64Index: 683 entries, 21 to 965 Data columns (total 9 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Feature_A 683 non-null category 1 Product Classification 683 non-null category 2 Industry 683 non-null category 3 DIVISION 683 non-null category 4 biz_category 683 non-null category 5 Country 683 non-null category 6 Product segment 683 non-null category 7 SUBREGION 683 non-null category 8 Quantity 1st year 683 non-null int64 dtypes: category(8), int64(1) So, after dtype conversion, when I try the below SMOTENC, I get an error print("Before OverSampling, counts of label '1': {}".format(sum(y_train == 1))) print("Before OverSampling, counts of label '0': {} \n".format(sum(y_train == 0))) cat_index = [0,1,2,3,4,5,6,7] # import SMOTE module from imblearn library # pip install imblearn (if you don't have imblearn in your system) from imblearn.over_sampling import SMOTE, SMOTENC sm = SMOTENC(categorical_features=cat_index,random_state = 2,sampling_strategy = 'minority') X_train_res, y_train_res = sm.fit_resample(X_train, y_train) This results in an error as shown below --------------------------------------------------------------------------- TypeError Traceback (most recent call last) ~\AppData\Roaming\Python\Python39\site-packages\sklearn\utils_encode.py in _unique_python(values, return_inverse) 134 --> 135 uniques = sorted(uniques_set) 136 uniques.extend(missing_values.to_list()) TypeError: '<' not supported between instances of 'str' and 'int' During handling of the above exception, another exception occurred: TypeError Traceback (most recent call last) C:\Users\SATHAP~1\AppData\Local\Temp/ipykernel_31168/1931674352.py in 6 from imblearn.over_sampling import SMOTE, SMOTENC 7 sm = SMOTENC(categorical_features=cat_index,random_state = 2,sampling_strategy = 'minority') ----> 8 X_train_res, y_train_res = sm.fit_resample(X_train, y_train) 9 10 print('After OverSampling, the shape of train_X: {}'.format(X_train_res.shape)) ~\AppData\Roaming\Python\Python39\site-packages\imblearn\base.py in fit_resample(self, X, y) 81 ) 82 ---> 83 output = self.fit_resample(X, y) 84 85 y = ( ~\AppData\Roaming\Python\Python39\site-packages\imblearn\over_sampling_smote\base.py in fit_resample(self, X, y) 511 512 # the input of the OneHotEncoder needs to be dense --> 513 X_ohe = self.ohe.fit_transform( 514 X_categorical.toarray() if sparse.issparse(X_categorical) else X_categorical 515 ) ~\AppData\Roaming\Python\Python39\site-packages\sklearn\preprocessing_encoders.py in fit_transform(self, X, y) 486 """ 487 self._validate_keywords() --> 488 return super().fit_transform(X, y) 489 490 def transform(self, X): 
~\AppData\Roaming\Python\Python39\site-packages\sklearn\base.py in fit_transform(self, X, y, **fit_params) 850 if y is None: 851 # fit method of arity 1 (unsupervised transformation) --> 852 return self.fit(X, **fit_params).transform(X) 853 else: 854 # fit method of arity 2 (supervised transformation) ~\AppData\Roaming\Python\Python39\site-packages\sklearn\preprocessing_encoders.py in fit(self, X, y) 459 """ 460 self._validate_keywords() --> 461 self.fit(X, handle_unknown=self.handle_unknown, force_all_finite="allow-nan") 462 self.drop_idx = self._compute_drop_idx() 463 return self ~\AppData\Roaming\Python\Python39\site-packages\sklearn\preprocessing_encoders.py in _fit(self, X, handle_unknown, force_all_finite) 92 Xi = X_list[i] 93 if self.categories == "auto": ---> 94 cats = _unique(Xi) 95 else: 96 cats = np.array(self.categories[i], dtype=Xi.dtype) ~\AppData\Roaming\Python\Python39\site-packages\sklearn\utils_encode.py in _unique(values, return_inverse) 29 """ 30 if values.dtype == object: ---> 31 return _unique_python(values, return_inverse=return_inverse) 32 # numerical 33 out = np.unique(values, return_inverse=return_inverse) ~\AppData\Roaming\Python\Python39\site-packages\sklearn\utils_encode.py in _unique_python(values, return_inverse) 138 except TypeError: 139 types = sorted(t.qualname for t in set(type(v) for v in values)) --> 140 raise TypeError( 141 "Encoders require their input to be uniformly " 142 f"strings or numbers. Got {types}" TypeError: Encoders require their input to be uniformly strings or numbers. Got ['int', 'str'] Should I transform y_train into categorical as well? Currently, it is int64. Help please | Cause of the problem SMOTE requires the values in each categorical/numerical column to have uniform datatype. Essentially you can not have mixed datatypes in any of the column in this case your biz_category column. Also merely casting the column to categorical type does not necessarily mean that the values in that column will have uniform datatype. Possible solution One possible solution to this problem is to re-encode the values in those columns which have mixed data types for example you could use lableencoder but I think in your case simply changing the dtype to string would also work. | 8 | 7 |
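A small self-contained sketch (not from the original post) of the suggested fix on a made-up toy frame; real data would keep the original columns, and k_neighbors is lowered only so this tiny example can run:

```python
import pandas as pd
from imblearn.over_sampling import SMOTENC

# Toy frame reproducing the problem: a categorical column mixing strings and ints,
# which the OneHotEncoder inside SMOTENC cannot sort.
X = pd.DataFrame({
    "biz_category": ["A", "B", "C", 4, 5, "A", "B", 4, "C", 5],
    "quantity":     [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
})
y = pd.Series([0, 0, 0, 0, 0, 0, 0, 1, 1, 1])

# The fix: give the mixed column one uniform dtype (string) before resampling.
X["biz_category"] = X["biz_category"].astype(str)

sm = SMOTENC(categorical_features=[0], k_neighbors=1,
             random_state=2, sampling_strategy="minority")
X_res, y_res = sm.fit_resample(X, y)
print(y_res.value_counts())   # classes are now balanced
```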
71,193,085 | 2022-2-20 | https://stackoverflow.com/questions/71193085/creating-nested-columns-in-python-dataframe | I have 3 columns namely Models(should be taken as index), Accuracy without normalization, Accuracy with normalization (zscore, minmax, maxabs, robust) and these are required to be created as: ------------------------------------------------------------------------------------ | Models | Accuracy without normalization | Accuracy with normalization | | | |-----------------------------------| | | | zscore | minmax | maxabs | robust | ------------------------------------------------------------------------------------ dfmod-> Models column dfacc-> Accuracy without normalization dfacc1-> Accuracy with normalization - zscore dfacc2-> Accuracy with normalization - minmax dfacc3-> Accuracy with normalization - maxabs dfacc4-> Accuracy with normalization - robust dfout=pd.DataFrame({('Accuracy without Normalization'):{dfacc}, ('Accuracy using Normalization','zscore'):{dfacc1}, ('Accuracy using Normalization','minmax'):{dfacc2}, ('Accuracy using Normalization','maxabs'):{dfacc3}, ('Accuracy using Normalization','robust'):{dfacc4}, },index=dfmod ) I was trying to do something like this but i can't figure out any further Test data: qda 0.6333 0.6917 0.5917 0.6417 0.5833 svm 0.5333 0.6917 0.5333 0.575 0.575 lda 0.5333 0.6583 0.5333 0.5667 0.5667 lr 0.5333 0.65 0.4917 0.5667 0.5667 dt 0.5333 0.65 0.4917 0.5667 0.5667 rc 0.5083 0.6333 0.4917 0.525 0.525 nb 0.5 0.625 0.475 0.5 0.4833 rfc 0.5 0.625 0.4417 0.4917 0.4583 knn 0.3917 0.6 0.4417 0.4833 0.45 et 0.375 0.5333 0.4333 0.4667 0.45 dc 0.375 0.5333 0.4333 0.4667 0.425 qds 0.3417 0.5333 0.4 0.4583 0.3667 lgt 0.3417 0.525 0.3917 0.45 0.3583 lt 0.2333 0.45 0.3917 0.4167 0.3417 These are values for respective subcolumns in order specified in the table above | There's a dirty way to do this, I'll write about it till someone answers with a better idea. Here we go: import pandas as pd # I assume that you can read raw data named test.csv by pandas and # set header = None cause you mentioned the Test data without any headers, so: df = pd.read_csv("test.csv", header = None) # Then define preferred Columns! MyColumns = pd.MultiIndex.from_tuples([("Models" , ""), ("Accuracy without normalization" , ""), ("Accuracy with normalization" , "zscore"), ("Accuracy with normalization" , "minmax"), ("Accuracy with normalization" , "maxabs"), ("Accuracy with normalization" , "robust")]) # Create new DataFrame with specified Columns, after this you should pass values New_DataFrame = pd.DataFrame(df , columns = MyColumns) # a loop for passing values for item in range(len(MyColumns)): New_DataFrame.loc[: , MyColumns[item]] = df.iloc[: , item] This gives me: after all, if you want to set Models as the index of New_DataFrame, You can continue with: New_DataFrame.set_index(New_DataFrame.columns[0][0] , inplace=True) New_DataFrame This gives me: | 6 | 3 |
71,191,907 | 2022-2-20 | https://stackoverflow.com/questions/71191907/no-module-named-x-main-x-is-a-package-and-cannot-be-directly-executed-w | I have this CLI tool called Rackfocus. I've published it to PyPI, and I'm reasonably sure it worked just fine before. When I try to run it with current versions of Python on Mac, I get the error: No module named rackfocus.__main__; 'rackfocus' is a package and cannot be directly executed. All I want is one package with one entry point that users can download and use with pip. Based on tutorials, I have this in setup.py:

packages=['rackfocus']
entry_points = {
    'console_scripts': [
        'rackfocus=rackfocus.run:main'
    ]
}

And I have a rackfocus.run:main function, an __init__.py and everything. What's wrong? You can reproduce this locally: clone my repo, create and activate a virtualenv (optional), then run pip3 install -e . followed by python3 -m rackfocus. |
entry_points = {
    'console_scripts': [
        'rackfocus=rackfocus.run:main'
    ]
}

This tells the packaging system to create a wrapper executable named rackfocus. That executable will automatically handle all the necessary steps to get Python off the ground, find the run module in the rackfocus package, find its main function and call it. You run the executable like rackfocus (if you are using a virtual environment, it should be on the path already), not python -m rackfocus. Using python -m rackfocus is completely unrelated to that (it doesn't even have anything to do with packaging, and can easily be used with code that hasn't been installed yet). It doesn't use the wrapper; instead, it simply attempts to execute the rackfocus module. But in your case, rackfocus isn't a module; it's a package. The error message means exactly what it says. You would want python -m rackfocus.run to execute the run module - but of course, that still doesn't actually call main() (just like it wouldn't with python rackfocus/main.py - though the -m approach is more powerful; in particular, it allows your relative imports to work). The error message says rackfocus.__main__ because you can make a package runnable by giving it a __main__ module. | 13 | 11 |
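If you also want python -m rackfocus to work, the usual approach (a sketch; the module and function names are taken from the question's entry point) is to add a __main__.py to the package:

```python
# rackfocus/__main__.py
from rackfocus.run import main

if __name__ == "__main__":
    main()
```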
71,186,546 | 2022-2-19 | https://stackoverflow.com/questions/71186546/python-call-generator-function-from-other-function | For the below code # A simple generator function def infinite_sequence(): num = 1 while True: yield num num += 1 aaa = infinite_sequence() bbb = infinite_sequence() ccc = infinite_sequence() print(next(aaa)) print(next(aaa)) print(next(bbb)) print(next(bbb)) print(next(ccc)) print(next(ccc)) the output is: 1 2 1 2 1 2 When trying to call the same generator functions from another function, the output is not as expected def switchAction(input): action_item = { "A": next(aaa), "B": next(bbb), "C": next(ccc) } return action_item.get(input) print(switchAction("A")) print(switchAction("A")) print(switchAction("B")) print(switchAction("B")) print(switchAction("C")) print(switchAction("C")) The output is: 1 2 3 4 5 6 Why the counter is continuing across generator functions in the case of calling from another function? How to achieve the same output as that of first case for the second case above. | The problem is you call next on all values every time you call switchAction, since you define the dict over and over again. A solution to your problem can be as follows: # A simple generator function def infinite_sequence(): num = 1 while True: yield num num += 1 aaa = infinite_sequence() bbb = infinite_sequence() ccc = infinite_sequence() action_item = { "A": aaa, "B": bbb, "C": ccc } # create the dict one time, and don't call next within the definition. def switchAction(name): # also input is a dangerous var name so change it to name return next(action_item.get(name)) print(switchAction("A")) print(switchAction("A")) print(switchAction("B")) print(switchAction("B")) print(switchAction("C")) print(switchAction("C")) | 5 | 3 |
71,184,380 | 2022-2-19 | https://stackoverflow.com/questions/71184380/how-to-create-typing-literal-from-multiple-lists-of-values-in-python | I have two lists. I want to create a Literal using both these lists category1 = ["image/jpeg", "image/png"] category2 = ["application/pdf"] SUPPORTED_TYPES = typing.Literal[category1 + category2] Is there any way to do this? I have seen the question typing: Dynamically Create Literal Alias from List of Valid Values but this doesnt work for my use case because I dont want mimetype to be of type typing.Tuple. I will be using the Literal in a function - def process_file(filename: str, mimetype: SUPPORTED_TYPES) What I have tried - supported_types_list = category1 + category2 SUPPORTED_TYPES = Literal[supported_types_list] SUPPORTED_TYPES = Literal[*supported_types_list] # this gives 2 different literals, rather i want only 1 literal SUPPORTED_TYPES = Union[Literal["image/jpeg", "image/png"], Literal["application/pdf"]] | Use the same technique as in the question you linked: build the lists from the literal types, instead of the other way around: SUPPORTED_IMAGE_TYPES = typing.Literal["image/jpeg", "image/png"] SUPPORTED_OTHER_TYPES = typing.Literal["application/pdf"] SUPPORTED_TYPES = typing.Literal[SUPPORTED_IMAGE_TYPES, SUPPORTED_OTHER_TYPES] category1 = list(typing.get_args(SUPPORTED_IMAGE_TYPES)) category2 = list(typing.get_args(SUPPORTED_OTHER_TYPES)) The only part of this that wasn't already covered in the other answer is SUPPORTED_TYPES = typing.Literal[SUPPORTED_IMAGE_TYPES, SUPPORTED_OTHER_TYPES], which, yeah, you can do that. It's equivalent to your original definition of SUPPORTED_TYPES. | 9 | 12 |
71,180,148 | 2022-2-18 | https://stackoverflow.com/questions/71180148/fastapi-and-slowapi-limit-request-under-all-path | I'm having a problem with SlowAPI. All requests are limited according to the middleware, but I cannot manage to jointly limit all requests under the path /schools/ My code: from fastapi import FastAPI, Request, Response, status from fastapi.middleware.cors import CORSMiddleware from slowapi import Limiter, _rate_limit_exceeded_handler from slowapi.util import get_remote_address from slowapi.errors import RateLimitExceeded from slowapi.middleware import SlowAPIMiddleware limiter = Limiter(key_func=get_remote_address, default_limits=["2/5seconds"]) app = FastAPI() app.state.limiter = limiter app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler) origins = ["http://127.0.0.1/", "http://localhost", "http://192.168.1.75"] ## CORS app.add_middleware( CORSMiddleware, allow_origins=origins, allow_credentials=True, allow_methods=["*"], allow_headers=["*"], ) app.add_middleware(SlowAPIMiddleware) ## Rate-limit all request @app.get('/schools/{regione}/{provincia}/{comune}') def search_school(request: Request, response: Response, regione: str, provincia: str, comune: str): return {"message": 'No schools found!', "status": 'error', "code": 200} ## Or if found return schools informations @app.get('/testpath/{regione}') ## Works with one path. If I add "provincia" and "comune" non work def search_school(request: Request, response: Response, regione: str, provincia: str, comune: str): return {"message": 'No schools found!', "status": 'error', "code": 200} ## Or if found return schools informations When i send a request to /schools/{region}/{province}/{city} with jQuery the whole url is limited and therefore if I change region or province the limits are reset. How can I make myself apply settings for /schools/* Example: 2 request every 5 seconds If i send to request to apiURL+/schools/Lombardy/Milan/Milan the limit increases by 1 and if i made anothe 2 request at the third I get blocked. But if instead of making it to the same domain, I change the city (apiURL+/schools/Sicily/Palermo/Palermo), the limit resets and returns to 1 | Option 1 Define application_limits when instantiating the Limiter class, as shown below. As per the documentation, application_limits: a variable list of strings or callables returning strings for limits that are applied to the entire application (i.e., a shared limit for all routes) Thus, the below would apply a shared limit to all /schools/* routes, as well as any other route that might be in your application (e.g., /testpath/*, /some-other-route/, and so on), meaning that, only two requests per 5 seconds would go through by each client (regardless of the endpoint they would call). limiter = Limiter(key_func=get_remote_address, application_limits=["2/5seconds"]) Option 2 Apply a shared limit only to the endpoints you wish, using shared_limit. As per the documentation: shared_limit: Decorator to be applied to multiple routes sharing the same rate limit. Thus, the below would apply a shared limit only to /schools/* routes. limiter = Limiter(key_func=get_remote_address, default_limits=["2/5seconds"]) @app.get('/schools/{regione}/{provincia}/{comune}') @limiter.shared_limit(limit_value="2/5seconds", scope="schools") def search_school(request: Request, response: Response, regione: str, provincia: str, comune: str): return {"message": 'No schools found!', "status": 'error', "code": 200} | 5 | 6 |
71,175,486 | 2022-2-18 | https://stackoverflow.com/questions/71175486/how-to-get-caller-name-inside-pytest-fixture | Assume we have: @pytest.fixture() def setup(): print('All set up!') return True def foo(setup): print('I am using a fixture to set things up') setup_done=setup I'm looking for a way to get to know caller function name (in this case: foo) from within setup fixture. So far I have tried: import inspect @pytest.fixture() def setup(): daddy_function_name = inspect.stack()[1][3] print(daddy_function_name) print('All set up!') return True But what gets printed is: call_fixture_func How do I get foo from printing daddy_function_name? | You can use the built-in request fixture in your own fixture: The request fixture is a special fixture providing information of the requesting test function. Its node attribute is the Underlying collection node (depends on current request scope). import pytest @pytest.fixture() def setup(request): return request.node.name def test_foo(setup): assert setup == "test_foo" | 5 | 8 |
71,174,306 | 2022-2-18 | https://stackoverflow.com/questions/71174306/expected-in-usr-lib-libc-1-dylib-installing-tensorflow-on-m1-macbook-pro | I am trying to install Tensorflow on my MacBook Pro with the M1 chip. The operating system of my MacBook is MacOS Big Sur Version 11.0. In order to install Tensorflow to use it with Python, I have followed this tutorial, which says that I have to do the following: Install Homebrew. Download MiniForge3 for macOS arm64 chips (link provided in the webpage). Install MiniForge3 using: chmod +x ~/Downloads/Miniforge3-MacOSX-arm64.sh sh ~/Downloads/Miniforge3-MacOSX-arm64.sh source ~/miniforge3/bin/activate Create a folder to set up an environment for Tensorflow. mkdir tensorflow-test cd tensorflow-test Make and activate Conda environment. conda create --prefix ./env python=3.9.7 conda activate ./env Install Tensorflow dependencies. conda install -c apple tensorflow-deps python -m pip install tensorflow-macos python -m pip install tensorflow-metal After this, I open a Jupyter Notebook and I try to import tensorflow, but this error shows up: OSError: dlopen(/Users/blancoarnau/tensorflow-test/env/lib/python3.9/site-packages/tensorflow/python/platform/../../core/platform/_cpu_feature_guard.so, 6): Symbol not found: __ZNKSt3__115basic_stringbufIcNS_11char_traitsIcEENS_9allocatorIcEEE3strEv Referenced from: /Users/blancoarnau/tensorflow-test/env/lib/python3.9/site-packages/tensorflow/python/platform/../../core/platform/_cpu_feature_guard.so (which was built for Mac OS X 12.3) Expected in: /usr/lib/libc++.1.dylib As you can see in this screenshot: Do you have an idea why this is happening? | check the message details: (which was built for Mac OS X 12.3) you need to upgrade macOS to 12.3 | 5 | 6 |
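To double-check which macOS version Python sees before upgrading (the wheel in the error was built for 12.3):

```python
import platform

# Needs to report 12.3 or newer for the tensorflow-macos wheel above to load
print(platform.mac_ver()[0])
```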
71,171,777 | 2022-2-18 | https://stackoverflow.com/questions/71171777/python-pip3-cannot-install-zoneinfo-on-linux-debian | How can zoneinfo be installed on a Linux Debian 10 machine? Our script is working just fine on Mac. When pushed to Linux Debian and run, the script returns the error:

myemail@repo-name:~/path-to/mainfolder$ python3 main_cbb_v2.py
Traceback (most recent call last):
  File "main_cbb_v2.py", line 3, in <module>
    from utils import *
  File "/home/pathto-utils/utils.py", line 16, in <module>
    from zoneinfo import ZoneInfo
ModuleNotFoundError: No module named 'zoneinfo'

and when we attempt to install the library, we get the error:

pip3 install zoneinfo
Collecting zoneinfo
Could not install packages due to an EnvironmentError: 404 Client Error: Not Found for url: https://pypi.org/simple/zoneinfo/

and we get this same exact error even if sudo su is run beforehand. And if the backports prefix is used:

pip3 install backports.zoneinfo
Requirement already satisfied: backports.zoneinfo in /usr/local/lib/python3.7/dist-packages (0.2.1)

How can this be troubleshot further? | zoneinfo is new in Python 3.9, so the underlying issue is probably that you have different Python versions on different systems. You can either upgrade your Python version or use the backports module which you already have installed, but then your code needs to be: from backports.zoneinfo import ZoneInfo | 5 | 18 |
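If the same code has to run on both machines (Python 3.9+ on the Mac, 3.7 on the Debian box), one common compatibility pattern is:

```python
try:
    from zoneinfo import ZoneInfo            # Python 3.9+
except ImportError:
    from backports.zoneinfo import ZoneInfo  # older Pythons with backports.zoneinfo installed
```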
71,169,948 | 2022-2-18 | https://stackoverflow.com/questions/71169948/where-does-pytests-mocker-come-from | I am following this mini-tutorial/blog on pytest-mock. I can not understand how the mocker is working since there is no import for it - in particular the function declaration def test_mocking_constant_a(mocker): import mock_examples.functions from mock_examples.functions import double def test_mocking_constant_a(mocker): mocker.patch.object(mock_examples.functions, 'CONSTANT_A', 2) expected = 4 actual = double() # now it returns 4, not 2 assert expected == actual Somehow the mocker has the attributes/functions of pytest-mocker.mocker: in particular mocker.patch.object . But how can that be without the import statement? | The mocker variable is a Pytest fixture. Rather than using imports, fixtures are supplied using dependency injection - that is, Pytest takes care of creating the mocker object for you and supplies it to the test function when it runs the test. Pytest-mock defines the "mocker" fixture here, using the Pytest fixture decorator. Here, the fixture decorator is used as a regular function, which is a slightly unusual way of doing it. A more typical way of using the fixture decorator would look something like this: @pytest.fixture() def mocker(pytestconfig: Any) -> Generator[MockerFixture, None, None]: """ Return an object that has the same interface to the `mock` module, but takes care of automatically undoing all patches after each test method. """ result = MockerFixture(pytestconfig) yield result result.stopall() The fixture decorator registers the "mocker" function with Pytest, and when Pytest runs a test with a parameter called "mocker", it inserts the result of the "mocker" function for you. Pytest can do this because it uses Python's introspection features to view the list of arguments, complete with names, before calling the test function. It compares the names of the arguments with names of fixtures that have been registered, and if the names match, it supplies the corresponding object to that parameter of the test function. | 5 | 4 |
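To see the name-based injection in isolation, a tiny self-contained sketch (hypothetical fixture name) that runs under pytest:

```python
import pytest

@pytest.fixture
def greeting():
    return "hello"

def test_greeting(greeting):
    # pytest sees the parameter name "greeting", finds the registered fixture
    # with the same name, calls it, and passes its return value in here.
    assert greeting == "hello"
```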
71,162,169 | 2022-2-17 | https://stackoverflow.com/questions/71162169/how-to-find-svg-element-with-selenium | I'm building a python Instagram bot, and I'm trying to get it to click on the DMs icon, but I'm not sure how to select it. I tried selecting by Xpath, but I can't seem to be able to navigate it to the icon. Here's Instagram's html code for the DMs icon: <svg aria-label="Messenger" class="_8-yf5 " color="#262626" fill="#262626" height="24" role="img" viewBox="0 0 24 24" width="24"> Any help is appreciated. | You have to apply a slightly different locator strategy to find svg. Here is what works: driver.find_element(By.XPATH, "//*[name()='svg']") assuming that this is the only svg element (as provided in your query) Combination of more than one attribute from the same DOM line: //*[name()='svg' and @aria-label='Messenger'] | 5 | 10 |
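A usage sketch of the combined locator (assumes driver is an already-logged-in Instagram session; if clicking the svg itself does nothing, click an enclosing link or button instead):

```python
from selenium.webdriver.common.by import By

# Locate the Messenger icon by tag name plus its aria-label attribute
dm_icon = driver.find_element(By.XPATH, "//*[name()='svg' and @aria-label='Messenger']")
dm_icon.click()
```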
71,146,287 | 2022-2-16 | https://stackoverflow.com/questions/71146287/how-token-sort-ratio-works | Can someone explain me how this function of the library fuzzywuzzy in Python works? I know how the Levenshtein distance works but I don't understand how the ratio is computed. b = fuzz.token_sort_ratio('controlled', 'comparative') The result is 38 | Levenshtein distance As you probably already know the Levenshtein distance is the minimum amount of insertions / deletions / substitutions to convert one sequence into another sequence. It can be normalized as dist / max_dist, where max_dist is the maximum distance possible given the two sequence lengths. In the case of the Levenshtein distance this results in the normalization dist / max(len(s1), len(s2)). In addition a normalized similarity can be calculated by inverting this: 1 - normalized distance. >>> from rapidfuzz.distance import Levenshtein >>> Levenshtein.distance('controlled', 'comparative') 8 >>> Levenshtein.similarity('controlled', 'comparative') 3 >>> Levenshtein.normalized_distance('controlled', 'comparative') 0.7272727272727273 >>> Levenshtein.normalized_similarity('controlled', 'comparative') 0.2727272727272727 Indel distance The Indel distance is the minimum amount of insertions / deletions to convert one sequence into another sequence. So it behaves similar to the Levenshtein distance, but does not allow substitutions. Since any substitution can be replaced by a insertion + deletion this can be calculated e.g. by modifying the cost of substitutions in the Levenshtein distance to 2. The Indel distance can be normalized in a similar way to the Levenshtein distance, but uses a different max_dist: dist / (len(s1) + len(s2)). >>> from rapidfuzz.distance import Indel >>> Indel.distance('controlled', 'comparative') 13 >>> Indel.similarity('controlled', 'comparative') 8 >>> Indel.normalized_distance('controlled', 'comparative') 0.6190476190476191 >>> Indel.normalized_similarity('controlled', 'comparative') 0.38095238095238093 ratio The ratio in fuzzywuzzy/thefuzz/rapidfuzz is the normalized indel similarity scaled to 100. >>> from rapidfuzz.distance import Indel >>> from rapidfuzz import fuzz >>> Indel.normalized_similarity('controlled', 'comparative') * 100 38.095238095238095 >>> fuzz.ratio('controlled', 'comparative') 38.095238095238095 The only difference in fuzzywuzzy/thefuzz is, that results are rounded: >>> from fuzzywuzzy import fuzz >>> fuzz.ratio('controlled', 'comparative') 38 token_sort_ratio token_sort_ratio is a variant of ratio, which sorts the words in both sequences before comparing them: s1 = " ".join(sorted(s1.split())) s2 = " ".join(sorted(s2.split())) fuzz.ratio(s1, s2) In your example token_sort_ratio will have the same result as ratio, since both sequences are already sorted. | 5 | 11 |
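Since 'controlled' and 'comparative' are single tokens, sorting changes nothing there; with multi-word strings the difference between the two scorers shows up:

```python
from rapidfuzz import fuzz

print(fuzz.ratio("new york mets", "mets new york"))             # order-sensitive, well below 100
print(fuzz.token_sort_ratio("new york mets", "mets new york"))  # 100.0 after sorting the tokens
```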
71,151,966 | 2022-2-17 | https://stackoverflow.com/questions/71151966/python-grammar-correct-in-a-loop | I was doing the following exercise: A gymnast can earn a score between 1 and 10 from each judge. Print out a series of sentences, "A judge can give a gymnast _ points." Don't worry if your first sentence reads "A judge can give a gymnast 1 points." However, you get 1000 bonus internet points if you can use a for loop, and have correct grammar. My code is this:

scores = tuple(range(1, 11))
for score in scores:
    print(f'A judge can give a gymnast {score} points.')

And my output of course was:

A judge can give a gymnast 1 points.
A judge can give a gymnast 2 points.
A judge can give a gymnast 3 points.
A judge can give a gymnast 4 points.
A judge can give a gymnast 5 points.
A judge can give a gymnast 6 points.
A judge can give a gymnast 7 points.
A judge can give a gymnast 8 points.
A judge can give a gymnast 9 points.
A judge can give a gymnast 10 points.

The question is: how can I correct the grammar in my loop? I mean, when the score is 1, how can I make my loop say "1 point" and not "1 points", while keeping "points" for the other scores? | Here's another way of doing it. The answer above may be more efficient (I don't know), but in my opinion this is more readable for future developers.

scores = tuple(range(1, 11))
for score in scores:
    if score == 1:
        print('A judge can give a gymnast 1 point.')
    else:
        print(f'A judge can give a gymnast {score} points.') | 5 | 2 |
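A more compact variant with a single print call (same output, purely a stylistic alternative):

```python
for score in range(1, 11):
    plural = "point" if score == 1 else "points"
    print(f"A judge can give a gymnast {score} {plural}.")
```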
71,151,093 | 2022-2-17 | https://stackoverflow.com/questions/71151093/using-constant-variables-in-python-match-statement | I am writing a program in Python and want to use the match-case statement introduced in Python 3.10 to switch on 'enum' values, i.e.:

match token.type:
    case TOK_INT:     # 0
        # do stuff
    case TOK_STRING:  # 1
        # do stuff
    etc...

However, trying to do this causes Python to throw a SyntaxError: name capture 'TOK_INT' makes remaining patterns unreachable error. I know I can fix it by manually typing the number associated with each 'enum' variable, but I am wondering if there is a better way of going about this? | Patterns written as a dotted name (like math.pi) are treated as value patterns and will work. However, an unqualified name (i.e. a bare name with no dots) will always be interpreted as a capture pattern, so avoid that ambiguity by always using qualified constants in patterns; see PEP 636 (the structural pattern matching tutorial) for details. | 5 | 4 |
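For example, one way to get qualified names (a sketch that regroups the question's constants under a class; an enum.IntEnum would work the same way):

```python
class Tok:
    INT = 0
    STRING = 1

def describe(token_type):
    match token_type:
        case Tok.INT:      # dotted name -> value pattern, compared with ==
            return "integer token"
        case Tok.STRING:
            return "string token"
        case _:
            return "unknown token"

print(describe(0))  # -> integer token
```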
71,146,740 | 2022-2-16 | https://stackoverflow.com/questions/71146740/fastapi-create-auth-for-all-endpoints | I followed this documentation to setup up a single user: https://fastapi.tiangolo.com/advanced/security/http-basic-auth/ But I only get prompted for user/pass for that one end point, "/users/me". How do I ensure that all endpoints are behind auth? | You can configure FastAPI with a set of dependencies that needs to be resolved for any endpoint by giving the paramter directly when creating the FastAPI application (i.e. global dependencies): security = HTTPBasic() app = FastAPI(dependencies=[Depends(security)]) If you want some endpoints to be authenticated and some to be unauthenticated, you can create separate instances of APIRouter, then assign required dependencies to the one that require authentication: unauthenticated_router = APIRouter() authenticated_router = APIRouter(dependencies=[Depends(security)]) .. and then either include other routers under each (using .include_router) or register endpoints as you'd do with the app object - but instead use your two routers. | 9 | 15 |
71,148,612 | 2022-2-16 | https://stackoverflow.com/questions/71148612/type-hinting-list-of-strings | In Python, if I am writing a function, is this the best way to type hint a list of strings?

def sample_def(var: list[str]):

| I would use the typing module:

from typing import List

def foo(bar: List[str]):
    pass

The reason is typing contains so many type hints and the ability to create your own, specify callables, etc. Definitely check it out. Edit: as of Python 3.9 the typing.List-style aliases are deprecated (PEP 585), so writing the built-in list[str] directly, as in your example, is fine, and you can also use the generic containers from collections.abc. So you can do this if you're on Python 3.9+:

from collections.abc import Iterable

def foo(bar: Iterable[str]):
    pass

You can take a look at https://docs.python.org/3/library/collections.abc.html for a list of ABCs that might fit your need. For instance, it might make more sense to specify Sequence[str] depending on your needs of that function. | 9 | 16 |
71,147,799 | 2022-2-16 | https://stackoverflow.com/questions/71147799/create-new-boolean-fields-based-on-specific-bigrams-appearing-in-a-tokenized-pan | Looping over a list of bigrams to search for, I need to create a boolean field for each bigram according to whether or not it is present in a tokenized pandas series. And I'd appreciate an upvote if you think this is a good question! List of bigrams: bigrams = ['data science', 'computer science', 'bachelors degree'] Dataframe: df = pd.DataFrame(data={'job_description': [['data', 'science', 'degree', 'expert'], ['computer', 'science', 'degree', 'masters'], ['bachelors', 'degree', 'computer', 'vision'], ['data', 'processing', 'science']]}) Desired Output: job_description data science computer science bachelors degree 0 [data, science, degree, expert] True False False 1 [computer, science, degree, masters] False True False 2 [bachelors, degree, computer, vision] False False True 3 [data, bachelors, science] False False False Criteria: Only exact matches should be replaced (for example, flagging for 'data science' should return True for 'data science' but False for 'science data' or 'data bachelors science') Each search term should get it's own field and be concatenated to the original df What I've tried: Failed: df = [x for x in df['job_description'] if x in bigrams] Failed: df[bigrams] = [[any(w==term for w in lst) for term in bigrams] for lst in df['job_description']] Failed: Could not adapt the approach here -> Match trigrams, bigrams, and unigrams to a text; if unigram or bigram a substring of already matched trigram, pass; python Failed: Could not get this one to adapt, either -> Compare two bigrams lists and return the matching bigram Failed: This method is very close, but couldn't adapt it to bigrams -> Create new boolean fields based on specific terms appearing in a tokenized pandas dataframe Thanks for any help you can provide! | You could also try using numpy and nltk, which should be quite fast: import pandas as pd import numpy as np import nltk bigrams = ['data science', 'computer science', 'bachelors degree'] df = pd.DataFrame(data={'job_description': [['data', 'science', 'degree', 'expert'], ['computer', 'science', 'degree', 'masters'], ['bachelors', 'degree', 'computer', 'vision'], ['data', 'processing', 'science']]}) def find_bigrams(data): output = np.zeros((data.shape[0], len(bigrams)), dtype=bool) for i, d in enumerate(data): possible_bigrams = [' '.join(x) for x in list(nltk.bigrams(d)) + list(nltk.bigrams(d[::-1]))] indices = np.where(np.isin(bigrams, list(set(bigrams).intersection(set(possible_bigrams))))) output[i, indices] = True return list(output.T) output = find_bigrams(df['job_description'].to_numpy()) df = df.assign(**dict(zip(bigrams, output))) | | job_description | data science | computer science | bachelors degree | |---:|:----------------------------------------------|:---------------|:-------------------|:-------------------| | 0 | ['data', 'science', 'degree', 'expert'] | True | False | False | | 1 | ['computer', 'science', 'degree', 'masters'] | False | True | False | | 2 | ['bachelors', 'degree', 'computer', 'vision'] | False | False | True | | 3 | ['data', 'processing', 'science'] | False | False | False | | 5 | 3 |
71,138,693 | 2022-2-16 | https://stackoverflow.com/questions/71138693/running-one-python-script-within-another-script-using-subprocess | I am working on a script to walk over a directory, and convert all the python2 files to python3. There is a utitliy (2to3.py) to acheive that. ( I am using python2.7 interpreter) I have the following code: import os import subprocess import pathlib APP_FOLDER = 'C:/Users/XXXX/Test/' for dirpath, dirnames, filenames in os.walk(APP_FOLDER): for inputFile in filenames: if pathlib.Path(inputFile).suffix == ".py": file_path = os.path.join(dirpath, inputFile) with open(file_path) as f: num_lines = len(f.readlines()) print dirpath print inputFile print "lines of code: ", num_lines print "converting to python 3" cmd ="py C:\Python27\Tools\Scripts\\2to3.py "+inputFile+" -w" p1=subprocess.Popen(cmd,shell=True) p1.wait() if p1.returncode ==0: print "Success" else: print "failure" At the path mentioned, I have a subdirectory named "FolderA" and within that there is a simple python file having the syntax of python2 and a division operation. I receive an error as below: C:/Users/XXXX/Test/FolderA Sample.py lines of code: 7 converting to python 3 RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Can't open Sample.py: [Errno 2] No such file or directory: 'Sample.py' RefactoringTool: No files need to be modified. RefactoringTool: There was 1 error: RefactoringTool: Can't open Sample.py: [Errno 2] No such file or directory: 'Sample.py' failure -------------------*************-------------------- What am I doing wrong here? | Try using, cmd ="py C:\Python27\Tools\Scripts\\2to3.py "+file_path+" -w" | 5 | 3 |
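This works because file_path is the absolute path built with os.path.join, so 2to3 no longer depends on the subprocess's working directory. A variant that also skips shell=True (a sketch; file_path comes from the loop in the question):

```python
import subprocess

cmd = ["py", r"C:\Python27\Tools\Scripts\2to3.py", file_path, "-w"]
result = subprocess.run(cmd)  # passing a list avoids shell quoting issues
print("Success" if result.returncode == 0 else "failure")
```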
71,102,658 | 2022-2-13 | https://stackoverflow.com/questions/71102658/how-can-i-return-a-numpy-array-using-fastapi | I have a TensorFlow Keras deep learning model in the form of an h5 file. How can I upload an image and return a NumPy array in FastAPI? import numpy as np import cv2 from fastapi import FastAPI, File, UploadFile import numpy as np from tensorflow.keras.models import load_model import tensorflow as tf model=load_model("complete_model.h5") app = FastAPI() def prepare(image): IMG_SIZE = 224 new_array = cv2.resize(image, (IMG_SIZE, IMG_SIZE)) return new_array.reshape(-1, IMG_SIZE,IMG_SIZE,3) @app.post("/") async def root(file: UploadFile = File(...)): global model content = await file.read() nparr = np.fromstring(content, np.uint8) img = cv2.imdecode(nparr, cv2.IMREAD_COLOR).astype(np.float32) prediction = model.predict(prepare(img)) return prediction When uploading the image using Swagger UI, I get the following error: line 137, in jsonable_encoder data = dict(obj) TypeError: 'numpy.float32' object is not iterable Working code without FastAPI: import numpy as np import numpy as np from tensorflow.keras.models import load_model import tensorflow as tf import cv2 model=load_model("complete_model.h5") def prepare(image): IMG_SIZE = 224 new_array = cv2.resize(image, (IMG_SIZE, IMG_SIZE)) return new_array.reshape(-1, IMG_SIZE,IMG_SIZE,3) img = cv2.imread("./test.jpeg").astype(np.float32) prediction = model.predict(prepare(img)) print(prediction) Result in the terminal: [[0.25442022 0.74557984]] How can I get the same result while using FastAPI? | The error is thrown when returning the response (i.e., prediction in your case) from your endpoint. It looks like FastAPI is trying to convert the NumPy array into a dict, using the jsonable_encoder, which is used internally by FastAPI when returning a value from an endpoint, and which seems to call Python's vars() method, as shown in the error you provided here (have a look at the discussion here, as well as the documentation). Thus, what you could do is to convert the NumPy array into a Python list, then serialize it into a JSON string and return it: return json.dumps(prediction.tolist()) Note that instead of returning the JSON string in the way it is shown above, which would casue FastAPI to serialize it again behind the scenes, you might consider returning a custom Response directly, as demonstrated here, as well as here and here. On Swagger UI /docs, you should still be able to see the expected result. However, if you needed to convert it back to a NumPy array, you could parse the JSON string in Python, as shown below. arr = np.asarray(json.loads(resp.json())) # resp.json() if using Python requests If you would like to return the NumPy array as raw bytes and display the image in the browser or download it, have a look at this answer. | 6 | 7 |
71,106,690 | 2022-2-14 | https://stackoverflow.com/questions/71106690/polars-specify-dtypes-for-all-columns-at-once-in-read-csv | In Polars, how can one specify a single dtype for all columns in read_csv? According to the docs, the schema_overrides argument to read_csv can take either a mapping (dict) in the form of {'column_name': dtype}, or a list of dtypes, one for each column. However, it is not clear how to specify "I want all columns to be a single dtype". If you wanted all columns to be String for example and you knew the total number of columns, you could do: pl.read_csv('sample.csv', schema_overrides=[pl.String]*number_of_columns) However, this doesn't work if you don't know the total number of columns. In Pandas, you could do something like: pd.read_csv('sample.csv', dtype=str) But this doesn't work in Polars. | Reading all data in a csv to any other type than pl.String likely fails with a lot of null values. We can use expressions to declare how we want to deal with those null values. If you read a csv with infer_schema_length=0, polars does not know the schema and will read all columns as pl.String as that is a super type of all polars types. When read as String we can use expressions to cast all columns. (pl.read_csv("test.csv", infer_schema_length=0) .with_columns(pl.all().cast(pl.Int32, strict=False)) Update: infer_schema=False was added in 1.2.0 as a more user-friendly name for this feature. pl.read_csv("test.csv", infer_schema=False) # read all as pl.String | 15 | 21 |
71,103,393 | 2022-2-13 | https://stackoverflow.com/questions/71103393/fastapi-swagger-ui-does-not-render-because-of-custom-middleware | So I have a custom middleware like this: Its objective is to add some meta_data fields to every response from all endpoints of my FastAPI app. @app.middelware("http") async def add_metadata_to_response_payload(request: Request, call_next): response = await call_next(request) body = b"" async for chunk in response.body_iterator: body+=chunk data = {} data["data"] = json.loads(body.decode()) data["metadata"] = { "some_data_key_1": "some_data_value_1", "some_data_key_2": "some_data_value_2", "some_data_key_3": "some_data_value_3" } body = json.dumps(data, indent=2, default=str).encode("utf-8") return Response( content=body, status_code=response.status_code, media_type=response.media_type ) However, when I served my app using uvicorn, and launched the swagger URL, here is what I see: Unable to render this definition The provided definition does not specify a valid version field. Please indicate a valid Swagger or OpenAPI version field. Supported version fields are Swagger: "2.0" and those that match openapi: 3.0.n (for example, openapi: 3.0.0) With a lot of debugging, I found that this error was due to the custom middleware and specifically this line: body = json.dumps(data, indent=2, default=str).encode("utf-8") If I simply comment out this line, swagger renders just fine for me. However, I need this line for passing the content argument in Response from Middleware. How to sort this out? UPDATE: I tried the following: body = json.dumps(data, indent=2).encode("utf-8") by removing default arg, the swagger did successfully load. But now when I hit any of the APIs, here is what swagger tells me along with response payload on screen: Unrecognised response type; displaying content as text More Updates (6th April 2022): Got a solution to fix 1 part of the problem by Chris, but the swagger wasn't still loading. The code was hung up in the middleware level indefinitely and the page was not still loading. So, I found in all these places: https://github.com/encode/starlette/issues/919 Blocked code while using middleware and dependency injections to log requests in FastAPI(Python) https://github.com/tiangolo/fastapi/issues/394 that this way of adding custom middleware works by inheriting from BaseHTTPMiddleware in Starlette and has its own issues (something to do with awaiting inside middleware, streamingresponse and normal response, and the way it is called). I don't understand it yet. | Here's how you could do that (inspired by this). Make sure to check the Content-Type of the response (as shown below), so that you can modify it by adding the metadata, only if it is of application/json type. For the OpenAPI (Swagger UI) to render (both /docs and /redoc), make sure to check whether openapi key is not present in the response, so that you can proceed modifying the response only in that case. If you happen to have a key with such a name in your response data, then you could have additional checks using further keys that are present in the response for the OpenAPI, e.g., info, version, paths, and, if needed, you can check against their values too. 
from fastapi import FastAPI, Request, Response import json app = FastAPI() @app.middleware("http") async def add_metadata_to_response_payload(request: Request, call_next): response = await call_next(request) content_type = response.headers.get('Content-Type') if content_type == "application/json": response_body = [section async for section in response.body_iterator] resp_str = response_body[0].decode() # converts "response_body" bytes into string resp_dict = json.loads(resp_str) # converts resp_str into dict #print(resp_dict) if "openapi" not in resp_dict: data = {} data["data"] = resp_dict # adds the "resp_dict" to the "data" dictionary data["metadata"] = { "some_data_key_1": "some_data_value_1", "some_data_key_2": "some_data_value_2", "some_data_key_3": "some_data_value_3"} resp_str = json.dumps(data, indent=2) # converts dict into JSON string return Response(content=resp_str, status_code=response.status_code, media_type=response.media_type) return response @app.get("/") def foo(request: Request): return {"hello": "world!"} Update 1 Alternatively, a likely better approach would be to check for the request's url path at the start of the middleware function (against a pre-defined list of paths/routes that you would like to add metadata to their responses), and proceed accordingly. Example is given below. from fastapi import FastAPI, Request, Response, Query from pydantic import constr from fastapi.responses import JSONResponse import re import uvicorn import json app = FastAPI() routes_with_middleware = ["/"] rx = re.compile(r'^(/items/\d+|/courses/[a-zA-Z0-9]+)$') # support routes with path parameters my_constr = constr(regex="^[a-zA-Z0-9]+$") @app.middleware("http") async def add_metadata_to_response_payload(request: Request, call_next): response = await call_next(request) if request.url.path not in routes_with_middleware and not rx.match(request.url.path): return response else: content_type = response.headers.get('Content-Type') if content_type == "application/json": response_body = [section async for section in response.body_iterator] resp_str = response_body[0].decode() # converts "response_body" bytes into string resp_dict = json.loads(resp_str) # converts resp_str into dict data = {} data["data"] = resp_dict # adds "resp_dict" to the "data" dictionary data["metadata"] = { "some_data_key_1": "some_data_value_1", "some_data_key_2": "some_data_value_2", "some_data_key_3": "some_data_value_3"} resp_str = json.dumps(data, indent=2) # converts dict into JSON string return Response(content=resp_str, status_code=response.status_code, media_type="application/json") @app.get("/") def root(): return {"hello": "world!"} @app.get("/items/{id}") def get_item(id: int): return {"Item": id} @app.get("/courses/{code}") def get_course(code: my_constr): return {"course_code": code, "course_title": "Deep Learning"} Update 2 Another solution would be to use a custom APIRoute class, as demonstrated here and here, which would allow you to apply the changes to the response body only for routes that you have specified—which would solve the issue with Swaager UI in a more easy way. Alternatively, you could still use the middleware option if you wish, but instead of adding the middleware to the main app, you could add it to a sub application—as shown in this answer and this answer—that includes again only the routes for which you need to modify the response, in order to add some additional data in the body. | 5 | 5 |
71,102,876 | 2022-2-13 | https://stackoverflow.com/questions/71102876/in-ipython-how-do-i-accept-and-use-an-autocomplete-suggestion | I'm using Python 3.8.9 with IPython 8.0.1 on macOS. When I type anything whatsoever, it displays a predicted suggestion based on past commands. Cool. However, how do I actually accept that suggestion? I tried the obvious: tab, which does not accept the suggestion, but rather opens up a menu with different suggestions, while the original suggestion is still there (see screenshot). I also tried space, and return, but both of those act as if the suggestion was never made. How the heck do I actually use the ipython autosuggestion? Or is tab supposed to work and something is wrong with my ipython build or something? | CTRL-E, CTRL-F, or Right Arrow Key https://ipython.readthedocs.io/en/8.13.2/config/shortcuts/index.html Alternatively, the End key, as suggested by Richard Berg (below). | 71 | 76 |
71,086,453 | 2022-2-11 | https://stackoverflow.com/questions/71086453/how-to-combine-the-elements-of-two-lists-using-zip-function-in-python | I have two different lists and I would like to know how I can get each element of one list print with each element of another list. I know I could use two for loops (each for one of the lists), however I want to use the zip() function because there's more that I will be doing in this for loop for which I will require parallel iteration. I therefore attempted the following but the output is as shown below. lasts = ['x', 'y', 'z'] firsts = ['a', 'b', 'c'] for last, first in zip(lasts, firsts): print (last, first, "\n") Output: x a y b z c Expected Output: x a x b x c y a y b y c z a z b z c | I believe the function you are looking for is itertools.product: lasts = ['x', 'y', 'z'] firsts = ['a', 'b', 'c'] from itertools import product for last, first in product(lasts, firsts): print (last, first) x a x b x c y a y b y c z a z b z c Another alternative, that also produces an iterator is to use a nested comprehension: iPairs=( (l,f) for l in lasts for f in firsts) for last, first in iPairs: print (last, first) If you must use zip(), both inputs must be extended to the total length (product of lengths) with the first one repeating items and the second one repeating the list: iPairs = zip( (l for l in lasts for _ in firsts), (f for _ in lasts for f in firsts) ) | 6 | 7 |
71,132,469 | 2022-2-15 | https://stackoverflow.com/questions/71132469/appending-row-to-dataframe-with-concat | I have defined an empty data frame with df = pd.DataFrame(columns=['Name', 'Weight', 'Sample']) and want to append rows in a for loop like this:

for key in my_dict:
    ...
    row = {'Name': key, 'Weight': wg, 'Sample': sm}
    df = pd.concat(row, axis=1, ignore_index=True)

But I get this error: cannot concatenate object of type '<class 'str'>'; only Series and DataFrame objs are valid. If I use df = df.append(row, ignore_index=True), it works, but it seems that append is deprecated. So, I want to use concat(). How can I fix that? | You can transform your row dict into a pandas DataFrame first:

import pandas as pd

df = pd.DataFrame(columns=['Name', 'Weight', 'Sample'])

for key in my_dict:
    ...
    # transform the row dict into a one-row DataFrame
    new_df = pd.DataFrame([row])
    df = pd.concat([df, new_df], axis=0, ignore_index=True) | 24 | 36 |
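If the loop runs many times, it is usually cheaper to collect the rows in a list and build the DataFrame once at the end, since concat inside the loop copies the growing frame on every iteration (a sketch reusing the question's loop variables my_dict, wg and sm):

```python
import pandas as pd

rows = []
for key in my_dict:
    ...
    rows.append({'Name': key, 'Weight': wg, 'Sample': sm})

# one allocation at the end instead of one per iteration
df = pd.DataFrame(rows, columns=['Name', 'Weight', 'Sample'])
```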
71,092,850 | 2022-2-12 | https://stackoverflow.com/questions/71092850/how-to-install-uwsgi-on-windows | I'm trying to install uwsgi for a django project inside a virtual environment; I'm using windows 10. I did pip install uwsgi & I gotCommand "python setup.py egg_info". So to resolve the error I followed this SO answer As per the answer I installed cygwin and gcc compiler for windows following this. Also changed the os.uname() to platform.uname() And now when I run `python setup.py install``. I get this error C:\Users\Suhail\AppData\Local\Programs\Python\Python39\lib\site-packages\setuptools\_distutils\dist.py:275: UserWarning: Unknown distribution option: 'descriptions' warnings.warn(msg) running install C:\Users\Suhail\AppData\Local\Programs\Python\Python39\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. warnings.warn( using profile: buildconf/default.ini detected include path: ['/usr/include', '/usr/local/include'] Patching "bin_name" to properly install_scripts dir detected CPU cores: 4 configured CFLAGS: -O2 -I. -Wall -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -DUWSGI_IPCSEM_ATEXIT -DUWSGI_EVENT_TIMER_USE_NONE -DUWSGI_EVENT_FILEMONITOR_USE_NONE -DUWSGI_VERSION="\"2.0.19.1\"" -DUWSGI_VERSION_BASE="2" -DUWSGI_VERSION_MAJOR="0" -DUWSGI_VERSION_MINOR="19" -DUWSGI_VERSION_REVISION="1" -DUWSGI_VERSION_CUSTOM="\"\"" -DUWSGI_YAML -DUWSGI_PLUGIN_DIR="\".\"" -DUWSGI_DECLARE_EMBEDDED_PLUGINS="UDEP(python);UDEP(gevent);UDEP(ping);UDEP(cache);UDEP(nagios);UDEP(rrdtool);UDEP(carbon);UDEP(rpc);UDEP(corerouter);UDEP(fastrouter);UDEP(http);UDEP(signal);UDEP(syslog);UDEP(rsyslog);UDEP(logsocket);UDEP(router_uwsgi);UDEP(router_redirect);UDEP(router_basicauth);UDEP(zergpool);UDEP(redislog);UDEP(mongodblog);UDEP(router_rewrite);UDEP(router_http);UDEP(logfile);UDEP(router_cache);UDEP(rawrouter);UDEP(router_static);UDEP(sslrouter);UDEP(spooler);UDEP(cheaper_busyness);UDEP(symcall);UDEP(transformation_tofile);UDEP(transformation_gzip);UDEP(transformation_chunked);UDEP(transformation_offload);UDEP(router_memcached);UDEP(router_redis);UDEP(router_hash);UDEP(router_expires);UDEP(router_metrics);UDEP(transformation_template);UDEP(stats_pusher_socket);" -DUWSGI_LOAD_EMBEDDED_PLUGINS="ULEP(python);ULEP(gevent);ULEP(ping);ULEP(cache);ULEP(nagios);ULEP(rrdtool);ULEP(carbon);ULEP(rpc);ULEP(corerouter);ULEP(fastrouter);ULEP(http);ULEP(signal);ULEP(syslog);ULEP(rsyslog);ULEP(logsocket);ULEP(router_uwsgi);ULEP(router_redirect);ULEP(router_basicauth);ULEP(zergpool);ULEP(redislog);ULEP(mongodblog);ULEP(router_rewrite);ULEP(router_http);ULEP(logfile);ULEP(router_cache);ULEP(rawrouter);ULEP(router_static);ULEP(sslrouter);ULEP(spooler);ULEP(cheaper_busyness);ULEP(symcall);ULEP(transformation_tofile);ULEP(transformation_gzip);ULEP(transformation_chunked);ULEP(transformation_offload);ULEP(router_memcached);ULEP(router_redis);ULEP(router_hash);ULEP(router_expires);ULEP(router_metrics);ULEP(transformation_template);ULEP(stats_pusher_socket);" *** uWSGI compiling server core *** [thread 1][gcc] core/utils.o [thread 2][gcc] core/protocol.o [thread 3][gcc] core/socket.o [thread 0][gcc] core/logging.o In file included from In file included from core/logging.c:2core/socket.c:1 : : ./uwsgi.h:172:10:: fatal error: sys/socket.h: No such file or directory #include sys/socket.h: No such file or directory #include <sys/socket.h>sys/socket.h: No such file or 
directory #include ^~~~~~~~~~~~~~^~~~~~~~~~~~~~ cc^~~~~~~~~~~~~~o mm ppiillaattiioonn cttoeemrrpmmiiilnnaaatttieeoddn.. In file included from t rcore/utils.c:1mi: n uwsgi-2.0.19.1> t./uwsgi.h:172:10:ed . fatal error: sys/socket.h: No such file or directory #include <sys/socket.h> ^~~~~~~~~~~~~~ compilation terminated. | Step 1: Download this stable release of uWSGI Step 2: Extract the tar file inside the site-packages folder of the virtual environment. For example the extracted path to uwsgi should be: \my_env\lib\site-packages\uwsgi-2.0.19.1 Step 3: Open uwsgi-2.0.19.1\uwsgiconfig.py And do the following edits: import platform ... Then wherever you encounter ... os.uname()[x-index] ... modify it with ... platform.uname()[x-index] ... Step 4: Finally, Open powershell and cd into \my_env\lib\site-packages\uwsgi-2.0.19.1 and run: python setup.py install Got an error? Check out this Step 5: Run pip install uwsgi you'll get Requirement already satisfied:. Try pip freeze you'll see uwsgi is listed. Which means you have now successfully installed uwsgi. Congrats! Reference | 7 | 6 |
71,125,094 | 2022-2-15 | https://stackoverflow.com/questions/71125094/debug-a-python-c-c-pybind11-extension-in-vscode-linux | Problem Statement I want to run and debug my own C++ extensions for python in "hybrid mode" in VSCode. Since defining your own python wrappers can be quite tedious, I want to use pybind11 to link C++ and python. I love the debugging tools of vscode, so I would like to debug both my python scripts as well as the C++ functions in vscode. Fortunately, debugging python and C++ files simultaneously is possible by first starting the python debugger and then attach a gdb debugger to that process as described in detail in nadiah's blog post (Windows users, please note this question). This works fine for me. Unfortunately, they define the C++ -- python bindings manually. I would like to use pybind11 instead. I created a simplified example that is aligned with nadiah's example using pybind11. Debugging the python file works but the gdb debugger doesn't stop in the .cpp file. According to this github question it should be theoretically possible but there are no details on how to achieve this. Steps to reproduce Here I try to follow nadiahs example as closely as possible but include pybind11 wrappers. Setting up the package Create a virtual environment (also works with anaconda, as described below) virtualenv --python=python3.8 myadd cd myadd/ . bin/activate Create file myadd.cpp #include <pybind11/pybind11.h> float method_myadd(float arg1, float arg2) { float return_val = arg1 + arg2; return return_val; } PYBIND11_MODULE(myadd, handle) { handle.doc() = "This is documentation"; handle.def("myadd", &method_myadd); } , myscript.py import myadd print("going to ADD SOME NUMBERS") x = myadd.myadd(5,6) print(x) and setup.py from glob import glob from distutils.core import setup, Extension from pybind11.setup_helpers import Pybind11Extension def main(): setup(name="myadd", version="1.0.0", description="Python interface for the myadd C library function", author="Nadiah", author_email="[email protected]", ext_modules=[Pybind11Extension("myadd",["myadd.cpp"])], ) if __name__ == "__main__": main() Clone the pybind11 repo git clone [email protected]:pybind/pybind11.git and install the python package pip install pybind11 Run the setup script python3 setup.py install Now, we can already run the python script python myscript.py Setting up vscode Open vscode code . Select the python interpreter with Ctrl+Shift+p -> Select python interpreter -> ./bin/python, now in the lower bar, you should see virtualenv myadd. Create the launch.json file by clicking the debug symbol and 'Create new launch configuration'. This is my launch.json (This might be the problem) { // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. 
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 "version": "0.2.0", "configurations": [ { "name": "Python", "type": "python", "request": "launch", "program": "myscript.py", "console": "integratedTerminal" }, { "name": "(gdb) Attach", "type": "cppdbg", "request": "attach", "program": "${workspaceFolder}/bin/python", /* My virtual env */ "processId": "${command:pickProcess}", "MIMode": "gdb", "setupCommands": [ { "description": "Enable pretty-printing for gdb", "text": "-enable-pretty-printing", "ignoreFailures": true } ], "additionalSOLibSearchPath": "${workspaceFolder}/build/lib.linux-x86_64-3.8;${workspaceFolder}/lib;${workspaceFolder}/lib/python3.8/site-packages/myadd-1.0.0-py3.8-linux-x86_64.egg/" } ] } Note that I added the "additionalSOLibSearchPath" option in accordance to the github question but it did not change anything. Debugging In vscode, add breakpoints in myscript.py in line 5 and 7, and in myadd.cpp in line 5. Now, first start the python debugger and let it stop on the breakpoint in line 5. Then go to a terminal and get the correct process id of the running python script. ps aux | grep python The second to last process is the correct one in my case. E.g. username **65715** 3.0 0.0 485496 29812 pts/3 Sl+ 10:37 0:00 /home/username/myadd/bin/python /home/username/.vscode/extensions/ms-python.python-2022.0.1814523869/pythonFiles/lib/python/debugpy --connect 127.0.0.1:38473 --configure-qt none --adapter-access-token ... myscript.py In this example, 65715 would be the correct process id. Now, in vscode start the (gdb) Attach debugger and type in the process id in the search bar. Hit enter, now you need to type y in the console to allow the attaching and type in your sudo password. If you are following nadiah's example, you can now press continue on the python debug bar and the script will stop on the C++ breakpoint. For this pybind11 example, the script does not stop on the C++ breakpoint. Project Structure Your project structure now should look like this myadd | bin/ | build/ | dist/ | lib/ | myadd.cpp | myadd.egg-info/ | myscript.py | pybind11/ | setup.py Things I also tried As stated in the github post, one has to ensure that the debug flag is set. Therefore, I added a setup.cfg file [build_ext] debug=1 [aliases] debug_install = build_ext --debug install And ran python setup.py debug_install but this did not help as well. Using anaconda instead of virtualenv Using conda instead of virtualenv is quite easy. Just create your env as usual and then type in which python to get the path to the python executable. Replace the "program" in the (gdb) Attach debug configuration of your launch.json with this path. Software versions I run Ubuntu 20.04 Vscode 1.64.2 x64 GNU gdb 9.2 gcc 9.3.0 python 3.8 as defined in the virtualenv. | TLDR I think the C++ code was not build with debug information. Adding the keyword argument extra_compile_args=["-g"] to the Pybind11Extension in the setup.py may be enough to solve it. Regardless read on for my solution proposal, that worked for me. Steps I could make this work by using the Python C++ Debugger extension by BeniBenj, by setting the C++ flag -g and by using the --no-clean pip flag. For the sake of completeness, I am going to enclose here my minimal working project. 
Create the bindings in the add.cpp file: #include <pybind11/pybind11.h> float cpp_add(float arg1, float arg2) { float return_val = arg1 + arg2; return return_val; } PYBIND11_MODULE(my_add, handle) { handle.doc() = "This is documentation"; handle.def("cpp_add", &cpp_add); } Create the testing python script: import my_add if __name__ == "__main__": x = 5 y = 6 print(f"Adding {x} and {y} together.") z = my_add.cpp_add(x, y) print(f"Result is {z}") Create the setup.py file: import os from distutils.core import setup from pybind11.setup_helpers import Pybind11Extension setup(name="myadd", version="1.0.0", ext_modules=[Pybind11Extension("my_add", ["add.cpp"], extra_compile_args=["-g"])], ) The important thing about the setup.py file is that it builds the C++ code with debug information. I have the suspicion that this is what was missing for you. The package can be installed with: pip install --no-clean ./ The --no-clean is important. It prevents the sources that your debugger will try to open from being deleted. Now is the time for launching both the Python and the C++ debuggers. I am using the Python C++ Debugger extension by BeniBenj as recommended by the creator in a Github issue. After installing it, just create a debug config by clicking on the "create a launch.json file", selecting "Python C++ Debugger" and than choosing from the options. (For me both the Default and the Custom: GDB worked.) Place the breakpoints in both the python and the C++ code. (In the python code, I recommend placing them on the line with the binded code and the one after it.) Select your script and run the "Python C++ Debugger" configuration. The code should pause on entry, and on the second terminal that just opened this question should appear: Superuser access is required to attach to a process. Attaching as superuser can potentially harm your computer. Do you want to continue? [y/N] Answer y. Start debugging. Upon reaching your binded code in python, you may have to click manually in the call stack (in the debug panel on the left) to actually switch into the C++ code. Extra info Building the C++ code with debug information. I could not find online how to set the compiler flags within the setup.py. Therefore I looked into the source code of the Pybind11Extension. There I saw this line: env_cppflags = os.environ.get("CPPFLAGS", "") This suggested that I can set the flags with environment variables, and indeed I could. I added this line in the setup.py before the setup() function: os.environ["CPPFLAGS"] = "-g" However, as I was typing this answer I also saw this comment in the pybind11 source code: # flags are prepended, so that they can be further overridden, e.g. by # ``extra_compile_args=["-g"]``. I tested it and it works too. However, I could not find it in the documentation. That is the main reason I am including these steps here. Keeping the sources For me. when the debugger pauses on the breakpoint within the C++ code, it wants to open the source file within tmp/pip-req-build-o1w6len6/add.cpp. If the --no-clean option is not kept in the pip installation, then this file will not be found. (I had to create it and copy the source code into it.) launch.json Here is the Custom: GDB configuration, where the "miDebuggerPath" parameter may be deleted: { // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. 
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 "version": "0.2.0", "configurations": [ { "name": "Python C++ Debugger", "type": "pythoncpp", "request": "launch", "pythonLaunchName": "Python: Current File", "cppAttachName": "(gdb) Attach" }, { "name": "(gdb) Attach", "type": "cppdbg", "request": "attach", "program": "/home/dudly01/anaconda3/envs/trash_env/bin/python", "processId": "", "MIMode": "gdb", "miDebuggerPath": "/path/to/gdb or remove this attribute for the path to be found automatically", "setupCommands": [ { "description": "Enable pretty-printing for gdb", "text": "-enable-pretty-printing", "ignoreFailures": true } ] }, { "name": "Python: Current File", "type": "python", "request": "launch", "program": "${file}", "console": "integratedTerminal" } ] } Here is the default configuration: { // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 "version": "0.2.0", "configurations": [ { "name": "Python C++ Debugger", "type": "pythoncpp", "request": "launch", "pythonConfig": "default", "cppConfig": "default (gdb) Attach" } ] } | 9 | 7 |
71,109,838 | 2022-2-14 | https://stackoverflow.com/questions/71109838/numpy-typing-with-specific-shape-and-datatype | Currently i'm trying to work more with numpy typing to make my code clearer however i've somehow reached a limit that i can't currently override. Is it possible to specify a specific shape and also the corresponding data type? Example: Shape=(4,) datatype= np.int32 My attempts so far look like the following (but all just threw errors): First attempt: import numpy as np def foo(x: np.ndarray[(4,), np.dtype[np.int32]]): ... result -> 'numpy._DTypeMeta' object is not subscriptable Second attempt: import numpy as np import numpy.typing as npt def foo(x: npt.NDArray[(4,), np.int32]): ... result -> Too many arguments for numpy.ndarray[typing.Any, numpy.dtype[+ScalarType]] Also, unfortunately, I can't find any information about it in the documentation or I only get errors when I implement it the way it is documented. | Currently, numpy.typing.NDArray only accepts a dtype, like so: numpy.typing.NDArray[numpy.int32]. You have some options though. Use typing.Annotated typing.Annotated allows you to create an alias for a type and to bundle some extra information with it. In some my_types.py you would write all variations of shapes you want to hint: from typing import Annotated, Literal, TypeVar import numpy as np import numpy.typing as npt DType = TypeVar("DType", bound=np.generic) Array4 = Annotated[npt.NDArray[DType], Literal[4]] Array3x3 = Annotated[npt.NDArray[DType], Literal[3, 3]] ArrayNxNx3 = Annotated[npt.NDArray[DType], Literal["N", "N", 3]] And then in foo.py, you can supply a numpy dtype and use them as typehint: import numpy as np from my_types import Array4 def foo(arr: Array4[np.int32]): assert arr.shape == (4,) MyPy will recognize arr to be an np.ndarray and will check it as such. The shape checking can be done at runtime only, like in this example with an assert. If you don't like the assertion, you can use your creativity to define a function to do the checking for you. def assert_match(arr, array_type): hinted_shape = array_type.__metadata__[0].__args__ hinted_dtype_type = array_type.__args__[0].__args__[1] hinted_dtype = hinted_dtype_type.__args__[0] assert np.issubdtype(arr.dtype, hinted_dtype), "DType does not match" assert arr.shape == hinted_shape, "Shape does not match" assert_match(some_array, Array4[np.int32]) Use nptyping Another option would be to use 3th party lib nptyping (yes, I am the author). You would drop my_types.py as it would be of no use anymore. Your foo.py would become something like: from nptyping import NDArray, Shape, Int32 def foo(arr: NDArray[Shape["4"], Int32]): assert isinstance(arr, NDArray[Shape["4"], Int32]) Use beartype + typing.Annotated There is also another 3th party lib called beartype that you could use. It can take a variant of the typing.Annotated approach and will do the runtime checking for you. You would reinstate your my_types.py with content similar to: from beartype import beartype from beartype.vale import Is from typing import Annotated import numpy as np Int32Array4 = Annotated[np.ndarray, Is[lambda array: array.shape == (4,) and np.issubdtype(array.dtype, np.int32)]] Int32Array3x3 = Annotated[np.ndarray, Is[lambda array: array.shape == (3,3) and np.issubdtype(array.dtype, np.int32)]] And your foo.py would become: import numpy as np from beartype import beartype from my_types import Int32Array4 @beartype def foo(arr: Int32Array4): ... # Runtime type checked by beartype. 
Use beartype + nptyping You could also stack up both libraries. Your my_types.py can be removed again and your foo.py would become something like: from nptyping import NDArray, Shape, Int32 from beartype import beartype @beartype def foo(arr: NDArray[Shape["4"], Int32]): ... # Runtime type checked by beartype. | 46 | 56 |
71,068,392 | 2022-2-10 | https://stackoverflow.com/questions/71068392/group-and-create-three-new-columns-by-condition-low-hit-high | I have a large dataset (~5 Mio rows) with results from a Machine Learning training. Now I want to check to see if the results hit the "target range" or not. Lets say this range contains all values between -0.25 and +0.25. If it's inside this range, it's a Hit, if it's below Low and on the other side High. I now would create this three columns Hit, Low, High and calculate for each row which condition applies and put a 1 into this col, the other two would become 0. After that I would group the values and sum them up. But I suspect there must be a better and faster way, such as calculate it directly while grouping. Data import pandas as pd df = pd.DataFrame({"Type":["RF", "RF", "RF", "MLP", "MLP", "MLP"], "Value":[-1.5,-0.1,1.7,0.2,-0.7,-0.6]}) +----+--------+---------+ | | Type | Value | |----+--------+---------| | 0 | RF | -1.5 | <- Low | 1 | RF | -0.1 | <- Hit | 2 | RF | 1.7 | <- High | 3 | MLP | 0.2 | <- Hit | 4 | MLP | -0.7 | <- Low | 5 | MLP | -0.6 | <- Low +----+--------+---------+ Expected Output pd.DataFrame({"Type":["RF", "MLP"], "Low":[1,2], "Hit":[1,1], "High":[1,0]}) +----+--------+-------+-------+--------+ | | Type | Low | Hit | High | |----+--------+-------+-------+--------| | 0 | RF | 1 | 1 | 1 | | 1 | MLP | 2 | 1 | 0 | +----+--------+-------+-------+--------+ | You could use cut to define the groups and pivot_table to reshape: (df.assign(group=pd.cut(df['Value'], [float('-inf'), -0.25, 0.25, float('inf')], labels=['Low', 'Hit', 'High'])) .pivot_table(index='Type', columns='group', values='Value', aggfunc='count') .reset_index() .rename_axis(None, axis=1) ) Or crosstab: (pd.crosstab(df['Type'], pd.cut(df['Value'], [float('-inf'), -0.25, 0.25, float('inf')], labels=['Low', 'Hit', 'High']) ) .reset_index().rename_axis(None, axis=1) ) output: Type Low Hit High 0 MLP 2 1 0 1 RF 1 1 1 | 9 | 11 |
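If you want the counts to come straight out of the grouping step itself, as the question asks, a groupby-based sketch along the same lines is shown below; the bin edges and labels mirror the answer above, and treating the column order Low/Hit/High as fixed is my assumption.

import pandas as pd

df = pd.DataFrame({"Type": ["RF", "RF", "RF", "MLP", "MLP", "MLP"],
                   "Value": [-1.5, -0.1, 1.7, 0.2, -0.7, -0.6]})

# Bin each Value into Low / Hit / High, then count per Type while grouping
bins = pd.cut(df["Value"], [float("-inf"), -0.25, 0.25, float("inf")],
              labels=["Low", "Hit", "High"])
out = (df.groupby(["Type", bins])
         .size()                  # rows per (Type, bin) pair
         .unstack(fill_value=0)   # bins become the Low/Hit/High columns
         .reset_index()
         .rename_axis(None, axis=1))
print(out)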
71,048,280 | 2022-2-9 | https://stackoverflow.com/questions/71048280/upgrade-python-to-3-10-in-windows-do-i-have-to-reinstall-all-site-packages-manu | I have Python 3.9 with site-packages installed on Windows 10 64-bit. I would like to install Python 3.10.2 on Windows 10 64-bit and find a way to automatically install in Python 3.10.2 the same packages I currently have installed in Python 3.9. I am also interested in the answer to this question for Windows 11 64-bit. | I upgraded to Python 3.10.2 on Windows 10 64-bit. To install the packages properly, install the appropriate version of the Microsoft Visual C++ compiler if necessary; details are at https://wiki.python.org/moin/WindowsCompilers . When upgrading from 3.9 to 3.10.2 it turned out I had to, because errors appeared while installing some packages. Before installing Python 3.10.2, run the following command in the Windows command prompt: pip freeze > reqs.txt This writes the names of all installed packages, in a format pip understands, to the reqs.txt file. If you run the command prompt with administrator privileges, reqs.txt is saved in the directory C:\WINDOWS\system32. Then, after installing Python 3.10.2 and adding it to PATH, run: pip install -r reqs.txt This installs the packages in the same versions as under Python 3.9. If a problem occurs, e.g. an installation error for lxml, you can remove that package's entry from reqs.txt and install it manually afterwards. Editing reqs.txt requires administrator privileges; the easiest way is to run the command prompt as administrator, type reqs.txt and press Enter to edit it. I later decided to update the problematic packages to their latest versions, because I suspected the older versions were not compatible with Python 3.10.2. This means that when upgrading to Python 3.10.2 it is worth asking whether it is better to upgrade all packages. To do this, you can list the outdated packages with: pip list --outdated After the list is printed in the command prompt, you can upgrade the outdated packages with: pip install --upgrade <package-name> This can be automated by editing the reqs.txt file and changing the == marker to >, which speeds up the upgrade. The > marker should only be used for the outdated packages, otherwise you will get the error: "Could not find a version that satisfies the requirement ... ". Supplement for virtual environments: enter the virtual environment directory in the Windows command prompt, e.g. D:\python_projects\data_visualization\env\Scripts, and type activate to activate it. Then create the reqs.txt file as described above and copy it to a temporary directory. After that, delete the virtual environment, e.g. by deleting the contents of the env directory in Windows Explorer. Then, using the Windows Python version of your choice, create a new virtual environment in the env directory (see: https://docs.python.org/3/library/venv.html). Copy the reqs.txt file into the newly created D:\python_projects\data_visualization\env\Scripts directory and install the site-packages from reqs.txt as described above. | 7 | 10 |
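The "== to >" edit of reqs.txt described in the answer above can be scripted; here is a minimal sketch of that idea. It assumes a reqs.txt produced by pip freeze, and the set of outdated package names is a placeholder you would fill in yourself from pip list --outdated.

# Rewrite "package==x.y.z" pins to "package>x.y.z" for selected packages,
# so that "pip install -r reqs.txt" is allowed to pull newer releases for them.
outdated = {"lxml", "numpy"}  # placeholder names taken from `pip list --outdated`

with open("reqs.txt", encoding="utf-8") as f:
    lines = f.read().splitlines()

rewritten = []
for line in lines:
    name = line.split("==")[0].strip().lower()
    if "==" in line and name in {p.lower() for p in outdated}:
        rewritten.append(line.replace("==", ">", 1))
    else:
        rewritten.append(line)

with open("reqs.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(rewritten) + "\n")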
71,118,601 | 2022-2-14 | https://stackoverflow.com/questions/71118601/saving-a-plotly-image-not-working-with-kaleido-even-though-it-is-installed | I am trying to save a simple plotly figure to a directory. I understand it needs kaleido (I have version '0.2.1') and also at least plotly '5.3.1', both of which are installed. However, trying to save the image I get the following error: fig.write_image(path) ValueError: Image export using the "kaleido" engine requires the kaleido package, which can be installed using pip: $ pip install -U kaleido Why is this occurring when all the required packages are there? | I've got exactly the same issue on Google Colab. Even after installing kaleido, the same error. I accidentally found a fix: it seems necessary to import kaleido first: !pip install kaleido import kaleido #required kaleido.__version__ #0.2.1 import plotly plotly.__version__ #5.5.0 #now this works: import plotly.graph_objects as go fig = go.Figure() fig.write_image('aaa.png') | 12 | 10 |
71,049,155 | 2022-2-9 | https://stackoverflow.com/questions/71049155/generate-graph-from-a-list-of-connected-components | Setup Let's assume the following undirected graph: import networkx as nx G = nx.from_edgelist([(0, 3), (0, 1), (2, 5), (0, 3)]) G.add_nodes_from(range(7)) or even adding the (1, 3) edge (it doesn't matter here): The connected components are: list(nx.connected_components(G)) # [{0, 1, 3}, {2, 5}, {4}, {6}] Question Is it possible to generate the graph G from the list of connected components directly with networkx? Or using a simple method? The only solution I found so far is to generate the successive edges or all combinations of nodes per group and feed it to nx.from_edgelist, then to add the single nodes with add_nodes_from: from itertools import pairwise, chain l = [{0, 1, 3}, {2, 5}, {4}, {6}] G = nx.from_edgelist(chain.from_iterable(pairwise(e) for e in l)) G.add_nodes_from(set.union(*l)) or for all edges: from itertools import combinations, chain l = [{0, 1, 3}, {2, 5}, {4}, {6}] G = nx.from_edgelist(chain.from_iterable(combinations(e, 2) for e in l)) G.add_nodes_from(set.union(*l)) | An alternative to itertools.pairwise is networkx.path_graph. An alternative to itertools.combinations is networkx.complete_graph. These two networkx functions return a new graph, not a list of edges, so you can combine them with networkx.compose_all. Note also union_all and disjoint_union_all as alternatives to compose_all. import networkx as nx l = [{0, 1, 3}, {2, 5}, {4}, {6}] G = nx.compose_all(map(nx.path_graph, l)) H = nx.compose_all(map(nx.complete_graph, l)) print(G.nodes, G.edges) # [0, 1, 3, 2, 5, 4, 6] [(0, 1), (1, 3), (2, 5)] print(H.nodes, H.edges) # [0, 1, 3, 2, 5, 4, 6] [(0, 1), (0, 3), (1, 3), (2, 5)] I haven't actually run benchmarks, but I suspect creating several graphs and composing them might be slower than creating lists of edges and chaining them to create only one graph. | 6 | 11 |
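To put a number on the closing remark about relative speed, a small timeit sketch comparing the two approaches could look like the following; the component list is a made-up example, and itertools.pairwise requires Python 3.10+.

import timeit
from itertools import chain, pairwise

import networkx as nx

l = [set(range(i, i + 5)) for i in range(0, 5000, 5)]  # made-up disjoint components

def via_compose():
    # one small graph per component, merged at the end
    return nx.compose_all(map(nx.path_graph, l))

def via_edgelist():
    # one flat edge list, single graph construction
    G = nx.from_edgelist(chain.from_iterable(pairwise(c) for c in l))
    G.add_nodes_from(set.union(*l))
    return G

print("compose_all:", timeit.timeit(via_compose, number=10))
print("edge list:  ", timeit.timeit(via_edgelist, number=10))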
71,046,200 | 2022-2-9 | https://stackoverflow.com/questions/71046200/how-do-you-export-a-pydantic-model-to-yaml-using-anchors | I would like to export a Pydantic model to YAML, but avoid repeating values and using references (anchor+aliases) instead. Here's an example: from typing import List from ruamel.yaml import YAML # type: ignore import yaml from pydantic import BaseModel class Author(BaseModel): id: str name: str age: int class Book(BaseModel): id: str title: str author: Author class Library(BaseModel): authors: List[Author] books: List[Book] john_smith = Author(id="auth1", name="John Smith", age=42) books = [ Book(id="book1", title="Some title", author=john_smith), Book(id="book2", title="Another one", author=john_smith), ] library = Library(authors=[john_smith], books=books) print(yaml.dump(library.dict())) I get: authors: - age: 42 id: auth1 name: John Smith books: - author: age: 42 id: auth1 name: John Smith id: book1 title: Some title - author: age: 42 id: auth1 name: John Smith id: book2 title: Another one You can see that all author fields are repeated in each book. I would like something that uses anchors instead of copying all the information, like this: authors: - &auth1 age: 42 id: auth1 name: John Smith books: - author: *auth1 id: book1 title: Some title - author: *auth1 id: book2 title: Another one How can I achieve this? | When you traverse a nested Python data structure in order to convert it, you have to deal with the possibility of self-reference, otherwise your code will get in an endless loop if the data is self-referential. The way ruamel.yaml (and the standard library json.dump() ) deal with that is keeping a list of id()s of the collection objects (everything you want to recurse into, so not primitives like int, float, str) and if such an id() is already in the list represent, the first occurrence of that collection object as an anchor and the other occurrences as an alias, so you don't have to recurse again into the object ( json.dump() tells you it cannot dump such a structure, but at least it doesn't hang). The same mechanism (keeping track of id()s) is used in ruamel.yaml to not repeat the same collection when it is referenced in multiple other collections. pydantic doesn't seem to do that, hence the "written out" structure you get when calling library.dict(). I think that is the reason why in the documentation you are told to use a string with a class name when dumping pydanctic to JSON with self referential data To get around this limitation of pydantic you could do two things: write an alternative to .dict() that returns a data structure that dumps to the required YAML document format, which means it needs to return a structure with the same data (dict) in multiple places. make sure you can dump your classes directly using ruamel.yaml, so you don't have to convert them. But for both of these to work it is required that the author that you add to book1 and book2 is the same after adding, and it is not. You cannot safely assume that if two dicts have the same key/value pairs they are the same object so any comparison will need to be done using is and not using ==. After you pass in john_smith to the two calls of Book(), you don't have an attribute .author that points to the same data (i.e. 
has the same id()): from pydantic import BaseModel from typing import List class Author(BaseModel): id: str name: str age: int class Book(BaseModel): id: str title: str author: Author class Library(BaseModel): authors: List[Author] books: List[Book] john_smith = Author(id="auth1", name="John Smith", age=42) books = [ Book(id="book1", title="Some title", author=john_smith), Book(id="book2", title="Another one", author=john_smith), ] library = Library(authors=[john_smith], books=books) print('same author?', john_smith is library.books[0].author) print('same author?', library.books[0].author is library.books[1].author) which gives: same author? False same author? False What you can do is force the authors to be identical, and then use something smarter than pydantic's .dict(): import sys import ruamel.yaml def gen_data(d, id_map=None): if id_map is None: id_map = {} d_id = id(d) if d_id in id_map: print('already found', id_map) return id_map[d_id] if isinstance(d, BaseModel): ret_val = {} for k, v in d: if k == 'author': print('auth', v, id(v)) ret_val[k] = gen_data(v, id_map) elif isinstance(d, list): ret_val = [] for elem in d: ret_val.append(gen_data(elem, id_map)) else: return d # should be primitive id_map[d_id] = ret_val return ret_val # force authors to be the same library.books[0].author = library.books[1].author = library.authors[0] assert library.books[0].author is library.books[1].author # alternative for .dict() data = gen_data(library) yaml = ruamel.yaml.YAML() yaml.dump(data, sys.stdout) and that results in what you wanted: auth id='auth1' name='John Smith' age=42 140494566559168 already found {140494566559168: {'id': 'auth1', 'name': 'John Smith', 'age': 42}, 140494576359168: [{'id': 'auth1', 'name': 'John Smith', 'age': 42}]} auth id='auth1' name='John Smith' age=42 140494566559168 already found {140494566559168: {'id': 'auth1', 'name': 'John Smith', 'age': 42}, 140494576359168: [{'id': 'auth1', 'name': 'John Smith', 'age': 42}], 140494566559216: {'id': 'book1', 'title': 'Some title', 'author': {'id': 'auth1', 'name': 'John Smith', 'age': 42}}} authors: - &id001 id: auth1 name: John Smith age: 42 books: - id: book1 title: Some title author: *id001 - id: book2 title: Another one author: *id001 Please note that you shouldn't import yaml, but instead intantiate a ruamel.yaml.YAML() instance. If necessary, in ruamel.yaml it is possible to control the name of the anchor/alias to something else than the id001. | 6 | 4 |
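As a quick check that anchors really give shared data on the consuming side, the dumped document can be loaded back and the two author references compared by identity; the small document literal below just mimics the shape of the desired output from the question.

import ruamel.yaml

doc = """\
authors:
- &id001
  id: auth1
  name: John Smith
  age: 42
books:
- id: book1
  author: *id001
- id: book2
  author: *id001
"""

yaml = ruamel.yaml.YAML()
loaded = yaml.load(doc)

# An alias resolves to the very same object as its anchor when loading,
# so identity (not just equality) holds between the two author references.
print(loaded["books"][0]["author"] is loaded["books"][1]["author"])  # True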
71,090,408 | 2022-2-12 | https://stackoverflow.com/questions/71090408/how-to-use-release-branch-to-increment-version-using-setuptools-scm | I am looking at https://github.com/pypa/setuptools_scm and I read this part https://github.com/pypa/setuptools_scm#version-number-construction and i quote Semantic versioning for projects with release branches. The same as guess-next-dev (incrementing the pre-release or micro segment) if on a release branch: a branch whose name (ignoring namespace) parses as a version that matches the most recent tag up to the minor segment. Otherwise if on a non-release branch, increments the minor segment and sets the micro segment to zero, then appends .devN. How does this work? Assuming my setup is at this commit https://github.com/simkimsia/test-setup-py/commit/5ebab14b16b63090ad0554ad8f9a77a28b047323 and the same repo, how do i increment the version by branching? What i tried on 2022-03-15 I updated some files on main branch. Then i did the following python -m pip install --upgrade "pip ~= 21.3" pip install pip-tools "pip-tools ~= 6.5" git init . git add . git commit -m '♻️ REFACTOR' git tag -a v0.0.0 -m '🔖 First tag v0.0.0' pip-compile pip-sync pip install -e . Then i push my changes including the tag So this commit is https://github.com/simkimsia/test-setup-py/commit/75838db70747fd06cc190218562d0548baa16e9d When I run python -m demopublicpythonproject the version that appears is correct The version number that appears here is based on https://github.com/simkimsia/test-setup-py/blob/75838db70747fd06cc190218562d0548baa16e9d/demopublicpythonproject/framework/__init__.py#L14 Then i branch off git checkout -b v0.0.1 Then i added a pyproject.toml and set to release-branch # pyproject.toml [build-system] requires = ["setuptools>=45", "setuptools_scm[toml]>=6.2"] version_scheme = "release-branch-semver" see https://github.com/simkimsia/test-setup-py/blob/v0.0.1/pyproject.toml Then i run python -m setuptools_scm I get /Users/kimsia/.venv/test-setup-py-py3812/bin/python: No module named setuptools_scm In any case i run the following pip-compile pip-sync pip install -e . git commit -m 'Attempt to do branch semver' then i have this commit as a result https://github.com/simkimsia/test-setup-py/commit/527885531afe37014dc66432a43a402ec0808caa When I run python -m demopublicpythonproject I get this image The version appears to follow based on the branch number but i might be wrong because the latest tag is v0.0.0 so i git checkout -b main git checkout -b v0.1.0 pip-sync pip install -e . python -m demopublicpythonproject i get a different version number 0.0.1.dev1+g45f5696 but not 0.1.0 | Branches main and v0.1.0 don't have pyproject.toml, so you need to add that file. version_scheme should be under [tool.setuptools_scm] instead of [build-system]: # pyproject.toml [build-system] requires = ["setuptools>=45", "setuptools_scm[toml]>=6.2"] [tool.setuptools_scm] version_scheme = "release-branch-semver" This will give you 0.1.0.dev1+g45f5696. You can check the version number locally: python setup.py --version Release branches git checkout -b main git checkout -b v0.1.0 If you're on a release branch (e.g. v0.1, release-0.1), then the patch version is bumped. If you're on main or a feature branch, then the minor version is bumped. Tag names and branch names should not be exactly the same. 
Release branch names usually only contain up to the minor version: git checkout -b v0.1 pip-tools + setuptools_scm Since setup.cfg only has setuptools_scm in setup_requires and not install_requires, pip-compile (without options) does not compile it into requirements.txt and pip-sync will uninstall setuptools-scm, so you have to pip install setuptools_scm after pip-sync. Alternatively, you can add setup = setuptools_scm to [options.extras_require]: # setup.cfg ... [options] setup_requires = setuptools_scm ... [options.extras_require] setup = setuptools_scm Usage: pip-compile --extra setup -o setup-requirements.txt pip-sync setup-requirements.txt References: https://github.com/jazzband/pip-tools/issues/825 https://github.com/jazzband/pip-tools#workflow-for-layered-requirements How to use setup.py and setup.cfg and pip-tools to obtain layered requirements.txt under different environment of a Django project? Release versions setuptools_scm mainly generates development and post-release versions. To generate a release version like 0.1.0, you can pass a callable into use_scm_version: # content of setup.py def myversion(): from setuptools_scm.version import SEMVER_MINOR, guess_next_simple_semver, release_branch_semver_version def my_release_branch_semver_version(version): v = release_branch_semver_version(version) if v == version.format_next_version(guess_next_simple_semver, retain=SEMVER_MINOR): return version.format_next_version(guess_next_simple_semver, fmt="{guessed}", retain=SEMVER_MINOR) return v return { 'version_scheme': my_release_branch_semver_version, 'local_scheme': 'no-local-version', } setup(use_scm_version=myversion) Reference: https://github.com/pypa/setuptools_scm#importing-in-setuppy | 6 | 4 |
71,119,083 | 2022-2-14 | https://stackoverflow.com/questions/71119083/python-interpreter-version-not-showing-in-status-bar-of-vs-code-on-mac | My Python interpreter version does not show up at the bottom of the status bar in VS Code on my Mac; it used to, but it suddenly stopped. Everything works, it just doesn't show. I tried many possible solutions, such as: right-clicking the bar to get the Python Extension checked (which I don't even have an option to check), and uninstalling all the extensions and then reinstalling them, but it didn't help, even after restarting my computer. I also can't seem to add python.pythonPath to my settings.json file, if that has something to do with it, and if it does, how can I get that? When I try to add it in my VS Code settings.json, it says 'unknown configuration'. Basically I would just like to see the Python version on the status bar. status bar on vs code | It turned out it had been moved to a new place in the status bar. Here's how to pin it on the status bar now: Hover over the {} next to the Python language chooser Click the pin icon The selection of the Python environment becomes pinned to the status bar on the right-hand side | 22 | 42 |
71,106,940 | 2022-2-14 | https://stackoverflow.com/questions/71106940/cannot-import-name-centered-from-scipy-signal-signaltools | Unable to import functions from the scipy module. It gives the error: from scipy.signal.signaltools import _centered Cannot import name '_centered' from 'scipy.signal.signaltools' scipy.__version__ 1.8.0 | If you need to use that specific version of statsmodels 0.12.x with scipy 1.8.0, I have the following hack. Basically it just re-publishes the existing (but private) _centered function as a public attribute on the module already imported in RAM. It is a workaround; if you can, simply upgrade your dependencies to the latest versions instead. Only use this if you are forced to use those specific versions. import numpy as np import scipy.signal.signaltools def _centered(arr, newsize): # Return the center newsize portion of the array. newsize = np.asarray(newsize) currsize = np.array(arr.shape) startind = (currsize - newsize) // 2 endind = startind + newsize myslice = [slice(startind[k], endind[k]) for k in range(len(endind))] return arr[tuple(myslice)] scipy.signal.signaltools._centered = _centered | 21 | 16 |
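A usage sketch of the hack above: the patch has to be applied before the statsmodels import that executes the failing from scipy.signal.signaltools import _centered line. The hasattr guard and the seasonal_decompose entry point below are my assumptions, not part of the original answer.

# Apply the shim from the answer before anything imports statsmodels,
# e.g. at the top of your entry-point script.
import numpy as np
import scipy.signal.signaltools as sigtools

if not hasattr(sigtools, "_centered"):  # only needed on scipy >= 1.8
    def _centered(arr, newsize):
        newsize = np.asarray(newsize)
        currsize = np.array(arr.shape)
        startind = (currsize - newsize) // 2
        endind = startind + newsize
        return arr[tuple(slice(s, e) for s, e in zip(startind, endind))]
    sigtools._centered = _centered

from statsmodels.tsa.seasonal import seasonal_decompose  # assumed example consumer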
71,087,502 | 2022-2-11 | https://stackoverflow.com/questions/71087502/datetime-timestamp-using-python-with-microsecond-level-accuracy | I am trying to get timestamps that are accurate down to the microsecond on Windows OS and macOS in Python 3.10+. On Windows OS, I have noticed Python's built-in time.time() (paired with datetime.fromtimestamp()) and datetime.datetime.now() seem to have a slower clock. They don't have enough resolution to differentiate microsecond-level events. The good news is time functions like time.perf_counter() and time.time_ns() do seem to use a clock that is fast enough to measure microsecond-level events. Sadly, I can't figure out how to get them into datetime objects. How can I get the output of time.perf_counter() or PEP 564's nanosecond resolution time functions into a datetime object? Note: I don't need nanosecond-level stuff, so it's okay to throw away out precision below 1-μs). Current Solution This is my current (hacky) solution, which actually works fine, but I am wondering if there's a cleaner way: import time from datetime import datetime, timedelta from typing import Final IMPORT_TIMESTAMP: Final[datetime] = datetime.now() INITIAL_PERF_COUNTER: Final[float] = time.perf_counter() def get_timestamp() -> datetime: """Get a high resolution timestamp with μs-level precision.""" dt_sec = time.perf_counter() - INITIAL_PERF_COUNTER return IMPORT_TIMESTAMP + timedelta(seconds=dt_sec) | That's almost as good as it gets, since the C module, if available, overrides all classes defined in the pure Python implementation of the datetime module with the fast C implementation, and there are no hooks. Reference: python/cpython@cf86e36 Note that: There's an intrinsic sub-microsecond error in the accuracy equal to the time it takes between obtaining the system time in datetime.now() and obtaining the performance counter time. There's a sub-microsecond performance cost to add a datetime and a timedelta. Depending on your specific use case if calling multiple times, that may or may not matter. 
A slight improvement would be: INITIAL_TIMESTAMP: Final[float] = time.time() INITIAL_TIMESTAMP_PERF_COUNTER: Final[float] = time.perf_counter() def get_timestamp_float() -> float: dt_sec = time.perf_counter() - INITIAL_TIMESTAMP_PERF_COUNTER return INITIAL_TIMESTAMP + dt_sec def get_timestamp_now() -> datetime: dt_sec = time.perf_counter() - INITIAL_TIMESTAMP_PERF_COUNTER return datetime.fromtimestamp(INITIAL_TIMESTAMP + dt_sec) Anecdotal numbers Windows: # Intrinsic error timeit.timeit('datetime.now()', setup='from datetime import datetime')/1000000 # 0.31 μs timeit.timeit('time.time()', setup='import time')/1000000 # 0.07 μs # Performance cost setup = 'from datetime import datetime, timedelta; import time' timeit.timeit('datetime.now() + timedelta(1.000001)', setup=setup)/1000000 # 0.79 μs timeit.timeit('datetime.fromtimestamp(time.time() + 1.000001)', setup=setup)/1000000 # 0.44 μs # Resolution min get_timestamp_float() delta: 239 ns Windows and macOS: Windows macOS # Intrinsic error timeit.timeit('datetime.now()', setup='from datetime import datetime')/1000000 0.31 μs 0.61 μs timeit.timeit('time.time()', setup='import time')/1000000 0.07 μs 0.08 μs # Performance cost setup = 'from datetime import datetime, timedelta; import time' - - timeit.timeit('datetime.now() + timedelta(1.000001)', setup=setup)/1000000 0.79 μs 1.26 μs timeit.timeit('datetime.fromtimestamp(time.time() + 1.000001)', setup=setup)/1000000 0.44 μs 0.69 μs # Resolution min time() delta (benchmark) x ms 716 ns min get_timestamp_float() delta 239 ns 239 ns 239 ns is the smallest difference that float allows at the magnitude of Unix time, as noted by Kelly Bundy in the comments. x = time.time() print((math.nextafter(x, 2*x) - x) * 1e9) # 238.4185791015625 Script Resolution script, based on https://www.python.org/dev/peps/pep-0564/#script: import math import time from typing import Final LOOPS = 10 ** 6 INITIAL_TIMESTAMP: Final[float] = time.time() INITIAL_TIMESTAMP_PERF_COUNTER: Final[float] = time.perf_counter() def get_timestamp_float() -> float: dt_sec = time.perf_counter() - INITIAL_TIMESTAMP_PERF_COUNTER return INITIAL_TIMESTAMP + dt_sec min_dt = [abs(time.time() - time.time()) for _ in range(LOOPS)] min_dt = min(filter(bool, min_dt)) print("min time() delta: %s ns" % math.ceil(min_dt * 1e9)) min_dt = [abs(get_timestamp_float() - get_timestamp_float()) for _ in range(LOOPS)] min_dt = min(filter(bool, min_dt)) print("min get_timestamp_float() delta: %s ns" % math.ceil(min_dt * 1e9)) | 5 | 6 |
71,104,848 | 2022-2-13 | https://stackoverflow.com/questions/71104848/mapping-complex-json-to-pandas-dataframe | BackgroundI have a complex nested JSON object, which I am trying to unpack into a pandas df in a very specific way. JSON Objectthis is an extract, containing randomized data of the JSON object, which shows examples of the hierarchy (inc. children) for 1x family (i.e. 'Falconer Family'), however there is 100s of them in total and this extract just has 1x family, however the full JSON object has multiple - { "meta": { "columns": [{ "key": "value", "display_name": "Adjusted Value (No Div, USD)", "output_type": "Number", "currency": "USD" }, { "key": "time_weighted_return", "display_name": "Current Quarter TWR (USD)", "output_type": "Percent", "currency": "USD" }, { "key": "time_weighted_return_2", "display_name": "YTD TWR (USD)", "output_type": "Percent", "currency": "USD" }, { "key": "_custom_twr_audit_note_911328", "display_name": "TWR Audit Note", "output_type": "Word" } ], "groupings": [{ "key": "_custom_name_747205", "display_name": "* Reporting Client Name" }, { "key": "_custom_new_entity_group_453577", "display_name": "NEW Entity Group" }, { "key": "_custom_level_2_624287", "display_name": "* Level 2" }, { "key": "legal_entity", "display_name": "Legal Entity" } ] }, "data": { "type": "portfolio_views", "attributes": { "total": { "name": "Total", "columns": { "time_weighted_return": -0.046732301295604683, "time_weighted_return_2": -0.046732301295604683, "_custom_twr_audit_note_911328": null, "value": 23132492.905107163 }, "children": [{ "name": "Falconer Family", "grouping": "_custom_name_747205", "columns": { "time_weighted_return": -0.046732301295604683, "time_weighted_return_2": -0.046732301295604683, "_custom_twr_audit_note_911328": null, "value": 23132492.905107163 }, "children": [{ "name": "Wealth Bucket A", "grouping": "_custom_new_entity_group_453577", "columns": { "time_weighted_return": -0.045960317420568164, "time_weighted_return_2": -0.045960317420568164, "_custom_twr_audit_note_911328": null, "value": 13264448.506587159 }, "children": [{ "name": "Asset Class A", "grouping": "_custom_level_2_624287", "columns": { "time_weighted_return": 0.000003434094574039648, "time_weighted_return_2": 0.000003434094574039648, "_custom_twr_audit_note_911328": null, "value": 3337.99 }, "children": [{ "entity_id": 10604454, "name": "HUDJ Trust", "grouping": "legal_entity", "columns": { "time_weighted_return": 0.000003434094574039648, "time_weighted_return_2": 0.000003434094574039648, "_custom_twr_audit_note_911328": null, "value": 3337.99 }, "children": [] }] }, { "name": "Asset Class B", "grouping": "_custom_level_2_624287", "columns": { "time_weighted_return": -0.025871339096964152, "time_weighted_return_2": -0.025871339096964152, "_custom_twr_audit_note_911328": null, "value": 1017004.7192636987 }, "children": [{ "entity_id": 10604454, "name": "HUDG Trust", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.025871339096964152, "time_weighted_return_2": -0.025871339096964152, "_custom_twr_audit_note_911328": null, "value": 1017004.7192636987 }, "children": [] }] }, { "name": "Asset Class C", "grouping": "_custom_level_2_624287", "columns": { "time_weighted_return": -0.030370376329670656, "time_weighted_return_2": -0.030370376329670656, "_custom_twr_audit_note_911328": null, "value": 231142.67772000004 }, "children": [{ "entity_id": 10604454, "name": "HKDJ Trust", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.030370376329670656, 
"time_weighted_return_2": -0.030370376329670656, "_custom_twr_audit_note_911328": null, "value": 231142.67772000004 }, "children": [] }] }, { "name": "Asset Class D", "grouping": "_custom_level_2_624287", "columns": { "time_weighted_return": -0.05382756475465478, "time_weighted_return_2": -0.05382756475465478, "_custom_twr_audit_note_911328": null, "value": 9791282.570000006 }, "children": [{ "entity_id": 10604454, "name": "HUDW Trust", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.05382756475465478, "time_weighted_return_2": -0.05382756475465478, "_custom_twr_audit_note_911328": null, "value": 9791282.570000006 }, "children": [] }] }, { "name": "Asset Class E", "grouping": "_custom_level_2_624287", "columns": { "time_weighted_return": -0.01351630404081805, "time_weighted_return_2": -0.01351630404081805, "_custom_twr_audit_note_911328": null, "value": 2153366.6396034593 }, "children": [{ "entity_id": 10604454, "name": "HJDJ Trust", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.01351630404081805, "time_weighted_return_2": -0.01351630404081805, "_custom_twr_audit_note_911328": null, "value": 2153366.6396034593 }, "children": [] }] }, { "name": "Asset Class F", "grouping": "_custom_level_2_624287", "columns": { "time_weighted_return": -0.002298190175237247, "time_weighted_return_2": -0.002298190175237247, "_custom_twr_audit_note_911328": null, "value": 68313.90999999999 }, "children": [{ "entity_id": 10604454, "name": "HADJ Trust", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.002298190175237247, "time_weighted_return_2": -0.002298190175237247, "_custom_twr_audit_note_911328": null, "value": 68313.90999999999 }, "children": [] }] } ] }, { "name": "Wealth Bucket B", "grouping": "_custom_new_entity_group_453577", "columns": { "time_weighted_return": -0.04769870075659244, "time_weighted_return_2": -0.04769870075659244, "_custom_twr_audit_note_911328": null, "value": 9868044.398519998 }, "children": [{ "name": "Asset Class A", "grouping": "_custom_level_2_624287", "columns": { "time_weighted_return": 0.000028632718065191298, "time_weighted_return_2": 0.000028632718065191298, "_custom_twr_audit_note_911328": null, "value": 10234.94 }, "children": [{ "entity_id": 10868778, "name": "2012 Desc Tr HBO Thalia", "grouping": "legal_entity", "columns": { "time_weighted_return": 0.0000282679297198829, "time_weighted_return_2": 0.0000282679297198829, "_custom_twr_audit_note_911328": null, "value": 244.28 }, "children": [] }, { "entity_id": 10643052, "name": "2013 Irrev Tr HBO Thalia", "grouping": "legal_entity", "columns": { "time_weighted_return": 0.000049373572795108345, "time_weighted_return_2": 0.000049373572795108345, "_custom_twr_audit_note_911328": null, "value": 5081.08 }, "children": [] }, { "entity_id": 10598341, "name": "Cht 11th Tr HBO Shirley", "grouping": "legal_entity", "columns": { "time_weighted_return": 0.000006609603754315074, "time_weighted_return_2": 0.000006609603754315074, "_custom_twr_audit_note_911328": null, "value": 1523.62 }, "children": [] }, { "entity_id": 10598337, "name": "Cht 11th Tr HBO Hannah", "grouping": "legal_entity", "columns": { "time_weighted_return": 0.000010999769004760296, "time_weighted_return_2": 0.000010999769004760296, "_custom_twr_audit_note_911328": null, "value": 1828.9 }, "children": [] }, { "entity_id": 10598334, "name": "Cht 11th Tr HBO Lau", "grouping": "legal_entity", "columns": { "time_weighted_return": 0.000006466673995619843, "time_weighted_return_2": 0.000006466673995619843, 
"_custom_twr_audit_note_911328": null, "value": 1557.06 }, "children": [] } ] }, { "name": "Asset Class B", "grouping": "_custom_level_2_624287", "columns": { "time_weighted_return": -0.024645947842438676, "time_weighted_return_2": -0.024645947842438676, "_custom_twr_audit_note_911328": null, "value": 674052.31962 }, "children": [{ "entity_id": 10868778, "name": "2012 Desc Tr HBO Thalia", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.043304004172576405, "time_weighted_return_2": -0.043304004172576405, "_custom_twr_audit_note_911328": null, "value": 52800.96 }, "children": [] }, { "entity_id": 10643052, "name": "2013 Irrev Tr HBO Thalia", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.022408434778798836, "time_weighted_return_2": -0.022408434778798836, "_custom_twr_audit_note_911328": null, "value": 599594.11962 }, "children": [] }, { "entity_id": 10598341, "name": "Cht 11th Tr HBO Shirley", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.039799855483646174, "time_weighted_return_2": -0.039799855483646174, "_custom_twr_audit_note_911328": null, "value": 7219.08 }, "children": [] }, { "entity_id": 10598337, "name": "Cht 11th Tr HBO Hannah", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.039799855483646174, "time_weighted_return_2": -0.039799855483646174, "_custom_twr_audit_note_911328": null, "value": 7219.08 }, "children": [] }, { "entity_id": 10598334, "name": "Cht 11th Tr HBO Lau", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.039799855483646174, "time_weighted_return_2": -0.039799855483646174, "_custom_twr_audit_note_911328": null, "value": 7219.08 }, "children": [] } ] }, { "name": "Asset Class C", "grouping": "_custom_level_2_624287", "columns": { "time_weighted_return": -0.03037038746301135, "time_weighted_return_2": -0.03037038746301135, "_custom_twr_audit_note_911328": null, "value": 114472.69744 }, "children": [{ "entity_id": 10868778, "name": "2012 Desc Tr HBO Thalia", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.030370390035505124, "time_weighted_return_2": -0.030370390035505124, "_custom_twr_audit_note_911328": null, "value": 114472.68744000001 }, "children": [] }, { "entity_id": 10643052, "name": "2013 Irrev Tr HBO Thalia", "grouping": "legal_entity", "columns": { "time_weighted_return": 0, "time_weighted_return_2": 0, "_custom_twr_audit_note_911328": null, "value": 0.01 }, "children": [] } ] }, { "name": "Asset Class D", "grouping": "_custom_level_2_624287", "columns": { "time_weighted_return": -0.06604362523792162, "time_weighted_return_2": -0.06604362523792162, "_custom_twr_audit_note_911328": null, "value": 5722529.229999997 }, "children": [{ "entity_id": 10868778, "name": "2012 Desc Tr HBO Thalia", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.06154960593668424, "time_weighted_return_2": -0.06154960593668424, "_custom_twr_audit_note_911328": null, "value": 1191838.9399999995 }, "children": [] }, { "entity_id": 10643052, "name": "2013 Irrev Tr HBO Thalia", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.06750460387418267, "time_weighted_return_2": -0.06750460387418267, "_custom_twr_audit_note_911328": null, "value": 4416618.520000002 }, "children": [] }, { "entity_id": 10598341, "name": "Cht 11th Tr HBO Shirley", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.05604507809250081, "time_weighted_return_2": -0.05604507809250081, "_custom_twr_audit_note_911328": null, "value": 38190.33 }, 
"children": [] }, { "entity_id": 10598337, "name": "Cht 11th Tr HBO Hannah", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.05604507809250081, "time_weighted_return_2": -0.05604507809250081, "_custom_twr_audit_note_911328": null, "value": 37940.72 }, "children": [] }, { "entity_id": 10598334, "name": "Cht 11th Tr HBO Lau", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.05604507809250081, "time_weighted_return_2": -0.05604507809250081, "_custom_twr_audit_note_911328": null, "value": 37940.72 }, "children": [] } ] }, { "name": "Asset Class E", "grouping": "_custom_level_2_624287", "columns": { "time_weighted_return": -0.017118805423322003, "time_weighted_return_2": -0.017118805423322003, "_custom_twr_audit_note_911328": null, "value": 3148495.0914600003 }, "children": [{ "entity_id": 10868778, "name": "2012 Desc Tr HBO Thalia", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.015251157805867277, "time_weighted_return_2": -0.015251157805867277, "_custom_twr_audit_note_911328": null, "value": 800493.06146 }, "children": [] }, { "entity_id": 10643052, "name": "2013 Irrev Tr HBO Thalia", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.01739609576880241, "time_weighted_return_2": -0.01739609576880241, "_custom_twr_audit_note_911328": null, "value": 2215511.2700000005 }, "children": [] }, { "entity_id": 10598341, "name": "Cht 11th Tr HBO Shirley", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.02085132265594647, "time_weighted_return_2": -0.02085132265594647, "_custom_twr_audit_note_911328": null, "value": 44031.21 }, "children": [] }, { "entity_id": 10598337, "name": "Cht 11th Tr HBO Hannah", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.02089393244695803, "time_weighted_return_2": -0.02089393244695803, "_custom_twr_audit_note_911328": null, "value": 44394.159999999996 }, "children": [] }, { "entity_id": 10598334, "name": "Cht 11th Tr HBO Lau", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.020607507059866248, "time_weighted_return_2": -0.020607507059866248, "_custom_twr_audit_note_911328": null, "value": 44065.39000000001 }, "children": [] } ] }, { "name": "Asset Class F", "grouping": "_custom_level_2_624287", "columns": { "time_weighted_return": -0.0014710489231547497, "time_weighted_return_2": -0.0014710489231547497, "_custom_twr_audit_note_911328": null, "value": 198260.12 }, "children": [{ "entity_id": 10868778, "name": "2012 Desc Tr HBO Thalia", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.0014477244560456848, "time_weighted_return_2": -0.0014477244560456848, "_custom_twr_audit_note_911328": null, "value": 44612.33 }, "children": [] }, { "entity_id": 10643052, "name": "2013 Irrev Tr HBO Thalia", "grouping": "legal_entity", "columns": { "time_weighted_return": -0.001477821083437858, "time_weighted_return_2": -0.001477821083437858, "_custom_twr_audit_note_911328": null, "value": 153647.78999999998 }, "children": [] } ] } ] } ] }] } }, "included": [] } } Notes on JSON Object extract data - data in here can be ignored, these are aggregated values for underlying children. meta - columns – contains the column header values I want to use for each applicable children ‘column` key:pair values. groupings - can be ignored. 
children hierarchy – there are 4x levels of children which can be identified by their name as follows – Family name (i.e., ‘Falconer Family’) Wealth Bucket name (e.g., ‘Wealth Bucket A’) Asset Class name (e.g., ‘Asset Class A’) Fund name (e.g., ‘HUDJ Trust’) Target Outputthis is an extract of target df structure I am trying to achieve - portfolio name entity_id Adjusted Value (No Div, USD) Current Quarter TWR (USD) YTD TWR (USD) TWR Audit Note Falconer Family Falconer Family 23132492.90510712 -0.046732301295604683 -0.046732301295604683 None Falconer Family Wealth Bucket A 13264448.506587146 -0.045960317420568164 -0.045960317420568164 None Falconer Family Asset Class A 3337.99 0.000003434094574039648 0.000003434094574039648 None Falconer Family HUDJ Trust 10604454 3337.99 0.000003434094574039648 0.000003434094574039648 None Falconer Family Asset Class B 1017004.7192636987 -0.025871339096964152 -0.025871339096964152 None Falconer Family HUDG Trust 10604454 1017004.7192636987 -0.025871339096964152 -0.025871339096964152 None Falconer Family Asset Class C 231142.67772000004 -0.030370376329670656 -0.030370376329670656 None Falconer Family HKDJ Trust 10604454 231142.67772000004 -0.030370376329670656 -0.030370376329670656 None Falconer Family Asset Class D 9791282.570000006 -0.05382756475465478 -0.05382756475465478 None Falconer Family HUDW Trust 10604454 9791282.570000006 -0.05382756475465478 -0.05382756475465478 None Notes on Target Output Portfolio header – for every row, I would like to map the top-level children name value [family name]. E.g., ‘Falconer Family. Name header – this should simply be the name value from each respective children. Entity ID – all 4th level children entity_id value should be mapped to this column. Data columns – regardless of level, all children have identical time_weighted_return, time-weighted_return2 and value columns which should be mapped respectively. TWR Audit Note – these children _custom_twr_audit_note_911318 values are currently blank, but will be utilized in the future. Current OutputMy main issue is that you can see that I have only been able to tap into the 1st [Family] and 2nd [Wealth Bucket] children level. This leaves me missing the 3rd [Asset Class] and 4th [Fund] - portfolio name Adjusted Value (No Div, USD) Current Quarter TWR (USD) YTD TWR (USD) TWR Audit Note) 0 Falconer Family Falconer Family 2.313249e+07 -0.046732 -0.046732 None 1 Falconer Family Wealth Bucket A 1.326445e+07 -0.045960 -0.045960 None 2 Falconer Family Wealth Bucket B 9.868044e+06 -0.047699 -0.047699 None Current codeThis is a function which gets me the correct df formatting, however my main issue is that I haven't been able to find a solution to returning all children, but rather only the top-level - # Function to read API response / JSON Object def response_writer(): with open('api_response_2022-02-13.json') as f: api_response = json.load(f) return api_response # Function to unpack JSON response into pandas dataframe. 
def unpack_response(): while True: try: api_response = response_writer() portfolio_views_children = api_response['data']['attributes']['total']['children'] portfolios = [] for portfolio in portfolio_views_children: entity_columns = [] # include portfolio itself within an iterable so the total is the header for entity in itertools.chain([portfolio], portfolio["children"]): entity_data = entity["columns"].copy() # don't mutate original response entity_data["portfolio"] = portfolio["name"] # from outer entity_data["name"] = entity["name"] entity_columns.append(entity_data) df = pd.DataFrame(entity_columns) portfolios.append(df) # combine dataframes df = pd.concat(portfolios) # reorder and rename column_ordering = {"portfolio": "portfolio", "name": "name"} column_ordering.update({c["key"]: c["display_name"] for c in api_response["meta"]["columns"]}) df = df[column_ordering.keys()] # beware: un-named cols will be dropped df = df.rename(columns=column_ordering) break except KeyError: print("-----------------------------------\n","API TIMEOUT ERROR: TRY AGAIN...", "\n-----------------------------------\n") return df unpack_response() HelpIn short, I am looking for some advice on how I can tap into the remaining children by enhancing the existing code. Whilst I have taken much time to fully explain my problem, please ask if anything isn't clear. Please note that the JSON may have multiple families, so the solution / advice offered must observe this | jsonpath-ng can parse even such a nested json object very easily. You can install this convenient library by the following command: pip install --upgrade jsonpath-ng Code: import json import jsonpath_ng as jp import pandas as pd def unpack_response(r): # Create a dataframe from extracted data expr = jp.parse('$..children.[*]') data = [{'full_path': str(m.full_path), **m.value} for m in expr.find(r)] df = pd.json_normalize(data).sort_values('full_path', ignore_index=True) # Append a portfolio column df['portfolio'] = df.loc[df.full_path.str.contains(r'total\.children\.\[\d+]$'), 'name'] df['portfolio'].fillna(method='ffill', inplace=True) # Deal with columns trans = {'columns.' 
+ c['key']: c['display_name'] for c in r['meta']['columns']} cols = ['full_path', 'portfolio', 'name', 'entity_id', 'Adjusted Value (No Div, USD)', 'Current Quarter TWR (USD)', 'YTD TWR (USD)', 'TWR Audit Note'] df = df.rename(columns=trans)[cols] return df # Load the sample data from file # with open('api_response_2022-02-13.json', 'r') as f: # api_response = json.load(f) # Load the sample data from string api_response = json.loads('{"meta": {"columns": [{"key": "value", "display_name": "Adjusted Value (No Div, USD)", "output_type": "Number", "currency": "USD"}, {"key": "time_weighted_return", "display_name": "Current Quarter TWR (USD)", "output_type": "Percent", "currency": "USD"}, {"key": "time_weighted_return_2", "display_name": "YTD TWR (USD)", "output_type": "Percent", "currency": "USD"}, {"key": "_custom_twr_audit_note_911328", "display_name": "TWR Audit Note", "output_type": "Word"}], "groupings": [{"key": "_custom_name_747205", "display_name": "* Reporting Client Name"}, {"key": "_custom_new_entity_group_453577", "display_name": "NEW Entity Group"}, {"key": "_custom_level_2_624287", "display_name": "* Level 2"}, {"key": "legal_entity", "display_name": "Legal Entity"}]}, "data": {"type": "portfolio_views", "attributes": {"total": {"name": "Total", "columns": {"time_weighted_return": -0.046732301295604683, "time_weighted_return_2": -0.046732301295604683, "_custom_twr_audit_note_911328": null, "value": 23132492.905107163}, "children": [{"name": "Falconer Family", "grouping": "_custom_name_747205", "columns": {"time_weighted_return": -0.046732301295604683, "time_weighted_return_2": -0.046732301295604683, "_custom_twr_audit_note_911328": null, "value": 23132492.905107163}, "children": [{"name": "Wealth Bucket A", "grouping": "_custom_new_entity_group_453577", "columns": {"time_weighted_return": -0.045960317420568164, "time_weighted_return_2": -0.045960317420568164, "_custom_twr_audit_note_911328": null, "value": 13264448.506587159}, "children": [{"name": "Asset Class A", "grouping": "_custom_level_2_624287", "columns": {"time_weighted_return": 3.434094574039648e-06, "time_weighted_return_2": 3.434094574039648e-06, "_custom_twr_audit_note_911328": null, "value": 3337.99}, "children": [{"entity_id": 10604454, "name": "HUDJ Trust", "grouping": "legal_entity", "columns": {"time_weighted_return": 3.434094574039648e-06, "time_weighted_return_2": 3.434094574039648e-06, "_custom_twr_audit_note_911328": null, "value": 3337.99}, "children": []}]}, {"name": "Asset Class B", "grouping": "_custom_level_2_624287", "columns": {"time_weighted_return": -0.025871339096964152, "time_weighted_return_2": -0.025871339096964152, "_custom_twr_audit_note_911328": null, "value": 1017004.7192636987}, "children": [{"entity_id": 10604454, "name": "HUDG Trust", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.025871339096964152, "time_weighted_return_2": -0.025871339096964152, "_custom_twr_audit_note_911328": null, "value": 1017004.7192636987}, "children": []}]}, {"name": "Asset Class C", "grouping": "_custom_level_2_624287", "columns": {"time_weighted_return": -0.030370376329670656, "time_weighted_return_2": -0.030370376329670656, "_custom_twr_audit_note_911328": null, "value": 231142.67772000004}, "children": [{"entity_id": 10604454, "name": "HKDJ Trust", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.030370376329670656, "time_weighted_return_2": -0.030370376329670656, "_custom_twr_audit_note_911328": null, "value": 231142.67772000004}, "children": []}]}, {"name": "Asset 
Class D", "grouping": "_custom_level_2_624287", "columns": {"time_weighted_return": -0.05382756475465478, "time_weighted_return_2": -0.05382756475465478, "_custom_twr_audit_note_911328": null, "value": 9791282.570000006}, "children": [{"entity_id": 10604454, "name": "HUDW Trust", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.05382756475465478, "time_weighted_return_2": -0.05382756475465478, "_custom_twr_audit_note_911328": null, "value": 9791282.570000006}, "children": []}]}, {"name": "Asset Class E", "grouping": "_custom_level_2_624287", "columns": {"time_weighted_return": -0.01351630404081805, "time_weighted_return_2": -0.01351630404081805, "_custom_twr_audit_note_911328": null, "value": 2153366.6396034593}, "children": [{"entity_id": 10604454, "name": "HJDJ Trust", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.01351630404081805, "time_weighted_return_2": -0.01351630404081805, "_custom_twr_audit_note_911328": null, "value": 2153366.6396034593}, "children": []}]}, {"name": "Asset Class F", "grouping": "_custom_level_2_624287", "columns": {"time_weighted_return": -0.002298190175237247, "time_weighted_return_2": -0.002298190175237247, "_custom_twr_audit_note_911328": null, "value": 68313.90999999999}, "children": [{"entity_id": 10604454, "name": "HADJ Trust", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.002298190175237247, "time_weighted_return_2": -0.002298190175237247, "_custom_twr_audit_note_911328": null, "value": 68313.90999999999}, "children": []}]}]}, {"name": "Wealth Bucket B", "grouping": "_custom_new_entity_group_453577", "columns": {"time_weighted_return": -0.04769870075659244, "time_weighted_return_2": -0.04769870075659244, "_custom_twr_audit_note_911328": null, "value": 9868044.398519998}, "children": [{"name": "Asset Class A", "grouping": "_custom_level_2_624287", "columns": {"time_weighted_return": 2.8632718065191298e-05, "time_weighted_return_2": 2.8632718065191298e-05, "_custom_twr_audit_note_911328": null, "value": 10234.94}, "children": [{"entity_id": 10868778, "name": "2012 Desc Tr HBO Thalia", "grouping": "legal_entity", "columns": {"time_weighted_return": 2.82679297198829e-05, "time_weighted_return_2": 2.82679297198829e-05, "_custom_twr_audit_note_911328": null, "value": 244.28}, "children": []}, {"entity_id": 10643052, "name": "2013 Irrev Tr HBO Thalia", "grouping": "legal_entity", "columns": {"time_weighted_return": 4.9373572795108345e-05, "time_weighted_return_2": 4.9373572795108345e-05, "_custom_twr_audit_note_911328": null, "value": 5081.08}, "children": []}, {"entity_id": 10598341, "name": "Cht 11th Tr HBO Shirley", "grouping": "legal_entity", "columns": {"time_weighted_return": 6.609603754315074e-06, "time_weighted_return_2": 6.609603754315074e-06, "_custom_twr_audit_note_911328": null, "value": 1523.62}, "children": []}, {"entity_id": 10598337, "name": "Cht 11th Tr HBO Hannah", "grouping": "legal_entity", "columns": {"time_weighted_return": 1.0999769004760296e-05, "time_weighted_return_2": 1.0999769004760296e-05, "_custom_twr_audit_note_911328": null, "value": 1828.9}, "children": []}, {"entity_id": 10598334, "name": "Cht 11th Tr HBO Lau", "grouping": "legal_entity", "columns": {"time_weighted_return": 6.466673995619843e-06, "time_weighted_return_2": 6.466673995619843e-06, "_custom_twr_audit_note_911328": null, "value": 1557.06}, "children": []}]}, {"name": "Asset Class B", "grouping": "_custom_level_2_624287", "columns": {"time_weighted_return": -0.024645947842438676, "time_weighted_return_2": 
-0.024645947842438676, "_custom_twr_audit_note_911328": null, "value": 674052.31962}, "children": [{"entity_id": 10868778, "name": "2012 Desc Tr HBO Thalia", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.043304004172576405, "time_weighted_return_2": -0.043304004172576405, "_custom_twr_audit_note_911328": null, "value": 52800.96}, "children": []}, {"entity_id": 10643052, "name": "2013 Irrev Tr HBO Thalia", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.022408434778798836, "time_weighted_return_2": -0.022408434778798836, "_custom_twr_audit_note_911328": null, "value": 599594.11962}, "children": []}, {"entity_id": 10598341, "name": "Cht 11th Tr HBO Shirley", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.039799855483646174, "time_weighted_return_2": -0.039799855483646174, "_custom_twr_audit_note_911328": null, "value": 7219.08}, "children": []}, {"entity_id": 10598337, "name": "Cht 11th Tr HBO Hannah", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.039799855483646174, "time_weighted_return_2": -0.039799855483646174, "_custom_twr_audit_note_911328": null, "value": 7219.08}, "children": []}, {"entity_id": 10598334, "name": "Cht 11th Tr HBO Lau", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.039799855483646174, "time_weighted_return_2": -0.039799855483646174, "_custom_twr_audit_note_911328": null, "value": 7219.08}, "children": []}]}, {"name": "Asset Class C", "grouping": "_custom_level_2_624287", "columns": {"time_weighted_return": -0.03037038746301135, "time_weighted_return_2": -0.03037038746301135, "_custom_twr_audit_note_911328": null, "value": 114472.69744}, "children": [{"entity_id": 10868778, "name": "2012 Desc Tr HBO Thalia", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.030370390035505124, "time_weighted_return_2": -0.030370390035505124, "_custom_twr_audit_note_911328": null, "value": 114472.68744000001}, "children": []}, {"entity_id": 10643052, "name": "2013 Irrev Tr HBO Thalia", "grouping": "legal_entity", "columns": {"time_weighted_return": 0, "time_weighted_return_2": 0, "_custom_twr_audit_note_911328": null, "value": 0.01}, "children": []}]}, {"name": "Asset Class D", "grouping": "_custom_level_2_624287", "columns": {"time_weighted_return": -0.06604362523792162, "time_weighted_return_2": -0.06604362523792162, "_custom_twr_audit_note_911328": null, "value": 5722529.229999997}, "children": [{"entity_id": 10868778, "name": "2012 Desc Tr HBO Thalia", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.06154960593668424, "time_weighted_return_2": -0.06154960593668424, "_custom_twr_audit_note_911328": null, "value": 1191838.9399999995}, "children": []}, {"entity_id": 10643052, "name": "2013 Irrev Tr HBO Thalia", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.06750460387418267, "time_weighted_return_2": -0.06750460387418267, "_custom_twr_audit_note_911328": null, "value": 4416618.520000002}, "children": []}, {"entity_id": 10598341, "name": "Cht 11th Tr HBO Shirley", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.05604507809250081, "time_weighted_return_2": -0.05604507809250081, "_custom_twr_audit_note_911328": null, "value": 38190.33}, "children": []}, {"entity_id": 10598337, "name": "Cht 11th Tr HBO Hannah", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.05604507809250081, "time_weighted_return_2": -0.05604507809250081, "_custom_twr_audit_note_911328": null, "value": 37940.72}, "children": 
[]}, {"entity_id": 10598334, "name": "Cht 11th Tr HBO Lau", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.05604507809250081, "time_weighted_return_2": -0.05604507809250081, "_custom_twr_audit_note_911328": null, "value": 37940.72}, "children": []}]}, {"name": "Asset Class E", "grouping": "_custom_level_2_624287", "columns": {"time_weighted_return": -0.017118805423322003, "time_weighted_return_2": -0.017118805423322003, "_custom_twr_audit_note_911328": null, "value": 3148495.0914600003}, "children": [{"entity_id": 10868778, "name": "2012 Desc Tr HBO Thalia", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.015251157805867277, "time_weighted_return_2": -0.015251157805867277, "_custom_twr_audit_note_911328": null, "value": 800493.06146}, "children": []}, {"entity_id": 10643052, "name": "2013 Irrev Tr HBO Thalia", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.01739609576880241, "time_weighted_return_2": -0.01739609576880241, "_custom_twr_audit_note_911328": null, "value": 2215511.2700000005}, "children": []}, {"entity_id": 10598341, "name": "Cht 11th Tr HBO Shirley", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.02085132265594647, "time_weighted_return_2": -0.02085132265594647, "_custom_twr_audit_note_911328": null, "value": 44031.21}, "children": []}, {"entity_id": 10598337, "name": "Cht 11th Tr HBO Hannah", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.02089393244695803, "time_weighted_return_2": -0.02089393244695803, "_custom_twr_audit_note_911328": null, "value": 44394.159999999996}, "children": []}, {"entity_id": 10598334, "name": "Cht 11th Tr HBO Lau", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.020607507059866248, "time_weighted_return_2": -0.020607507059866248, "_custom_twr_audit_note_911328": null, "value": 44065.39000000001}, "children": []}]}, {"name": "Asset Class F", "grouping": "_custom_level_2_624287", "columns": {"time_weighted_return": -0.0014710489231547497, "time_weighted_return_2": -0.0014710489231547497, "_custom_twr_audit_note_911328": null, "value": 198260.12}, "children": [{"entity_id": 10868778, "name": "2012 Desc Tr HBO Thalia", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.0014477244560456848, "time_weighted_return_2": -0.0014477244560456848, "_custom_twr_audit_note_911328": null, "value": 44612.33}, "children": []}, {"entity_id": 10643052, "name": "2013 Irrev Tr HBO Thalia", "grouping": "legal_entity", "columns": {"time_weighted_return": -0.001477821083437858, "time_weighted_return_2": -0.001477821083437858, "_custom_twr_audit_note_911328": null, "value": 153647.78999999998}, "children": []}]}]}]}]}}, "included": []}}') df = unpack_response(api_response) Explanation: Firstly, you can confirm the expected output by the following command: print(df.iloc[:5:,1:]) portfolio name entity_id Adjusted Value (No Div, USD) Current Quarter TWR (USD) YTD TWR (USD) TWR Audit Note Falconer Family Falconer Family nan 2.31325e+07 -0.0467323 -0.0467323 Falconer Family Wealth Bucket A nan 1.32644e+07 -0.0459603 -0.0459603 Falconer Family Asset Class A nan 3337.99 3.43409e-06 3.43409e-06 Falconer Family HUDJ Trust 1.06045e+07 3337.99 3.43409e-06 3.43409e-06 Falconer Family Asset Class B nan 1.017e+06 -0.0258713 -0.0258713 Subsequently, you can see one of the wonderful features in jsonpath-ng by the following command: print(df.iloc[:10,:3]) full_path portfolio name data.attributes.total.children.[0] Falconer Family Falconer Family 
data.attributes.total.children.[0].children.[0]                             Falconer Family    Wealth Bucket A
data.attributes.total.children.[0].children.[0].children.[0]               Falconer Family    Asset Class A
data.attributes.total.children.[0].children.[0].children.[0].children.[0]  Falconer Family    HUDJ Trust
data.attributes.total.children.[0].children.[0].children.[1]               Falconer Family    Asset Class B
data.attributes.total.children.[0].children.[0].children.[1].children.[0]  Falconer Family    HUDG Trust
data.attributes.total.children.[0].children.[0].children.[2]               Falconer Family    Asset Class C
data.attributes.total.children.[0].children.[0].children.[2].children.[0]  Falconer Family    HKDJ Trust
data.attributes.total.children.[0].children.[0].children.[3]               Falconer Family    Asset Class D
data.attributes.total.children.[0].children.[0].children.[3].children.[0]  Falconer Family    HUDW Trust
Thanks to the full_path column, you can grasp the nesting level of the extracted data in each row at a glance. That is also how I attached the correct portfolio values: by using these paths. In terms of the code, the key point is the following line:
expr = jp.parse('$..children.[*]')
This expression searches the children attributes at any level of the JSON object. The project's README.rst explains what each piece of syntax stands for:
Syntax                  Meaning
$                       The root object
jsonpath1 .. jsonpath2  All nodes matched by jsonpath2 that descend from any node matching jsonpath1
[*]                     Any array index
Speed: I compared the speed of the above jsonpath-ng method with the nested-for-loop method shown below.
# Comparison:
Method           Duration                                                                  Speed ratio
jsonpath-ng      9.72 ms ± 342 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)    5.7 (faster)
Nested-for-loop  55.4 ms ± 7.39 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)    1
# Code of the nested-for-loop method:
def unpack_response(r):
    df = pd.DataFrame()
    for _, r1 in pd.json_normalize(r, ['data', 'attributes', 'total', 'children']).iterrows():
        r1['portfolio'] = r1['name']
        df = df.append(r1)
        for _, r2 in pd.json_normalize(r1.children).iterrows():
            df = df.append(r2)
            for _, r3 in pd.json_normalize(r2.children).iterrows():
                df = df.append(r3).append(pd.json_normalize(r3.children))
    df['portfolio'].fillna(method='ffill', inplace=True)
    trans = {'columns.' + c['key']: c['display_name'] for c in r['meta']['columns']}
    cols = ['portfolio', 'name', 'entity_id', 'Adjusted Value (No Div, USD)', 'Current Quarter TWR (USD)', 'YTD TWR (USD)', 'TWR Audit Note']
    df = df.rename(columns=trans)[cols].reset_index(drop=True)
    return df
| 10 | 6 |
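A minimal sketch of the recursive-descent idea this answer relies on, run against a small invented dictionary (the toy data and printed names are illustrative assumptions, not part of the original API response):
import jsonpath_ng as jp

# Toy nested structure standing in for the API response (assumption for illustration)
toy = {
    "total": {
        "name": "Total",
        "children": [
            {"name": "Family A", "children": [
                {"name": "Bucket 1", "children": []}
            ]}
        ]
    }
}

expr = jp.parse('$..children.[*]')   # every element of any "children" list, at any depth
for match in expr.find(toy):
    # full_path records where in the nesting each node was found,
    # which is what populates the full_path column shown above
    print(str(match.full_path), '->', match.value['name'])
Each match carries both the node's value and its full_path, which is exactly how the answer tells the nesting levels apart.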
71,116,130 | 2022-2-14 | https://stackoverflow.com/questions/71116130/ipykernel-throwing-typeerror-object-nonetype-cant-be-used-in-await-expressi | I know that several similar questions exist on this topic, but to my knowledge all of them concern async code (wrongly) written by the user, whereas in my case it comes from a Python package. I have a Jupyter notebook whose first cell is ! pip install numpy ! pip install pandas and I want to execute the notebook automatically using Papermill. There is no problem on my local machine (Windows 11 with Python 3.7): I install ipykernel and Papermill and everything works fine. The problem appears when I try the same on my BitBucket pipeline (Python image 3-alpine, but it also happens under other images); the first cell throws the following error:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 461, in dispatch_queue
    await self.process_one()
  File "/usr/local/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 450, in process_one
    await dispatch(*args)
TypeError: object NoneType can't be used in 'await' expression
which makes the script stop at the 2nd cell, where I import numpy. | This seems to be a bug in ipykernel 6.9.0 - options that worked for me: upgrade to 6.9.1 (the latest version as of 2022-02-22), e.g. via pip install ipykernel --upgrade; or downgrade to 6.8.0 (if upgrading messes with other dependencies you might have), e.g. via pip install ipykernel==6.8.0 | 6 | 2 |
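Since the failure only shows up on the CI image, it can help to confirm at runtime which ipykernel release the pipeline actually resolved. The sketch below is a convenience check, not part of the original answer; it assumes the packaging helper is importable, which it normally is wherever pip is installed:
import ipykernel
from packaging import version

print("ipykernel", ipykernel.__version__)
# 6.9.0 is the release the accepted answer identifies as affected
if version.parse(ipykernel.__version__) == version.parse("6.9.0"):
    print("Affected release detected - pin ipykernel==6.8.0 or upgrade to >=6.9.1")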
71,049,497 | 2022-2-9 | https://stackoverflow.com/questions/71049497/visual-studio-code-is-not-loading-my-python-interpreters | For this project I've been using a Python interpreter that I set up in my venv. Recently I changed my Python interpreter; the interpreter I've set as the default in my user settings (as the VS Code docs on Python environments describe) and also in my JSON settings file now refuses to load. I'm only offered the default conda Python interpreter, with no other options. Here is my JSON file:
settings.json
{
    "python.defaultInterpreterPath": "C:/Users/houst/Envs/Mach/Scripts/python.exe"
}
| I think you should use \ instead of / in the path. To be safe, just copy-paste the path from Windows Explorer. Another solution: writing the path in JSON should definitely work, but I prefer using the functionality VS Code provides. Press Ctrl+Shift+P to open the command palette, then type >Python: Select Interpreter. This should allow you to select an existing interpreter or to add an interpreter path in VS Code. I'm attaching a screenshot for your convenience. Keep in mind that if you add the Python interpreter path to your system's PATH environment variable, VS Code should find the interpreter automatically.
| 5 | 1 |
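If you follow the backslash suggestion in settings.json, note that backslashes must be escaped inside JSON strings; a sketch for the venv path from the question (assuming the path itself is unchanged) would look like:
{
    "python.defaultInterpreterPath": "C:\\Users\\houst\\Envs\\Mach\\Scripts\\python.exe"
}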
71,099,132 | 2022-2-13 | https://stackoverflow.com/questions/71099132/how-to-set-schema-translate-map-in-sqlalchemy-object-in-flask-app | My app.py file:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgres:////tmp/test.db'
db = SQLAlchemy(app)
# refer https://flask-sqlalchemy.palletsprojects.com/en/2.x/api/#flask_sqlalchemy.SQLAlchemy
One of my model classes, where I imported db:
from app import db
Base = declarative_base()

# User class
class User(db.Model, Base):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True, nullable=False)
    email = db.Column(db.String(120), unique=True, nullable=False)

    def __repr__(self):
        return '<User %r>' % self.username

    def get_user_by_id(self, id):
        return self.query.get(id)
My database has the same set of tables in different schemas (multi-tenancy), and I need to select the schema on the fly for the tenant that initiated the request, using before_request (grabbing the tenant_id from the subdomain URL). I found that Postgres supports selecting the schema name on the fly via schema_translate_map (ref. https://docs.sqlalchemy.org/en/14/core/connections.html#translation-of-schema-names), which lives under execution_options (https://docs.sqlalchemy.org/en/14/core/connections.html#sqlalchemy.engine.Connection.execution_options). In my code snippet above, where you see db = SQLAlchemy(app), the official documentation says two parameters can be set when creating the SQLAlchemy object - session_options and engine_options - but not execution_options (ref. https://flask-sqlalchemy.palletsprojects.com/en/2.x/api/#flask_sqlalchemy.SQLAlchemy). So how do I set schema_translate_map when I am creating the SQLAlchemy object? I tried this:
db = SQLAlchemy(app,
    session_options={
        "autocommit": True,
        "autoflush": False,
        "schema_translate_map": {
            None: "public"
        }
    }
)
But obviously it did not work, because schema_translate_map belongs under execution_options, as mentioned here https://docs.sqlalchemy.org/en/14/core/connections.html#translation-of-schema-names. Does anyone have an idea how to set schema_translate_map at the time of creating the SQLAlchemy object? My goal is to set it dynamically for each request. I want to control it from this centralized place, rather than going into each model file and specifying it when I execute queries. I am aware of doing this differently, as suggested here https://stackoverflow.com/a/56490246/1560470, but I need to set it somewhere around db = SQLAlchemy(app) in the app.py file only. Then I import db in all my model classes (as shown above), and in those model classes all queries execute under the selected schema. | I found a way to accomplish it. This is what was needed:
db = SQLAlchemy(app,
    session_options={
        "autocommit": True,
        "autoflush": False
    },
    engine_options={
        "execution_options": {
            "schema_translate_map": {
                None: "public",
                "abc": "xyz"
            }
        }
    }
)
| 5 | 5 |
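The accepted answer fixes the translate map at engine-creation time. For the per-request switching the question asks about, one possible sketch builds on SQLAlchemy's documented session.connection(execution_options=...) hook; the tenant-lookup helper and schema names below are placeholders, and the exact interaction with the session lifecycle and the autocommit setting should be verified before relying on it:
from flask import request

def tenant_schema():
    # placeholder: derive the schema name from the subdomain, as described in the question
    return request.host.split('.')[0]

@app.before_request
def select_tenant_schema():
    # apply the translate map to the connection this request's session will use,
    # so model queries such as User.query run against the tenant's schema
    db.session.connection(
        execution_options={"schema_translate_map": {None: tenant_schema()}}
    )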
71,121,056 | 2022-2-15 | https://stackoverflow.com/questions/71121056/plotly-python-update-figure-with-dropmenu | i am currently working with plotly i have a function called plotChart that takes a dataframe as input and plots a candlestick chart. I am trying to figure out a way to pass a list of dataframes to the function plotChart and use a plotly dropdown menu to show the options on the input list by the stock name. The drop down menu will have the list of dataframe and when an option is clicked on it will update the figure in plotly is there away to do this. below is the code i have to plot a single dataframe def make_multi_plot(df): fig = make_subplots(rows=2, cols=2, shared_xaxes=True, vertical_spacing=0.03, subplot_titles=('OHLC', 'Volume Profile'), row_width=[0.2, 0.7]) for s in df.name.unique(): trace1 = go.Candlestick( x=df.loc[df.name.isin([s])].time, open=df.loc[df.name.isin([s])].open, high=df.loc[df.name.isin([s])].high, low=df.loc[df.name.isin([s])].low, close=df.loc[df.name.isin([s])].close, name = s) fig.append_trace(trace1,1,1) fig.append_trace(go.Scatter(x=df.loc[df.name.isin([s])].time, y=df.loc[df.name.isin([s])].BbandsMid, mode='lines',name='MidBollinger'),1,1) fig.append_trace(go.Scatter(x=df.loc[df.name.isin([s])].time, y=df.loc[df.name.isin([s])].BbandsUpp, mode='lines',name='UpperBollinger'),1,1) fig.append_trace(go.Scatter(x=df.loc[df.name.isin([s])].time, y=df.loc[df.name.isin([s])].BbandsLow, mode='lines',name='LowerBollinger'),1,1) fig.append_trace(go.Scatter(x=df.loc[df.name.isin([s])].time, y=df.loc[df.name.isin([s])].vwap, mode='lines',name='VWAP'),1,1) fig.append_trace(go.Scatter(x=df.loc[df.name.isin([s])].time, y=df.loc[df.name.isin([s])].STDEV_1, mode='lines',name='UPPERVWAP'),1,1) fig.append_trace(go.Scatter(x=df.loc[df.name.isin([s])].time, y=df.loc[df.name.isin([s])].STDEV_N1, mode='lines',name='LOWERVWAP'),1,1) fig.append_trace(go.Scatter(x=df.loc[df.name.isin([s])].time, y=df.loc[df.name.isin([s])].KcMid, mode='lines',name='KcMid'),1,1) fig.append_trace(go.Scatter(x=df.loc[df.name.isin([s])].time, y=df.loc[df.name.isin([s])].KcUpper, mode='lines',name='KcUpper'),1,1) fig.append_trace(go.Scatter(x=df.loc[df.name.isin([s])].time, y=df.loc[df.name.isin([s])].KcLow, mode='lines',name='KcLow'),1,1) trace2 = go.Bar( x=df.loc[df.name.isin([s])].time, y=df.loc[df.name.isin([s])].volume, name = s) fig.append_trace(trace2,2,1) # fig.update_layout(title_text=s) graph_cnt=len(fig.data) tr = 11 symbol_cnt =len(df.name.unique()) for g in range(tr, graph_cnt): fig.update_traces(visible=False, selector=g) #print(g) def create_layout_button(k, symbol): start, end = tr*k, tr*k+2 visibility = [False]*tr*symbol_cnt visibility[start:end] = [True,True,True,True,True,True,True,True,True,True,True] return dict(label = symbol, method = 'restyle', args = [{'visible': visibility[:-1], 'title': symbol, 'showlegend': False}]) fig.update(layout_xaxis_rangeslider_visible=False) fig.update_layout( updatemenus=[go.layout.Updatemenu( active = 0, buttons = [create_layout_button(k, s) for k, s in enumerate(df.name.unique())] ) ]) fig.show() i am trying to add annotations to the figure it will be different for each chart below is how i had it setup for the single chart df['superTrend'] is a Boolean column for i in range(df.first_valid_index()+1,len(df.index)): prev = i - 1 if df['superTrend'][i] != df['superTrend'][prev] and not np.isnan(df['superTrend'][i]) : #print(i,df['inUptrend'][i]) fig.add_annotation(x=df['time'][i], y=df['open'][i], text= 'Buy' if df['superTrend'][i] else 
'Sell', showarrow=True, arrowhead=6, font=dict( #family="Courier New, monospace", size=20, #color="#ffffff" ),) | I adapted an example from the plotly community to your example and created the code. The point of creation is to create the data for each subplot and then switch between them by means of buttons. The sample data is created using representative companies of US stocks. one issue is that the title is set but not displayed. We are currently investigating this issue. import yfinance as yf import plotly.graph_objects as go from plotly.subplots import make_subplots import pandas as pd symbols = ['AAPL','GOOG','TSLA'] stocks = pd.DataFrame() for s in symbols: data = yf.download(s, start="2021-01-01", end="2021-12-31") data['mean'] = data['Close'].rolling(20).mean() data['std'] = data['Close'].rolling(20).std() data['upperBand'] = data['mean'] + (data['std'] * 2) data.reset_index(inplace=True) data['symbol'] = s stocks = stocks.append(data, ignore_index=True) def make_multi_plot(df): fig = make_subplots(rows=2, cols=1, shared_xaxes=True, vertical_spacing=0.03, subplot_titles=('OHLC', 'Volume Profile'), row_width=[0.2, 0.7]) for s in df.symbol.unique(): trace1 = go.Candlestick( x=df.loc[df.symbol.isin([s])].Date, open=df.loc[df.symbol.isin([s])].Open, high=df.loc[df.symbol.isin([s])].High, low=df.loc[df.symbol.isin([s])].Low, close=df.loc[df.symbol.isin([s])].Close, name=s) fig.append_trace(trace1,1,1) trace2 = go.Scatter( x=df.loc[df.symbol.isin([s])].Date, y=df.loc[df.symbol.isin([s])].upperBand, name=s) fig.append_trace(trace2,1,1) trace3 = go.Bar( x=df.loc[df.symbol.isin([s])].Date, y=df.loc[df.symbol.isin([s])].Volume, name=s) fig.append_trace(trace3,2,1) # fig.update_layout(title_text=s) # Calculate the total number of graphs graph_cnt=len(fig.data) # Number of Symbols symbol_cnt =len(df.symbol.unique()) # Number of graphs per symbol tr = 3 # Hide setting for initial display for g in range(tr, graph_cnt): fig.update_traces(visible=False, selector=g) def create_layout_button(k, symbol): start, end = tr*k, tr*k+2 visibility = [False]*tr*symbol_cnt # Number of graphs per symbol, so if you add a graph, add True. visibility[start:end] = [True,True,True] return dict(label = symbol, method = 'restyle', args = [{'visible': visibility[:-1], 'title': symbol, 'showlegend': True}]) fig.update(layout_xaxis_rangeslider_visible=False) fig.update_layout( updatemenus=[go.layout.Updatemenu( active = 0, buttons = [create_layout_button(k, s) for k, s in enumerate(df.symbol.unique())] ) ]) fig.show() return fig.layout make_multi_plot(stocks) | 6 | 2 |
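Stripped of the per-symbol subplots, the mechanism this answer relies on is the visibility mask that each dropdown button writes. A minimal sketch with two dummy traces (purely illustrative data, not the stock dataframes from the question) looks like:
import plotly.graph_objects as go

fig = go.Figure([
    go.Scatter(y=[1, 3, 2], name="AAA"),
    go.Scatter(y=[2, 1, 4], name="BBB", visible=False),
])
fig.update_layout(
    updatemenus=[dict(
        active=0,
        buttons=[
            # each button shows one trace and hides the other
            dict(label="AAA", method="update", args=[{"visible": [True, False]}]),
            dict(label="BBB", method="update", args=[{"visible": [False, True]}]),
        ],
    )]
)
fig.show()
In the full answer the same idea is applied per symbol: every symbol contributes a fixed number of traces, and each button's visibility list turns exactly that symbol's traces on.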