question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---
74,826,321 | 2022-12-16 | https://stackoverflow.com/questions/74826321/how-to-type-hint-a-matplotlib-figure-object-in-python3 | I am trying to add type hints for data returned by plt.subplots. That works fine for plt.Axes, but I can't seem to find a solution for Figure. Any ideas what I could do? An abbreviated version of my code is: def draw_graph() -> Tuple[plt.Figure, plt.Axes]: fig, ax = plt.subplots(figsize=(14,10)) return (fig, ax) I get the message: "Figure" is not a known member of module Pylance | With the latest Matplotlib (v3.7.1) I was able to do the following: import matplotlib.pyplot as plt import matplotlib.figure def draw_graph() -> Tuple[matplotlib.figure.Figure, plt.Axes]: fig, ax = plt.subplots(figsize=(14,10)) return (fig, ax) I haven't tested using plt.Figure, but my IDE (i.e., VS Code) was not giving me any errors with plt.Figure. | 6 | 6 |
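A minimal runnable sketch of the accepted approach; importing Axes from matplotlib.axes is an optional extra for symmetry (the question reports that plt.Axes already works), and the figsize is illustrative:

```python
from typing import Tuple

import matplotlib.pyplot as plt
from matplotlib.axes import Axes
from matplotlib.figure import Figure


def draw_graph() -> Tuple[Figure, Axes]:
    # plt.subplots() returns a (Figure, Axes) pair for a single-axes figure
    fig, ax = plt.subplots(figsize=(14, 10))
    return fig, ax
```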
74,867,332 | 2022-12-20 | https://stackoverflow.com/questions/74867332/websocket-django-channels-doesnt-work-with-postman | Django Channels Throw error with postman while working well with Html. I'm following Django Socket Tutorial "here's the error showing in Django". WebSocket HANDSHAKING /ws/chat/roomName/ [127.0.0.1:56504] WebSocket REJECT /ws/chat/roomName/ [127.0.0.1:56504] WebSocket DISCONNECT /ws/chat/roomName/ [127.0.0.1:56504] "Error showing in postman when connecting to ws://127.0.0.1:8000/ws/chat/roomName/" Sec-WebSocket-Version: 13 Sec-WebSocket-Key: fSSuMD2QozIrgywqTX38/A== Connection: Upgrade Upgrade: websocket Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits Host: 127.0.0.1:8000 My Code asgi.py django_asgi_app = get_asgi_application() import digital_signage.playlist_management.routing application = ProtocolTypeRouter( { "http": django_asgi_app, "websocket": AllowedHostsOriginValidator( AuthMiddlewareStack(URLRouter(digital_signage.playlist_management.routing.websocket_urlpatterns)) ), } ) consumer.py class ChatConsumer(WebsocketConsumer): def connect(self): print("self", self) self.accept() | Stepping through the \channels\security\websocket.py module, Channels's OriginValidator is looking for an origin header, not a hosts header, which is what Postman defaults to. Assuming your ALLOWED_HOSTS looks like: ALLOWED_HOSTS = ["localhost", "127.0.0.1"] Then in Postman, in headers add: origin:http://127.0.0.1:8000 And that will pass the origin validator without needing to modify allowed hosts in Django or the ASGI config. | 3 | 5 |
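The same Origin fix can also be scripted; a hedged sketch using the third-party websockets package (not part of the question), whose origin argument plays the role of the header added in Postman:

```python
import asyncio

import websockets  # pip install websockets


async def main():
    # `origin` sets the Origin header on the opening handshake, which is
    # what Channels's AllowedHostsOriginValidator checks
    uri = "ws://127.0.0.1:8000/ws/chat/roomName/"
    async with websockets.connect(uri, origin="http://127.0.0.1:8000") as ws:
        await ws.send("hello")


asyncio.run(main())
```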
74,837,978 | 2022-12-17 | https://stackoverflow.com/questions/74837978/syntaxerror-invalid-non-printable-character-u00a0-in-python | I'm getting the error: SyntaxError: invalid non-printable character U+00A0 When I'm running the code below: # coding=utf-8 from PIL import Image img = Image.open("img.png") I have tried to load different images with different formats (png, jpg, jpeg). I have tried using different versions of the Pillow library. I have also tried running it using Python 2 and 3. | The problem was related to a fake space found in the third line (the empty one). It is a character that looks like a space but is actually something else that Python cannot parse. By removing this character the error disappeared. The character is U+00A0, the no-break space (it is invisible and does not survive copy-paste here, but it is the character named in the error message). | 8 | 20 |
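A small sketch for locating and stripping the offending character when it cannot be spotted by eye (the file name is a placeholder):

```python
NBSP = "\u00a0"  # the no-break space from the error message
path = "script.py"  # placeholder: the file that fails to run

with open(path, encoding="utf-8") as f:
    source = f.read()

for lineno, line in enumerate(source.splitlines(), start=1):
    if NBSP in line:
        col = line.index(NBSP) + 1
        print(f"line {lineno}, column {col}: U+00A0 found")

# replace no-break spaces with ordinary spaces and rewrite the file
with open(path, "w", encoding="utf-8") as f:
    f.write(source.replace(NBSP, " "))
```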
74,844,262 | 2022-12-18 | https://stackoverflow.com/questions/74844262/how-can-i-solve-error-module-numpy-has-no-attribute-float-in-python | I am using NumPy 1.24.0. On running this sample code line, import numpy as np num = np.float(3) I am getting this error: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ubuntu/.local/lib/python3.8/site-packages/numpy/__init__.py", line 284, in __getattr__ raise AttributeError("module {!r} has no attribute " AttributeError: module 'numpy' has no attribute 'float' How can I fix it? | The answer is already provided in the comments by @mattdmo and @tdelaney: NumPy 1.20 (release notes) deprecated numpy.float, numpy.int, and similar aliases, causing them to issue a deprecation warning NumPy 1.24 (release notes) removed these aliases altogether, causing an error when they are used In many cases you can simply replace the deprecated NumPy types by the equivalent Python built-in type, e.g., numpy.float becomes a "plain" Python float. For detailed guidelines on how to deal with various deprecated types, have a closer look at the table and guideline in the release notes for 1.20: ... To give a clear guideline for the vast majority of cases, for the types bool, object, str (and unicode) using the plain version is shorter and clear, and generally a good replacement. For float and complex you can use float64 and complex128 if you wish to be more explicit about the precision. For np.int a direct replacement with np.int_ or int is also good and will not change behavior, but the precision will continue to depend on the computer and operating system. If you want to be more explicit and review the current use, you have the following alternatives: np.int64 or np.int32 to specify the precision exactly. This ensures that results cannot depend on the computer or operating system. np.int_ or int (the default), but be aware that it depends on the computer and operating system. The C types: np.cint (int), np.int_ (long), np.longlong. np.intp which is 32bit on 32bit machines 64bit on 64bit machines. This can be the best type to use for indexing. ... If you have dependencies that use the deprecated types, a quick workaround would be to roll back your NumPy version to 1.24 or less (as suggested in some of the other answers), while waiting for the dependency to catch up. Alternatively, you could create a patch yourself and open a pull request, or monkey patch the dependency in your own code. | 75 | 64 |
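A before/after sketch of the substitution the release notes recommend:

```python
import numpy as np

# num = np.float(3)   # AttributeError on NumPy >= 1.24
num = float(3)        # the plain built-in is the drop-in replacement

# when the precision matters, name it explicitly instead:
arr = np.zeros(3, dtype=np.float64)
```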
74,857,405 | 2022-12-20 | https://stackoverflow.com/questions/74857405/how-to-use-diffusers-with-custom-ckpt-file | Currently I have the current code which runs a prompt on a model which it downloads from huggingface. from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler model_id = "stabilityai/stable-diffusion-2" # Use the Euler scheduler here instead scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler") pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler) pipe = pipe.to("mps") pipe.enable_attention_slicing() prompt = "a photo of an astronaut riding a horse on mars" pipe(prompt).images[0] I wanted to know how can I feed a custom ckpt file to this script instead of it downloading it from stabilityAi repo? | You cannot use a ckpt file with diffusers out of the box. The ckpt file has to be converted to a diffusers friendly format. You can do that with a tool named StableTuner. Or this utility script on HuggingFace https://huggingface.co/spaces/anzorq/sd-to-diffusers | 4 | 3 |
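For the script route, the diffusers repository ships a converter; a hedged sketch of the workflow (paths are placeholders, and flag names may differ between diffusers versions):

```python
# Step 1 (run from a checkout of the diffusers repo, as a shell command):
#   python scripts/convert_original_stable_diffusion_to_diffusers.py \
#       --checkpoint_path /path/to/model.ckpt \
#       --dump_path ./converted_model

# Step 2: the converted folder then loads like any hub model
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("./converted_model")
```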
74,866,909 | 2022-12-20 | https://stackoverflow.com/questions/74866909/inadequate-transform-produced-by-good-homographic-feature-match | I'm trying to determine a method to rotate and translate a scanned 2D image to match it's near-identical but lower-quality digital template. After running the code, I want the images to very closely align when overlaid, so that post-processing effects can be applied to the scan based on knowledge of the template layout. I've tried a number of different approaches based on identifying certain features on the scanned image that could be consistently mapped to the template (not a lot of luck with a findContour based approach), but ultimately determined that the most effective approach is to perform a homographic match using openCV then apply a transform (either using perspectiveTransform or warpPerspective). The match I'm getting is phenomenally good. Even when I make the distance threshold for matches extremely restrictive, I'm getting dozens of point matches. I've varied both the threshold and findHomography RANSAC a fair bit. But ultimately, the transform I get from findHomography is not good enough for my needs; I'm not sure if there's knobs I'm not adequately exploring, or if the the disparity in image quality is just enough that this isn't doable. Here's the code I'm using: from matplotlib import pyplot as plt import numpy as np import cv2 as cv def feature_match(scanned_image, template_image, MIN_MATCH_COUNT=10, dist_thresh=0.2, RANSAC=10.0): # Initiate SIFT detector sift = cv.SIFT_create() # find the keypoints and descriptors with SIFT kp1, des1 = sift.detectAndCompute(scanned_image, None) kp2, des2 = sift.detectAndCompute(template_image, None) FLANN_INDEX_KDTREE = 1 index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5) search_params = dict(checks=50) flann = cv.FlannBasedMatcher(index_params, search_params) matches = flann.knnMatch(des1, des2, k=2) # store all the good matches as per Lowe's ratio test. good = [] for m, n in matches: if m.distance < dist_thresh * n.distance: good.append(m) # Do we have enough? 
if len(good) > MIN_MATCH_COUNT: print("%s good matches using distance threshold of %s" % (len(good), dist_thresh)) src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2) dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2) M, mask = cv.findHomography(src_pts, dst_pts, cv.RANSAC, RANSAC) matchesMask = mask.ravel().tolist() # Apply warp perspective based on homography matrix warped_image = cv.warpPerspective(scanned_image, M, (scanned_image.shape[1], scanned_image.shape[0])) plt.imshow(warped_image, 'gray'), plt.show() else: print("Not enough matches are found - {}/{}".format(len(good), MIN_MATCH_COUNT)) matchesMask = None # Show quality of matches draw_params = dict(matchColor=(0, 255, 0), # draw matches in green color singlePointColor=None, matchesMask=matchesMask, # draw only inliers flags=2) match_quality = cv.drawMatches(scanned_image, kp1, template_image, kp2, good, None, **draw_params) plt.imshow(match_quality, 'gray'), plt.show() cv.imwrite(r"img1.png", cv.cvtColor(img1, cv.COLOR_GRAY2RGB)) cv.imwrite(r"img2.png", cv.cvtColor(img2, cv.COLOR_GRAY2RGB)) cv.imwrite(r"warped_image.png", cv.cvtColor(warped_image, cv.COLOR_GRAY2RGB)) # Load images img1_path = r"scanned_image.png" img2_path = r"template_image.png" img1 = cv.imread(img1_path) img1 = cv.cvtColor(img1, cv.COLOR_BGR2GRAY) img2 = cv.imread(img2_path) img2 = cv.cvtColor(img2, cv.COLOR_BGR2GRAY) # upscaling img2 to the final scale I'm ultimately after; saves an upscale img2 = cv.resize(img2, (img2.shape[1] * 2, img2.shape[0] * 2), cv.IMREAD_UNCHANGED) feature_match(scanned_image=img1, template_image=img2, MIN_MATCH_COUNT=10, dist_thresh=0.2) Here's the imagery I'm using: Scanned Image Template Image Note: both a lower quality image, and initially lower resolution. There are minor differences between the two, but not enough that should degrade the match (I think?) matches between scanned and template image Using a distance threshold of 0.2 I'm getting 100 matches. Setting it around 0.8 and I get over 2400 scanned image warped into the template using the returned homographic matrix The warped scan overlaid ontop of the template I was expecting a better outcome than this given the volume of match points. The transformed image looks good at first glance (and certainly better than it started), but is lacking in terms of the ability to use knowledge of the template layout to then modify the scan. Is there an alternative approach I should take here? Parameters I should be leveraging instead? Or is this just what's achievable here given the quality of the template-- or the methodology being leveraged? | To answer my own question in case someone mysteriously has a similar issue in future: it's important to make sure that when you're applying your homography matrix that your destination size corresponds with the template you're attempting to match if you're looking to get an "exact" match with said template. In my original I had: warped_image = cv.warpPerspective(scanned_image, M, (scanned_image.shape[1], scanned_image.shape[0])) It should have been this: warped_image = cv.warpPerspective(scanned_image, M, (template_image.shape[1], template_image.shape[0])) There are minor scale differences between the size of scanned_image and template_image; while they are close, those minuscule differences are enough that, yes, projecting into the wrong size will skew the alignment when comparing them directly. This updated version isn't perfect, but it's probably close enough for my needs. 
I suspect that the ECC matching approach / second stage treatment methods @fmw42 describes would be a good second pass. I'll look into it and edit this answer if they are substantial enough to be worth exploring if anyone is similarly dealing with this kind of thing in the future. edit: the ECC method is about equivalent, honestly. Not necessarily worse or better. Definitely slower, though, so probably worse in that regard. | 3 | 4 |
74,827,982 | 2022-12-16 | https://stackoverflow.com/questions/74827982/using-a-buffer-to-write-a-psycopg3-copy-result-through-pandas | Using psycopg2, I could write large results as CSV using copy_expert and a BytesIO buffer like this with pandas: copy_sql = "COPY (SELECT * FROM big_table) TO STDOUT CSV" buffer = BytesIO() cursor.copy_expert(copy_sql, buffer, size=8192) buffer.seek(0) pd.read_csv(buffer, engine="c").to_excel(self.output_file) However, I can't figure out how to replace the buffer in copy_expert with psycopg3's new copy command. Has anyone figured out a way to do this? | The key to writing a large query to a file through psycopg3 in this fashion is to use a SpooledTemporaryFile, which will limit the amount of memory usage in Python (see max_size). Then after the CSV is written to disk, convert with pandas. from tempfile import SpooledTemporaryFile from pandas import read_csv from psycopg import connect cursor = connect([connection]).cursor() copy_sql = "COPY (SELECT * FROM stocks WHERE price > %s) TO STDOUT" price = 100 with SpooledTemporaryFile( mode="wb", max_size=65546, buffering=8192, ) as tmpfile: with cursor.copy(copy_sql, (price,)) as copy: for data in copy: tmpfile.write(data) tmpfile.seek(0) read_csv(tmpfile, engine="c").to_excel("my_spreadsheet.xlsx") | 4 | 3 |
74,852,107 | 2022-12-19 | https://stackoverflow.com/questions/74852107/pytorch-linear-regression-1x1d-consistantly-wrong-slope | I am mastering pytorch here, and decided to implement very simple 1 to 1 linear regression, from height to weight. Got dataset: https://www.kaggle.com/datasets/mustafaali96/weight-height but any other would do nicely. Lets import libraries and information about females: import torch from torch.utils.data import Dataset from torch.utils.data import DataLoader import pandas as pd import matplotlib.pyplot as plt import numpy as np df = pd.read_csv('weight-height.csv',sep=',') #https://www.kaggle.com/datasets/mustafaali96/weight-height height_f=df[df['Gender']=='Female']['Height'].to_numpy() weight_f=df[df['Gender']=='Female']['Weight'].to_numpy() plt.scatter(height_f, weight_f, c ="red",alpha=0.1) plt.show() Which gives nice scatter of measured females: So far, so good. Lets make Dataloader: class Data(Dataset): def __init__(self, X: np.ndarray, y: np.ndarray) -> None: # need to convert float64 to float32 else # will get the following error # RuntimeError: expected scalar type Double but found Float self.X = torch.from_numpy(X.reshape(-1, 1).astype(np.float32)) self.y = torch.from_numpy(y.reshape(-1, 1).astype(np.float32)) self.len = self.X.shape[0] def __getitem__(self, index: int) -> tuple: return self.X[index], self.y[index] def __len__(self) -> int: return self.len traindata = Data(height_f, weight_f) batch_size = 500 num_workers = 2 trainloader = DataLoader(traindata, batch_size=batch_size, shuffle=True, num_workers=num_workers) ...linear regression model... class linearRegression(torch.nn.Module): def __init__(self, inputSize, outputSize): super(linearRegression, self).__init__() self.linear = torch.nn.Linear(inputSize, outputSize) def forward(self, x): out = self.linear(x) return out model = linearRegression(1, 1) criterion = torch.nn.MSELoss() optimizer = torch.optim.SGD(model.parameters(), lr=0.00001) .. lets train it: epochs=10 for epoch in range(epochs): print(epoch) for i, (inputs, labels) in enumerate(trainloader): outputs=model(inputs) loss = criterion(outputs, labels) optimizer.zero_grad() loss.backward() optimizer.step() gives 0,1,2,3,4,5,6,7,8,9 now lets see what our model gives: range_height_f=torch.linspace(height_f.min(),height_f.max(),150) plt.scatter(height_f, weight_f, c ="red",alpha=0.1) pred=model(range_height_f.reshape(-1, 1)) plt.scatter(range_height_f, pred.detach().numpy(), c ="green",alpha=0.1) ... Why does it do this? Why wrong slope? consistently wrong slope, I might add Whatever I change, optimizer, batch size, epochs, females to males.. it gives me this very wrong slope, and I really don't get - why? 
Edit 1: Added loss, here is plot Edit 2: Have decided to explore a bit, and made regression with skilearn: from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression X_train, X_test, y_train, y_test = train_test_split(height_f, weight_f, test_size = 0.25) regr = LinearRegression() regr.fit(X_train.reshape(-1,1), y_train) plt.scatter(height_f, weight_f, c ="red",alpha=0.1) range_pred=regr.predict(range_height_f.reshape(-1, 1)) range_pred plt.scatter(range_height_f, range_pred, c ="green",alpha=0.1) which gives following regression, which looks nice: t = torch.from_numpy(height_f.astype(np.float32)) p=regr.predict(t.reshape(-1,1)) p=torch.from_numpy(p).reshape(-1,1) w= torch.from_numpy(weight_f.astype(np.float32)).reshape(-1,1) print(criterion(p,w).item()) However in this case criterion=100.65161998527695 Pytorch in own turn converges to about 210 Edit 3 Changed optimisation to Adam from SGD: #optimizer = torch.optim.SGD(model.parameters(), lr=0.00001) optimizer = torch.optim.Adam(model.parameters(), lr=0.5) lr is larger in this case, which yields interesting, but consistent result. Here is loss: , And here is proposed regression: And, here is log of loss criterion as well for Adam optimizer: | I think your issue stems from the data not being centered around zero. See this thread for another example where "centering" the data prior to training has a huge effect on the convergence of SGD optimization. Update (Dec 29the, 2022): TL;DR It's all about normalization/initialization. In detail: Your data is not centered around 0 and it is not scaled "nicely". This makes it very difficult to SGD (and all other variants of it) to struggle with optimization. In this answer I showed how centering the training data (subtracting mean and deciding by the std) solves this problem. Here I'll show you how to leave your data as-is, but change the initialization of the weights to solve your problem. let m_x, s_x be the mean and std of X, and m_y, s_y be the mean and std of y. When pytorch init the weights, a and b, for the linear layer y = aX + b it assumes X and y have zero mean and unit variance. This is NOT the case here. Far from it. Therefore, we need to re-adjust the initial a and b accordingly. Here's the math for it: And the code: mu_x, sig_x, mu_y, sig_y = traindata.X.mean().item(), traindata.X.std().item(), traindata.y.mean().item(), traindata.y.std().item() # just for fun, here are the values: # (63.7087, 2.6962, 135.8601, 19.0225) # start a fresh model and adjust its initial values: model = linearRegression(1, 1) model.linear.weight.data *= (sig_x / sig_y) model.linear.bias.data = sig_y * (-(mu_x/sig_x)+(mu_y/sig_y)) # now you are good to go! continue optimizing like you originally did: # init an optimizer optimizer = torch.optim.SGD(model.parameters(), lr=0.00001) # optimize for 10 epochs (now you don't need this much, you can even increase the learning rate...) epochs=10 for epoch in range(epochs): print(epoch) for i, (inputs, labels) in enumerate(trainloader): outputs=model(inputs) loss = criterion(outputs, labels) optimizer.zero_grad() loss.backward() optimizer.step() The loss curve looks like this: And the optimizer converged to In []: loss.item() Out[]: 100.9453125 Similar to that of sklearn.linear_model.LinearRegression. Plotting the prediction on the data: | 5 | 6 |
74,832,296 | 2022-12-17 | https://stackoverflow.com/questions/74832296/typeerror-string-indices-must-be-integers-when-getting-data-of-a-stock-from-y | import pandas_datareader end = "2022-12-15" start = "2022-12-15" stock_list = ["TATAELXSI.NS"] data = pandas_datareader.get_data_yahoo(symbols=stock_list, start=start, end=end) print(data) When I run this code, I get error "TypeError: string indices must be integers". Edit : I have updated the code and passed list as symbol parameter but it still shows the same error Error : Traceback (most recent call last): File "C:\Users\Deepak Shetter\PycharmProjects\100DAYSOFPYTHON\mp3downloader.py", line 7, in <module> data = pandas_datareader.get_data_yahoo(symbols=[TATAELXSI], start=start, end=end) File "C:\Users\Deepak Shetter\PycharmProjects\100DAYSOFPYTHON\venv\lib\site-packages\pandas_datareader\data.py", line 80, in get_data_yahoo return YahooDailyReader(*args, **kwargs).read() File "C:\Users\Deepak Shetter\PycharmProjects\100DAYSOFPYTHON\venv\lib\site-packages\pandas_datareader\base.py", line 258, in read df = self._dl_mult_symbols(self.symbols) File "C:\Users\Deepak Shetter\PycharmProjects\100DAYSOFPYTHON\venv\lib\site-packages\pandas_datareader\base.py", line 268, in _dl_mult_symbols stocks[sym] = self._read_one_data(self.url, self._get_params(sym)) File "C:\Users\Deepak Shetter\PycharmProjects\100DAYSOFPYTHON\venv\lib\site-packages\pandas_datareader\yahoo\daily.py", line 153, in _read_one_data data = j["context"]["dispatcher"]["stores"]["HistoricalPriceStore"] TypeError: string indices must be integers | None of the solutions reported here so far worked for me. As per the discussion here Yahoo made changes to their API that broke compatibility with previous pandas datareader versions. In the same Github thread a fix is reported, implemented in a pull request from Github user raphi6. I confirmed the pull request works fine. The version from the pull request can be installed with this 3 lines: conda install pycryptodome pycryptodomex conda uninstall pandas-datareader pip install git+https://github.com/raphi6/pandas-datareader.git@ea66d6b981554f9d0262038aef2106dda7138316 The pycrypto* packages are dependencies I have to install to make it work. Notice I am using the commit hash here instead of the branch name, because it is Yahoo!_Issue#952 and there is an issue with hash characters when using pip this way. This can also be done using pip for all the commands instead of conda (see Update 1 below). Update 1 To try this on Google Colab use (as shown here): ! pip install pycryptodome pycryptodomex ! pip uninstall --yes pandas-datareader ! pip install git+https://github.com/raphi6/pandas-datareader.git@ea66d6b981554f9d0262038aef2106dda7138316 Update 2 (27/12/2022) Although past week I could not make it work, I have tried again, and I can confirm that the pdr_override() workaround mentioned below by Nikhil Mulley is working now (at least with yfinance 0.2.3 and pandas-datareader 0.10.0). Original answer (works but more lines of code) In the same Github thread a fix is reported, implemented in a pull request from Github user raphi6. I confirmed the pull request works fine. 
Detailed installation instructions for the pull request can be found here, copied below for the sake of completeness: git clone https://github.com/raphi6/pandas-datareader.git cd pandas-datareader conda uninstall pandas-datareader conda install pycryptodome pycryptodomex git checkout 'Yahoo!_Issue#952' python setup.py install --record installed_files.txt The --record argument in the install command is to get a list of installed files, so that it is easy to uninstall in the future (following this SO thread). The pycrypto* files are dependencies I have to install to make it work. | 49 | 20 |
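A sketch of the pdr_override() workaround confirmed in Update 2 (versions as reported there; this depends on Yahoo's endpoints, so behavior may drift over time):

```python
import yfinance as yf  # the answer reports this working with yfinance 0.2.3
from pandas_datareader import data as pdr

yf.pdr_override()  # monkey-patches pandas_datareader to fetch via yfinance

data = pdr.get_data_yahoo(["TATAELXSI.NS"], start="2022-12-15", end="2022-12-16")
print(data)
```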
74,868,286 | 2022-12-20 | https://stackoverflow.com/questions/74868286/how-do-i-modify-this-function-to-return-a-4d-array-instead-of-3d | I created this function that takes in a dataframe to return an ndarrays of input and label. def transform_to_array(dataframe, chunk_size=100): grouped = dataframe.groupby('id') # initialize accumulators X, y = np.zeros([0, 1, chunk_size, 4]), np.zeros([0,]) # original inpt shape: [0, 1, chunk_size, 4] # loop over each group (df[df.id==1] and df[df.id==2]) for _, group in grouped: inputs = group.loc[:, 'A':'D'].values label = group.loc[:, 'label'].values[0] # calculate number of splits N = (len(inputs)-1) // chunk_size if N > 0: inputs = np.array_split( inputs, [chunk_size + (chunk_size*i) for i in range(N)]) else: inputs = [inputs] # loop over splits for inpt in inputs: inpt = np.pad( inpt, [(0, chunk_size-len(inpt)),(0, 0)], mode='constant') # add each inputs split to accumulators X = np.concatenate([X, inpt[np.newaxis, np.newaxis]], axis=0) y = np.concatenate([y, label[np.newaxis]], axis=0) return X, y The function returned X of shape (n_samples, 1, chunk_size, 4) and y of shape (n_samples, ). For examples: N = 10_000 id = np.arange(N) labels = np.random.randint(5, size=N) df = pd.DataFrame(data = np.random.randn(N, 4), columns=list('ABCD')) df['label'] = labels df.insert(0, 'id', id) df = df.loc[df.id.repeat(157)] df.head() id A B C D label 0 0 -0.571676 -0.337737 -0.019276 -1.377253 1 0 0 -0.571676 -0.337737 -0.019276 -1.377253 1 0 0 -0.571676 -0.337737 -0.019276 -1.377253 1 0 0 -0.571676 -0.337737 -0.019276 -1.377253 1 0 0 -0.571676 -0.337737 -0.019276 -1.377253 1 To generate the followings: X, y = transform_to_array(df) X.shape # shape of input (20000, 1, 100, 4) y.shape # shape of label (20000,) This function works fine as intended, however, it takes long time to finish execution: start_time = time.time() X, y = transform_to_array(df) end_time = time.time() print(f'Time taken: {end_time - start_time} seconds.') Time taken: 227.83956217765808 seconds. In attempt to improve performance of the function (minimise exec. time), I created the following modified func: def modified_transform_to_array(dataframe, chunk_size=100): # group data by 'id' grouped = dataframe.groupby('id') # initialize lists to store transformed data X, y = [], [] # loop over each group (df[df.id==1] and df[df.id==2]) for _, group in grouped: # get input and label data for group inputs = group.loc[:, 'A':'D'].values label = group.loc[:, 'label'].values[0] # calculate number of splits N = (len(inputs)-1) // chunk_size if N > 0: # split input data into chunks inputs = np.array_split( inputs, [chunk_size + (chunk_size*i) for i in range(N)]) else: inputs = [inputs] # loop over splits for inpt in inputs: # pad input data to have a chunk size of chunk_size inpt = np.pad( inpt, [(0, chunk_size-len(inpt)),(0, 0)], mode='constant') # add each input split and corresponding label to lists X.append(inpt) y.append(label) # convert lists to numpy arrays X = np.array(X) y = np.array(y) return X, y At first, it seems like I succeeded reducing time taken: start_time = time.time() X2, y2 = modified_transform_to_array(df) end_time = time.time() print(f'Time taken: {end_time - start_time} seconds.') Time taken: 5.842168092727661 seconds. However, the result is that it changes the shape of the intended returned value. 
X2.shape # this should be (20000, 1, 100, 4) (20000, 100, 4) y.shape # this is fine (20000, ) Question How do I modify modified_transform_to_array() to return the intended array shape (n_samples, 1, chunk_size, 4) since it is much faster? | You can simply reshape the X just before returning it at the end of modified_transform_to_array(), e.g.: def modified_transform_to_array( ... ): ... # convert lists to numpy arrays X = np.array(X) y = np.array(y) X = X.reshape((X.shape[0], 1, *X.shape[1:])) # <-- THIS LINE return X, y or, equivalently: X = X.reshape((X.shape[0], 1, X.shape[1], X.shape[2])) As pointed out in @MSS's answer, you can achieve the same reshaping result also with slicing, by starting from a a slicing where you are selecting the whole array (i.e. X[:, :, :]) and inserting a None (or its more explicit alias np.newaxis) in the position where you want to augment the number of dimensions: X = X[:, None, :, :] X = X[:, np.newaxis, :, :] The last two slicing can be replaced by an Ellipsis ... which essentially produces enough full-axis slicing (i.e. : or slice(None)) to fill the whole array dimensions. X = X[:, None, ...] X = X[:, np.newaxis, ...] You may want to read the relevant section of NumPy's user guide for further explanations on the use of None and Ellipsis in NumPy's slicing. | 5 | 4 |
74,844,094 | 2022-12-18 | https://stackoverflow.com/questions/74844094/projection-onto-unit-simplex-using-gradient-decent-in-pytorch | In Professor Boyd homework solution for projection onto the unit simplex, he winds up with the following equation: g_of_nu = (1/2)*torch.norm(-relu(-(x-nu)))**2 + nu*(torch.sum(x) -1) - x.size()[0]*nu**2 If one calculates nu*, then the projection to unit simplex would be y*=relu(x-nu*1). What he suggests is to find the maximizer of g_of_nu. Since g_of_nu is strictly concave, I multiply it by a negative sign (f_of_nu) and find its global minimizer using gradient descent. Question My final vector y*, does not add up to one, what am I doing wrong? Code for replication torch.manual_seed(1) x = torch.randn(10)#.view(-1, 1) x_list = x.tolist() print(list(map(lambda x: round(x, 4), x_list))) nu_0 = torch.tensor(0., requires_grad = True) nu = nu_0 optimizer = torch.optim.SGD([nu], lr=1e-1) nu_old = torch.tensor(float('inf')) steps = 100 eps = 1e-6 i = 1 while torch.norm(nu_old-nu) > eps: nu_old = nu.clone() optimizer.zero_grad() f_of_nu = -( (1/2)*torch.norm(-relu(-(x-nu)))**2 + nu*(torch.sum(x) -1) - x.size()[0]*nu**2 ) f_of_nu.backward() optimizer.step() print(f'At step {i+1:2} the function value is {f_of_nu.item(): 1.4f} and nu={nu: 0.4f}' ) i += 1 y_star = relu(x-nu).cpu().detach() print(list(map(lambda x: round(x, 4), y_star.tolist()))) print(y_star.sum()) [0.6614, 0.2669, 0.0617, 0.6213, -0.4519, -0.1661, -1.5228, 0.3817, -1.0276, -0.5631] At step 1 the function value is -1.9618 and nu= 0.0993 . . . At step 14 the function value is -1.9947 and nu= 0.0665 [0.5948, 0.2004, 0.0, 0.5548, 0.0, 0.0, 0.0, 0.3152, 0.0, 0.0] tensor(1.6652) The function torch.manual_seed(1) x = torch.randn(10) nu = torch.linspace(-1, 1, steps=10000) f = lambda x, nu: -( (1/2)*torch.norm(-relu(-(x-nu)))**2 + nu*(torch.sum(x) -1) - x.size()[0]*nu**2 ) f_value_list = np.asarray( [f(x, i) for i in nu.tolist()] ) i_min = np.argmin(f_value_list) print(nu[i_min]) fig, ax = plt.subplots() ax.plot(nu.cpu().detach().numpy(), f_value_list); Here is the minimizer from the graph which is consistent with the gradient descent. tensor(0.0665) | The error comes from the derivation of the formula: from: If you develop the expression you will realize that it should be instead of In short, this error comes from forgetting the 1/2 factor while developing the norm. Once you make that change everything works as intended: import torch import torchvision import numpy as np import matplotlib.pyplot as plt torch.manual_seed(1) x = torch.randn(10) x_list = x.tolist() nu_0 = torch.tensor(0., requires_grad = True) nu = nu_0 optimizer = torch.optim.SGD([nu], lr=1e-1) nu_old = torch.tensor(float('inf')) steps = 100 eps = 1e-6 i = 1 while torch.norm(nu_old-nu) > eps: nu_old = nu.clone() optimizer.zero_grad() f_of_nu = -(0.5*torch.norm(-torch.relu(-(x-nu)))**2 + nu*(torch.sum(x) -1) -0.5*x.size()[0]*nu**2) f_of_nu.backward() optimizer.step() print(f'At step {i+1:2} the function value is {f_of_nu.item(): 1.4f} and nu={nu: 0.4f}' ) i += 1 y_star = torch.relu((x-nu)).cpu().detach() print(y_star) print(list(map(lambda x: round(x, 4), y_star.tolist()))) print(y_star.sum()) And the output gives: ... At step 25 the function value is -2.0721 and nu= 0.2328 tensor(0.2328, requires_grad=True) tensor([0.4285, 0.0341, 0.0000, 0.3885, 0.0000, 0.0000, 0.0000, 0.1489, 0.0000, 0.0000]) [0.4285, 0.0341, 0.0, 0.3885, 0.0, 0.0, 0.0, 0.1489, 0.0, 0.0] tensor(1.0000) | 4 | 2 |
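The equation images in this answer did not survive extraction; a reconstruction of the before/after pair, inferred from the surrounding code (the only change is the 1/2 factor on the last term):

```latex
% as in the question (incorrect):
g(\nu) = \tfrac{1}{2}\,\bigl\lVert \max(\nu\mathbf{1} - x,\ 0) \bigr\rVert_2^2
       + \nu\,(\mathbf{1}^\top x - 1) - n\,\nu^2
% corrected (matching 0.5 * x.size()[0] * nu**2 in the fixed code):
g(\nu) = \tfrac{1}{2}\,\bigl\lVert \max(\nu\mathbf{1} - x,\ 0) \bigr\rVert_2^2
       + \nu\,(\mathbf{1}^\top x - 1) - \tfrac{n}{2}\,\nu^2
```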
74,800,328 | 2022-12-14 | https://stackoverflow.com/questions/74800328/reshape-pandas-dataframe-from-wide-to-long-by-splitting | I am trying to reshape the following data from wide to long format df = pd.DataFrame( { "size_Ent": { pd.Timestamp("2021-01-01 00:00:00"): 600, pd.Timestamp("2021-01-02 00:00:00"): 930, }, "size_Baci": { pd.Timestamp("2021-01-01 00:00:00"): 700, pd.Timestamp("2021-01-02 00:00:00"): 460, }, "min_area_Ent": { pd.Timestamp("2021-01-01 00:00:00"): 1240, pd.Timestamp("2021-01-02 00:00:00"): 1503, }, "min_area_Baci": { pd.Timestamp("2021-01-01 00:00:00"): 1285, pd.Timestamp("2021-01-02 00:00:00"): 953, }, } ) size_Ent size_Baci min_area_Ent min_area_Baci 2021-01-01 600 700 1240 1285 2021-01-02 930 460 1503 953 The problem is that the column names contain two different pieces of information separated by an underscore: The property/variable that was measured (e.g. size or min_area). I'd like these to remain as column names (without duplicates). A label for the item that was measured (e.g., Ent or Baci). I'd like these labels to become the values of a new column called 'bacterium'. Additionally, I'd like the row indexes to remain as timestamps. It should look like this: bacterium min_area size 2021-01-01 Baci 1285 700 2021-01-01 Ent 1240 600 2021-01-02 Baci 953 460 2021-01-02 Ent 1503 930 I tried transposing the data frame with df.T but this did not give the result I want. | This can be solved in three simple steps: First, notice that your column names are actually encoding a 2x2 MultiIndex, so let's start by creating a MultiIndex from tuples. To do this, we need to first transform the existing column names into tuples. This is easy because we know they should be split at the last underscore. # Convert column names into MultiIndex, giving an informative name to the level with label data column_tuples = df.columns.str.rsplit("_", n=1) column_tuples = [tuple(c) for c in column_tuples] df.columns = pd.MultiIndex.from_tuples(column_tuples,names=[None,'bacterium']) Next, use df.stack() to take the 'bacterium' level from the column MultiIndex and move it into a row MultiIndex. This is not quite the same as the transpose operation that you tried. df = df.stack('bacterium') Finally, use df.reset_index() with the level argument to take the bacterium level from the row MultiIndex and make it a proper column. df = df.reset_index('bacterium') Result: bacterium min_area size 2021-01-01 Baci 1285 700 2021-01-01 Ent 1240 600 2021-01-02 Baci 953 460 2021-01-02 Ent 1503 930 | 3 | 3 |
74,865,755 | 2022-12-20 | https://stackoverflow.com/questions/74865755/why-is-the-first-expression-interpreted-as-an-int-and-the-second-as-a-string | Using PyYaml import yaml yaml.full_load(StringIO('a: 16:00:00')) # {'a': 57600} yaml.full_load(StringIO('a: 09:31:00')) # {'a': '09:31:00'} Why is there a difference in those behaviors? | Older versions of YAML supported sexagesimal (base 60) numbers, intended for use for things like times. Instead of adding additional digits (like hexadecimal uses 0-9 and A-F), it simply uses decimal numbers 0-59 separated by :s. 16:00:00 is thus equivalent to 16*(60**2) + 0*60 + 0 == 57600. PyYAML apparently still uses this older YAML specification. 09:30:00, however, does not start with a valid decimal: a leading zero indicates an octal number, but 09 is not a valid octal number. Not being able to parse this as any kind of known number (octal, decimal, or sexagesimal), PyYAML falls back to a string. YAML can represent timestamps, but only if they consist of a date and an optional timestamp. PyYAML parses such timestamps as datetime.datetime objects, as seems reasonable. >>> yaml.full_load(StringIO('a: 2022-12-21T09:31:00')) {a: datetime.datetime(2022, 12, 21, 9, 31)} I referenced an answer in a comment, https://stackoverflow.com/a/45165433/1126841, provided by the author of another package with does adhere to the YAML 1.2 specification, which will parse the value as a string, not a sexagesimal integer. >>> from ruamel import yaml >>> yaml.safe_load('a: 16:00:00') {'a': '16:00:00'} | 4 | 4 |
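A quick demonstration of both behaviors plus the usual fix, quoting the value so it is always parsed as a string:

```python
import yaml

assert yaml.full_load("a: 16:00:00") == {"a": 57600}         # sexagesimal int
assert 16 * 60**2 + 0 * 60 + 0 == 57600                      # the base-60 arithmetic
assert yaml.full_load("a: 09:31:00") == {"a": "09:31:00"}    # 09 is not valid octal -> string
assert yaml.full_load("a: '16:00:00'") == {"a": "16:00:00"}  # quoted -> always a string
```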
74,866,750 | 2022-12-20 | https://stackoverflow.com/questions/74866750/error-floatobject-b0-000000000000-14210855-invalid-use-0-0-instead-while-u | I am using function to count occurrences of given word in pdf using PyPDF2. While the function is running I get message in terminal: FloatObject (b'0.000000000000-14210855') invalid; use 0.0 instead My code: def count_words(word): print() print('Counting words..') files = os.listdir('./pdfs') counted_words = [] for idx, file in enumerate(files, 1): with open(f'./pdfs/{file}', 'rb') as pdf_file: ReadPDF = PyPDF2.PdfFileReader(pdf_file, strict=False) pages = ReadPDF.numPages words_count = 0 for page in range(pages): pageObj = ReadPDF.getPage(page) data = pageObj.extract_text() words_count += sum(1 for match in re.findall(rf'\b{word}\b', data, flags=re.I)) counted_words.append(words_count) print(f'File: {idx}') return counted_words How to get rid of this message? | See https://pypdf2.readthedocs.io/en/latest/user/suppress-warnings.html import logging logger = logging.getLogger("PyPDF2") logger.setLevel(logging.ERROR) | 4 | 1 |
74,866,168 | 2022-12-20 | https://stackoverflow.com/questions/74866168/how-to-get-individual-values-from-a-string-seperated-by-commas | I am reading a file using: def readFile(): file = open('Rules.txt', 'r') lines = file.readlines() for line in lines: rulesList.append(line) rulesList: ['\n', "Rule(F1, HTTPS TCP, ['ip', 'ip'], ['www.google.ca', '8.8.8.8'], 443)\n", '\n', "Rule(F2, HTTPS TCP, ['ip', 'ip'], ['75.2.18.233'], 443)\n", '\n'] My file looks like: Rule(F1, HTTPS TCP, ['ip', 'ip'], ['www.google.ca', '8.8.8.8'], 443) Rule(F2, HTTPS TCP, ['ip', 'ip'], ['ip'], 443) I would like to feed the values to a class I created class Rule: def __init__(self, flowNumber, protocol, port, fromIP=[], toIP=[]): self.flowNumber = flowNumber self.protocol = protocol self.port = port self.fromIP = fromIP self.toIP = toIP def __repr__(self): return f'\nRule({self.flowNumber}, {self.protocol}, {self.fromIP}, {self.toIP}, {self.port})' newRule = Rule(currentFlowNum, currentProtocol, currentPort, currentFromIP, currentToIP) to get an output such as: [F1, HTTPS TCP, ['ip', 'ip'], ['www.google.ca', '8.8.8.8'], 443] or be able to assign these values to a variable like: currentFlowNum = F1, currentProtocol = 'HTTPS TCP' , currentPort = 443, currentFromIP = ['ip', 'ip'], currentToIP = ['www.google.ca', '8.8.8.8'] I tried: for rule in rulesList: if rule !='\n': tmp = rule.split(',') print(tmp) tmp: ['Rule(F1', ' HTTPS TCP', " ['ip'", " 'ip']", " ['www.google.ca'", " '8.8.8.8']", ' 443)\n'] ['Rule(F2', ' HTTPS TCP', " ['ip'", " 'ip']", " ['ip']", ' 443)\n'] Is there a way to not split the commas between [] i.e. I would like the output to look like: ['Rule(F1', ' HTTPS TCP', " ['ip','ip']", " ['www.google.ca', '8.8.8.8']", ' 443)\n'] ['Rule(F2', ' HTTPS TCP', " ['ip','ip']", " ['ip']", ' 443)\n'] | If you have control over how the data in the file is stored and can replace the single quotes (') with double quotes (") to make the "list" structures valid JSON, you could use RegExp for this. A word of caution: unless you are absolutely sure that the format you'll be reading will largely remain the same and is rather inflexible, you're better off storing this data in a more well-established format (as mentioned in the comments) like JSON, YAML, etc. There are so many edge cases that could happen here that rolling your own parser like this objectively suboptimal. import re import json def readFile(): file = open('Rules.txt', 'r') myRules = [] for line in file.readlines(): match = re.match(r'Rule\((?P<flow_number>[^,]+),\s(?P<protocol>[^,]+),\s(?P<from_ip>\[[^\]]+\]),\s(?P<to_ip>\[[^\]]+\]),\s(?P<port>[^,)]+)\)', line) if match: myRules.append(Rule(match.group('flow_number'), match.group('protocol'), match.group('port'), json.loads(match.group('from_ip')), json.loads(match.group('to_ip')))) return myRules print(readFile()) # Returns: # [ # Rule(F1, HTTPS TCP, ['ip', 'ip'], ['www.google.ca', '8.8.8.8'], 443), # Rule(F2, HTTPS TCP, ['ip', 'ip'], ['ip'], 443)] Repl.it | Regex101 | 3 | 5 |
74,864,895 | 2022-12-20 | https://stackoverflow.com/questions/74864895/a-pythonic-way-to-init-inherited-dataclass-from-an-object-of-parent-type | Given a dataclass structure: @dataclass class RichParent: many_fields: int = 1 more_fields: bool = False class Child(RichParent): some_extra: bool = False def __init__(seed: RichParent): # how to init more_fields and more_fields fields here without referencing them directly? # self.more_fields = seed.more_fields # self.many_fields = seed.many_fields pass What would be the right way to shallow copy seed object fields into the new child object? I wouldn't mind even converting seed to Child type since there is no use for parent object after initialization. Why do I do that? I want to avoid changing Child class every time RichParent has a change as long as parent stays a plain dataclass. | I am unsure why you'd want to write an explicit __init__ *, although you may need to be on Python 3.10 to be able to pass in fields specific to Child to its init. ** from dataclasses import dataclass @dataclass(kw_only=True) class RichParent: many_fields: int = 1 more_fields: bool = False #for some reason not setting `@dataclass`` again here means the built in str/repr #skips the child-only field. #also if you don't use `kw_only=True` on Child, can't pass in Child only fields @dataclass(kw_only=True) class Child(RichParent): some_extra: bool = False seed = RichParent(many_fields=2, more_fields=True) print(f"\n{seed=}") child = Child(**vars(seed)) print(f"\n{child=} with {vars(child)=} which does include more_fields") child2 = Child(**vars(seed),some_extra=True) print(f"\n{child2=}") #see the behavior changes in print and field acceptable to constructor class ChildNoDC(RichParent): some_extra: bool = False child_no_dcdec = ChildNoDC(**vars(seed)) print(f"\n{child_no_dcdec=} with {vars(child_no_dcdec)=} which does include more_fields") try: child_no_dcdec2 = ChildNoDC(some_extra=True,**vars(seed)) except (Exception,) as e: print("\n",e, "as expected") output: seed=RichParent(many_fields=2, more_fields=True) child=Child(many_fields=2, more_fields=True, some_extra=False) with vars(child)={'many_fields': 2, 'more_fields': True, 'some_extra': False} which does include more_fields child2=Child(many_fields=2, more_fields=True, some_extra=True) child_no_dcdec=ChildNoDC(many_fields=2, more_fields=True) with vars(child_no_dcdec)={'many_fields': 2, 'more_fields': True} which does include more_fields RichParent.__init__() got an unexpected keyword argument 'some_extra' as expected * If you did need some custom init on Child, use the builtin post_init hook to do it: def __post_init__(self): print(f"{self} lets do stuff here besides the std data init...") From talking about shallow copies, you know about mutability issues, but you could hack some stuff like self.my_dict = self.my_dict.copy() hacks here to work around that. ** as far as I understand, @dataclass does not flip the class into a dataclass type. It just builds an __init__ and some methods, (using metaclasses?). So ChildNoDC doesn't know it is dataclass-style class and just tries to pass all fields passed into its constructor call onto Parent's __init__. Child having been "told" it's dataclassed too, knows better. (That's also the reason the child-only field wasn't being printed). And kw-only flag frees up some positioning and defaults-only constraints too. | 5 | 6 |
74,861,252 | 2022-12-20 | https://stackoverflow.com/questions/74861252/downgrade-poetry-version | I need to downgrade my version of poetry to version 1.2.1. Currently, it's 1.2.2. >>> poetry --version Poetry (version 1.2.2) I use the following command: >>> curl -sSL https://install.python-poetry.org | POETRY_VERSION=1.2.1 python3 - Retrieving Poetry metadata The latest version (1.2.1) is already installed. But I'm told that 1.2.1 is already installed. Yet the poetry version is still stuck on the original. >>> poetry --version Poetry (version 1.2.2) The answer given here doesn't work (poetry self [email protected]) => The command "self" does not exist. What am I doing wrong here? | If you want to install a specific version via pip, here it is: pip install poetry==1.2.1 In the future, the same pattern works for any package: pip install 'Your Library Name'=='Specific version' | 13 | 5 |
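If you would rather stay on the official installer than switch to pip, a hedged alternative is to remove the existing installation first so the version pin actually takes effect (the installer script documents an --uninstall flag):

```
curl -sSL https://install.python-poetry.org | python3 - --uninstall
curl -sSL https://install.python-poetry.org | POETRY_VERSION=1.2.1 python3 -
```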
74,859,403 | 2022-12-20 | https://stackoverflow.com/questions/74859403/using-exec-in-a-comprehension-list | I have a script that can be run independently but sometimes will be externally invoked with parameters meant to override the ones defined in the script. I got it working using exec() (the safety of this approach is not the point here) but I don't understand why it works in a for loop and not in a comprehension list. foo = 1 bar = 2 externally_given = ['foo=10', 'bar=20'] for ext in externally_given: exec(ext) print('Exec in for loop ->', foo, bar) externally_given = ['foo=30', 'bar=40'] [exec(ext) for ext in externally_given] print('Exec in comprehension list ->', foo, bar) Output: Exec in for loop -> 10 20 Exec in comprehension list -> 10 20 EDIT: Python version 3.10 | To update global variables, let exec() have access to them by passing globals() as the second parameter: [exec(ext,globals()) for ext in externally_given] # [None, None] foo # 10 bar # 20 (Subject to all the good comments to the original post.) | 3 | 5 |
74,831,594 | 2022-12-17 | https://stackoverflow.com/questions/74831594/cannot-import-name-wkbwriter-from-shapely-geos-when-import-google-cloud-ai-p | When I run this code on google colab. from google.cloud import aiplatform The following error occurred ImportError: cannot import name 'WKBWriter' from 'shapely.geos' (/usr/local/lib/python3.8/dist-packages/shapely/geos.py) Does anyone know how to solve this problem? I was working fine on 2022/12/16, but today it is not working. | The bug is tracked in: https://github.com/googleapis/python-aiplatform/issues/1852 The workaround is to pin shapely < 2.0.0 pip install -U google-cloud-aiplatform "shapely<2" | 11 | 21 |
74,851,861 | 2022-12-19 | https://stackoverflow.com/questions/74851861/how-to-update-a-property-for-all-dataclass-objects-in-a-list | I have a list of objects of the following type: @dataclass class Feature: name: str active: bool and my list is: features = [Feature("name1",False), Feature("name2",False), Feature("name3",True)] I want to get back a list with all the features but switch their active property to True. I tried to use map() like this: active_features=list(map(lambda f: f.active=True,features)) but it gives me an error expected parameter. How can this be achieved? Note I thought it was following from the example, but I guess I should have clarified. I want to do this with some short of inline method, without defining a new separate function as suggested from some of the answers, but maybe it cannot be done like this? | Reason why your logic is not working? It gives you error because the lambda function you're using is trying to modify the value of f.active, which is not allowed in a lambda function. lambda functions are allowed to only for expressions that return a value, rather than statements that perform some actions. So I think one way to do it like below- from dataclasses import dataclass import copy @dataclass class Feature: name: str active: bool features = [Feature("name1",False), Feature("name2",False), Feature("name3",True)] active_features = [] main_features = copy.deepcopy(features) for f in main_features: f.active = True active_features.append(f) print(active_features) print(features) Output: [Feature(name='name1', active=True), Feature(name='name2', active=True), Feature(name='name3', active=True)] [Feature(name='name1', active=False), Feature(name='name2', active=False), Feature(name='name3', active=True)] | 3 | 2 |
74,850,128 | 2022-12-19 | https://stackoverflow.com/questions/74850128/macros-are-not-recognised-in-dbt | {{ config ( pre_hook = before_begin("{{audit_tbl_insert(1,'stg_news_sentiment_analysis_incr') }}"), post_hook = after_commit("{{audit_tbl_update(1,'stg_news_sentiment_analysis_incr','dbt_development','news_sentiment_analysis') }}") ) }} select rd.news_id ,rd.title, rd.description, ns.sentiment from live_crawler_output_rss.rss_data rd left join live_crawler_output_rss.news_sentiment ns on rd.news_id = ns.data_id limit 10000; This is my model in DBT which is configured with pre and post hooks which referance a macro to insert and update the audit table. my macro { % macro audit_tbl_insert (model_id_no, model_name_txt) % } {% set run_id_value = var('run_id') %} insert into {{audit_schema_name}}.{{audit_table_name}} (run_id, model_id, model_name, status, start_time, last_updated_at) values ({{run_id_value}}::bigint,{{model_id_no}}::bigint,{{model_name_txt}},'STARTED',current_timestamp,current_timestamp) {% endmacro %} this is the first time i'm using this macro and I see the following error. Compilation Error in model stg_news_sentiment_analysis_incr (models/staging/stg_news_sentiment_analysis_incr.sql) 'audit_tbl_insert' is undefined in macro run_hooks (macros/materializations/hooks.sql) called by macro materialization_table_default (macros/materializations/models/table/table.sql) called by model stg_news_sentiment_analysis_incr (models/staging/stg_news_sentiment_analysis_incr.sql). This can happen when calling a macro that does not exist. Check for typos and/or install package dependencies with "dbt deps". | Your macro's definition has too much whitespace in the braces that define the jinja block: { % macro audit_tbl_insert (model_id_no, model_name_txt) % } Needs to be {% macro audit_tbl_insert (model_id_no, model_name_txt) %} And then this should work just fine. | 4 | 6 |
74,837,553 | 2022-12-17 | https://stackoverflow.com/questions/74837553/how-to-fix-a-mac-base-conda-environment-when-sqlite3-is-broken | I recently updated the Python version of my base conda environment from 3.8 to 3.9, using mamba update python=3.9, but I can no longer run IPython, because the sqlite3 package appears to be broken. python Python 3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 08:55:37) [Clang 14.0.6 ] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import sqlite3 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/rosborn/opt/miniconda3/lib/python3.9/sqlite3/__init__.py", line 57, in <module> from sqlite3.dbapi2 import * File "/Users/rosborn/opt/miniconda3/lib/python3.9/sqlite3/dbapi2.py", line 27, in <module> from _sqlite3 import * ImportError: dlopen(/Users/rosborn/opt/miniconda3/lib/python3.9/lib-dynload/_sqlite3.cpython-39-darwin.so, 0x0002): Symbol not found: (_sqlite3_enable_load_extension) Referenced from: '/Users/rosborn/opt/miniconda3/lib/python3.9/lib-dynload/_sqlite3.cpython-39-darwin.so' Expected in: '/usr/lib/libsqlite3.dylib' Since I had another Python 3.9 environment that is still functional, I tried copying over the envs/py39/lib/sqlite3.36.0 and envs/py39/lib/python3.9/sqlite3 directories, as well as envs/py39/lib/python3.9/lib-dynload/_sqlite3.cpython-39-darwin.so because I assumed the sqlite3 libraries had been incorrectly compiled, but that doesn't fix the problem. On the Homebrew Github, there was a related issue, where someone suggested checking whether the missing symbol was there. It seems to be all present and correct. $ nm -gj /Users/rosborn/opt/miniconda3/lib/python3.9/lib-dynload/_sqlite3.cpython-39-darwin.so | grep enable_load_extension _sqlite3_enable_load_extension I don't know how Homebrew installs sqlite3, but the remaining fixes seemed to require checking the system libsqlite, which I don't have administrative access to. In case it's relevant, I am on an Intel Mac, so it's not related to the M1 chip, as some related issues appear to be. Does the conda distribution attempt to link to the system libsqlite? If so, why does this problem not affect the py39 environment? Any tips will be welcome. If it were not the base environment, I would just delete the one with the problem and start again. I attempted a forced reinstall of sqlite3, but it appeared not to be installable as a separate package. | Following the suggestions by @merv, the solution to this problem was to force a reinstall of the libsqlite package. $ mamba install libsqlite --force-reinstall After updating Python, it seems that sqlite3 was linked to the Mac system library, /usr/lib/libsqlite3.dylib, rather than one installed by conda-forge. According to discussions elsewhere, it is likely that Apple disables the missing _sqlite3_enable_load_extension extension for security reasons, leading to the observed error message. I don't know why the link error occurred in the first place, but fortunately, conda distributes libsqlite as a separate package, so the fix was simple to implement. | 10 | 13 |
74,853,515 | 2022-12-19 | https://stackoverflow.com/questions/74853515/how-to-make-scatter-plots-similar-to-the-one-in-the-paper-get-me-off-your-f | This is a serious question. Please do not take it as a joke. This is a scatter plot from an infamous paper with the same name, Get me off Your F****** Mailing List by Mazières and Kohle (2005), published in a predatory journal. Some people may know it. I am seriously interested in recreating the same scatter plot to test a new density-based clustering algorithm without the need of creating all the letters from scratch. Is there any way to make this process easier? (e.g. a dataset, or a package, or a smart way to recreate the plot) | Now that the grid package supports clipping paths, we can do: library(grid) library(ggplot2) tg <- textGrob("Get me off\nYour Fuck\ning Mailing\nList", x = 0.2, hjust = 0, gp = gpar(cex = 6, col = "grey", font = 2)) cg <- pointsGrob(x= runif(15000), y = runif(15000), pch = 3, gp = gpar(cex = 0.5)) rg <- rectGrob(width = unit(0.5, 'npc'), height = unit(0.1, 'npc'), gp = gpar(fill = 'red')) ggplot(data = NULL, aes(x = 100, y = 100)) + geom_point(col = 'white') + theme_classic() + theme(panel.border = element_rect(fill = 'white', linewidth = 1)) pushViewport(viewport(clip = tg)) grid.draw(cg) | 4 | 5 |
74,852,879 | 2022-12-19 | https://stackoverflow.com/questions/74852879/finding-the-average-of-the-x-component-of-an-array-of-coordinates-based-on-the | I have the following example array of x-y coordinate pairs: A = np.array([[0.33703753, 3.], [0.90115394, 5.], [0.91172016, 5.], [0.93230994, 3.], [0.08084283, 3.], [0.71531777, 2.], [0.07880787, 3.], [0.03501083, 4.], [0.69253184, 4.], [0.62214452, 3.], [0.26953094, 1.], [0.4617873 , 3.], [0.6495549 , 0.], [0.84531478, 4.], [0.08493308, 5.]]) My goal is to reduce this to an array with six rows by taking the average of the x-values for each y-value, like so: array([[0.6495549 , 0. ], [0.26953094, 1. ], [0.71531777, 2. ], [0.41882167, 3. ], [0.52428582, 4. ], [0.63260239, 5. ]]) Currently I am achieving this by converting to a pandas dataframe, performing the calculation, and converting back to a numpy array: >>> df = pd.DataFrame({'x':A[:, 0], 'y':A[:, 1]}) >>> df.groupby('y').mean().reset_index() y x 0 0.0 0.649555 1 1.0 0.269531 2 2.0 0.715318 3 3.0 0.418822 4 4.0 0.524286 5 5.0 0.632602 Is there a way to perform this calculation using numpy, without having to resort to the pandas library? | Here's a completely vectorized solution that only uses numpy methods and no python iteration: sort_indices = np.argsort(A[:, 1]) unique_y, unique_indices, group_count = np.unique(A[sort_indices, 1], return_index=True, return_counts=True) Once we have the indices and counts of all the unique elements, we can use the np.ufunc.reduceat method to collect the results of np.add for each group, and then divide by their counts to get the mean: group_sum = np.add.reduceat(A[sort_indices, :], unique_indices, axis=0) group_mean = group_sum / group_count[:, None] # array([[0.6495549 , 0. ], # [0.26953094, 1. ], # [0.71531777, 2. ], # [0.41882167, 3. ], # [0.52428582, 4. ], # [0.63260239, 5. ]]) Benchmarks: Comparing this solution with the other answers here (Code at tio.run) for A contains 10k rows, with A[:, 1] containing N groups, N varies from 1 to 10k A contains N rows (N varies from 1 to 10k), with A[:, 1] containing min(N, 1000) groups Observations: The numpy-only solutions (Dani's and mine) win easily -- they are significantly faster than the pandas approach (possibly since the time taken to create the dataframe is an overhead that doesn't exist for the former). The pandas solution is slower than the python+numpy solutions (Jaimu's and mine) for smaller arrays, since it's faster to just iterate in python and get it over with than to create a dataframe first, but these solutions become much slower than pandas as the array size or number of groups increases. Note: The previous version of this answer iterated over the groups as returned by the accepted answer to Is there any numpy group by function? and individually calculated the mean: First, we need to sort the array on the column you want to group by A_s = A[A[:, 1].argsort(), :] Then, run that snippet. np.split splits its first argument at the indices given by the second argument. unique_elems, unique_indices = np.unique(A_s[:, 1], return_index=True) # (array([0., 1., 2., 3., 4., 5.]), array([ 0, 1, 2, 3, 9, 12])) split_indices = unique_indices[1:] # No need to split at the first index groups = np.split(A_s, split_indices) # [array([[0.6495549, 0. ]]), # array([[0.26953094, 1. ]]), # array([[0.71531777, 2. ]]), # array([[0.33703753, 3. ], # [0.93230994, 3. ], # [0.08084283, 3. ], # [0.07880787, 3. ], # [0.62214452, 3. ], # [0.4617873 , 3. ]]), # array([[0.03501083, 4. ], # [0.69253184, 4. 
], # [0.84531478, 4. ]]), # array([[0.90115394, 5. ], # [0.91172016, 5. ], # [0.08493308, 5. ]])] Now, groups is a list containing multiple np.arrays. Iterate over the list and mean each array: means = np.zeros((len(groups), groups[0].shape[1])) for i, grp in enumerate(groups): means[i, :] = grp.mean(axis=0) # array([[0.6495549 , 0. ], # [0.26953094, 1. ], # [0.71531777, 2. ], # [0.41882167, 3. ], # [0.52428582, 4. ], # [0.63260239, 5. ]]) | 4 | 4 |
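As a quick cross-check of the grouped means above, np.bincount gives an even shorter vectorized route when the group labels are small non-negative integers, as they are here (the rows below are a subset of the question's data):

import numpy as np

A = np.array([[0.6495549, 0.], [0.26953094, 1.], [0.71531777, 2.],
              [0.33703753, 3.], [0.93230994, 3.], [0.69253184, 4.],
              [0.90115394, 5.]])

labels = A[:, 1].astype(int)
sums = np.bincount(labels, weights=A[:, 0])   # per-group sum of x
counts = np.bincount(labels)                  # per-group row count
print(np.column_stack([sums / counts, np.arange(len(counts))]))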
74,849,203 | 2022-12-19 | https://stackoverflow.com/questions/74849203/extract-and-manipulate-dict-data-to-check-certificates | I struggle on a regular basis with data manipulation in Ansible. I'm not very familiar with Python and dict objects. I found an example that sums up a lot of my misunderstandings. I would like to verify a list of certificates. I found an example for a single domain in the documentation; I'm just trying to loop over several domain names. Certs are stored in a folder: certs/ ├── domain.com │ ├── domain.com.pem │ └── domain.com.key └── domain.org ├── domain.org.key └── domain.org.pem My playbook is as follows: --- - name: "check certs" hosts: localhost gather_facts: no vars: domain_names: - domain.com - domain.org certs_folder: certs tasks: - name: Get certificate information community.crypto.x509_certificate_info: path: "{{ certs_folder }}/{{ item }}/{{ item }}.pem" # for valid_at, invalid_at and valid_in register: result_certs loop: "{{ domain_names }}" failed_when: 0 - name: Get private key information community.crypto.openssl_privatekey_info: path: "{{ certs_folder }}/{{ item }}/{{ item }}.key" register: result_privatekey loop: "{{ domain_names }}" failed_when: 0 - name: Check cert and key match << DOES NOT WORK >>> assert: that: - result_certs[ {{ item }} ].public_key == result_privatekey[ {{ item }} ].public_key # - ... other checks ... - not result[ {{ item }} ].expired loop: "{{ domain_names }}" So I get two variables, result_certs and result_privatekey, each of which has an element results which is, if I understand correctly, an array of dicts: "result_certs": { "changed": false, "msg": "All items completed", "results": [ { "expired": false, "item": "domain.org", "public_key": "<<PUBLIC KEY>>" }, { "expired": false, "item": "domain.com", "public_key": "<<PUBLIC KEY>>" } ], "skipped": false } "result_privatekey": { "changed": false, "msg": "All items completed", "results": [ { "item": "domain.org", "public_key": "<< PUBLIC KEY >>" }, { "item": "domain.com", "public_key": "<< PUBLIC KEY >>" } ], "skipped": false } How can I refer to each of the dict elements, like result_privatekey.results[the dict where item ='domain.org'].public_key, in the assert task? I feel like I'm missing something, or a documentation page to make it clear to me. I noticed that I particularly struggle with arrays of dicts, and I run into those objects quite often... I found those resources useful, but not sufficient to get this job done: https://jmespath.org/tutorial.html https://jinja.palletsprojects.com/en/3.0.x/templates/ EDIT: map and selectattr are the filters required to solve this problem, although the documentation (including the official Ansible docs) is not that clear to me... This is very useful to get many tutorial examples on those two filters if one is struggling as I do. | Given the simplified data for testing result_certs: changed: false msg: All items completed results: - expired: false item: domain.org public_key: <<PUBLIC KEY domain.org>> - expired: false item: domain.com public_key: <<PUBLIC KEY domain.com>> skipped: false result_privatekey: changed: false msg: All items completed results: - item: domain.org public_key: <<PUBLIC KEY domain.org>> - item: domain.com public_key: <<PUBLIC KEY domain.com>> skipped: false Declare the list of the domains domains: "{{ result_certs.results| map(attribute='item')|list }}" gives domains: - domain.org - domain.com Q: "How can I refer to each dictionary element?"
A: select the item(s) and map the attribute - debug: var: pk loop: "{{ domains }}" vars: pk: "{{ result_privatekey.results| selectattr('item', '==', item)| map(attribute='public_key')|list }}" gives TASK [debug] ********************************************************************************* ok: [localhost] => (item=domain.org) => ansible_loop_var: item item: domain.org pk: - <<PUBLIC KEY domain.org>> ok: [localhost] => (item=domain.com) => ansible_loop_var: item item: domain.com pk: - <<PUBLIC KEY domain.com>> In the same way, you can compare the keys in the loop - debug: msg: "{{ pk1 }} == {{ pk2 }}: {{ pk1 == pk2 }}" loop: "{{ domains }}" vars: pk1: "{{ result_privatekey.results| selectattr('item', '==', item)| map(attribute='public_key')|first }}" pk2: "{{ result_certs.results| selectattr('item', '==', item)| map(attribute='public_key')|first }}" gives TASK [debug] ********************************************************************************* ok: [localhost] => (item=domain.org) => msg: '<<PUBLIC KEY domain.org>> == <<PUBLIC KEY domain.org>>: True' ok: [localhost] => (item=domain.com) => msg: '<<PUBLIC KEY domain.com>> == <<PUBLIC KEY domain.com>>: True' Q: "Extract and manipulate dict data to check certificates." A: There are many options: For example, create a unique list of all public keys pkeys: "{{ result_certs.results| zip(result_privatekey.results)| map('map', attribute='public_key')| map('unique')|flatten }}" gives pkeys: - <<PUBLIC KEY domain.org>> - <<PUBLIC KEY domain.com>> To find the redundant keys compare the lists pkeys and domains. Compare the lengths to briefly find out if there are any pkeys|length == domains|length To find the expired domains declare variables expired: "{{ result_certs.results| map(attribute='expired')|list }}" expired_domains: "{{ result_certs.results| selectattr('expired')| map(attribute='item')|list }}" give expired: - false - false expired_domains: [] Then the assert task should look like - assert: that: - expired is not any - pkeys|length == domains|length Example of a complete playbook for testing - hosts: localhost vars: result_certs: changed: false msg: All items completed results: - expired: false item: domain.org public_key: <<PUBLIC KEY domain.org>> - expired: false item: domain.com public_key: <<PUBLIC KEY domain.com>> skipped: false result_privatekey: changed: false msg: All items completed results: - item: domain.org public_key: <<PUBLIC KEY domain.org>> - item: domain.com public_key: <<PUBLIC KEY domain.com>> skipped: false domains: "{{ result_certs.results| map(attribute='item')|list }}" pkeys: "{{ result_certs.results| zip(result_privatekey.results)| map('map', attribute='public_key')| map('unique')|flatten }}" expired: "{{ result_certs.results| map(attribute='expired')|list }}" expired_domains: "{{ result_certs.results| selectattr('expired')| map(attribute='item')|list }}" tasks: - debug: var: domains - debug: var: pkeys # How can I refer to each of the dicts elements? - debug: var: pk loop: "{{ domains }}" vars: pk: "{{ result_privatekey.results| selectattr('item', '==', item)| map(attribute='public_key')|list }}" # How can I compare private keys? 
- debug: msg: "{{ pk1 }} == {{ pk2 }}: {{ pk1 == pk2 }}" loop: "{{ domains }}" vars: pk1: "{{ result_privatekey.results| selectattr('item', '==', item)| map(attribute='public_key')|first }}" pk2: "{{ result_certs.results| selectattr('item', '==', item)| map(attribute='public_key')|first }}" - debug: var: expired - debug: var: expired_domains - assert: that: - expired is not any - pkeys|length == domains|length gives PLAY [localhost] ***************************************************************************** TASK [debug] ********************************************************************************* ok: [localhost] => domains: - domain.org - domain.com TASK [debug] ********************************************************************************* ok: [localhost] => pkeys: - <<PUBLIC KEY domain.org>> - <<PUBLIC KEY domain.com>> TASK [debug] ********************************************************************************* ok: [localhost] => (item=domain.org) => ansible_loop_var: item item: domain.org pk: - <<PUBLIC KEY domain.org>> ok: [localhost] => (item=domain.com) => ansible_loop_var: item item: domain.com pk: - <<PUBLIC KEY domain.com>> TASK [debug] ********************************************************************************* ok: [localhost] => (item=domain.org) => msg: '<<PUBLIC KEY domain.org>> == <<PUBLIC KEY domain.org>>: True' ok: [localhost] => (item=domain.com) => msg: '<<PUBLIC KEY domain.com>> == <<PUBLIC KEY domain.com>>: True' TASK [debug] ********************************************************************************* ok: [localhost] => expired: - false - false TASK [debug] ********************************************************************************* ok: [localhost] => expired_domains: [] TASK [assert] ******************************************************************************** ok: [localhost] => changed=false msg: All assertions passed PLAY RECAP *********************************************************************************** localhost: ok=7 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 The next option is to create a structure for this purpose. For example, put into the lists the attributes which might have different values (i.e. public_key in this case). Merge the dictionaries by the domain and append unique attributes. Declare the variables l1: "{{ result_certs.results|json_query('[].{item: item, expired: expired, pk: [public_key]}') }}" l2: "{{ result_privatekey.results|json_query('[].{item: item, pk: [public_key]}') }}" lm: "{{ [l1, l2]|community.general.lists_mergeby('item', list_merge='append_rp') }}" gives lm: - expired: false item: domain.com pk: - <<PUBLIC KEY domain.com>> - expired: false item: domain.org pk: - <<PUBLIC KEY domain.org>> Use this structure to compare any attributes. 
For example, declare exprd: "{{ lm|map(attribute='expired')|list }}" pkeys: "{{ lm|map(attribute='pk')| map('length')|sum }}" gives exprd: - false - false pkeys: '2' Then, use it in the conditions - assert: that: - exprd is not any - pkeys|int == lm|length Example of a complete playbook for testing - hosts: localhost vars: result_certs: changed: false msg: All items completed results: - expired: false item: domain.org public_key: <<PUBLIC KEY domain.org>> - expired: false item: domain.com public_key: <<PUBLIC KEY domain.com>> skipped: false result_privatekey: changed: false msg: All items completed results: - item: domain.org public_key: <<PUBLIC KEY domain.org>> - item: domain.com public_key: <<PUBLIC KEY domain.com>> skipped: false l1: "{{ result_certs.results|json_query('[].{item: item, expired: expired, pk: [public_key]}') }}" l2: "{{ result_privatekey.results|json_query('[].{item: item, pk: [public_key]}') }}" lm: "{{ [l1, l2]| community.general.lists_mergeby('item', list_merge='append_rp') }}" exprd: "{{ lm|map(attribute='expired')|list }}" pkeys: "{{ lm|map(attribute='pk')| map('length')|sum }}" tasks: - debug: var: l1 - debug: var: l2 - debug: var: lm - debug: var: exprd - debug: var: pkeys - assert: that: - exprd is not any - pkeys|int == lm|length In addition to the structure created in option 2) create dictionaries to test the attributes selectively. For example, domain_exprd: "{{ lm|items2dict(key_name='item', value_name='expired') }}" domain_pkeys: "{{ lm|items2dict(key_name='item', value_name='pk') }}" gives domain_exprd: domain.com: false domain.org: false domain_pkeys: domain.com: - <<PUBLIC KEY domain.com>> domain.org: - <<PUBLIC KEY domain.org>> | 4 | 2 |
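Because Jinja2's selectattr/map filters are thin wrappers over operations on lists of dicts, a plain-Python sketch (with made-up sample data) may help build intuition for what each filter chain above does:

results = [
    {"item": "domain.org", "public_key": "KEY-ORG", "expired": False},
    {"item": "domain.com", "public_key": "KEY-COM", "expired": False},
]

# selectattr('item', '==', 'domain.org') | map(attribute='public_key') | first
pk = [d["public_key"] for d in results if d["item"] == "domain.org"][0]

# map(attribute='expired') | list, feeding the `expired is not any` assert
expired = [d["expired"] for d in results]

print(pk, any(expired))   # KEY-ORG False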
74,838,882 | 2022-12-18 | https://stackoverflow.com/questions/74838882/how-to-get-pixel-rgb-values-using-matplotlib | I need to find all red pixels in an image and create my own image with just the red pixels. So far I have been experimenting with matplotlib as I am very new to it. def find_red_pixels(map, upper_threshold=100, lower_threshold=50): """Finds all red pixels of the "map" image and outputs a binary image file "map-red-pixels.jpg" """ red_map = [] for row in map: for pixel in row: print(pixel) I am brand new to image manipulation and have tried this; however, I do not understand the meaning of the values like [0.0.01] etc. that are output as pixels. Is there an easy way to do this? | The first thing to understand is the way the image is represented in an array when it is read in. Here I read in a 320x160 image of a rainbow: >>> img = plt.imread('rainbow.png') >>> img.shape (160, 320, 4) This shows the dimensions of the array - note the last element of the tuple is 4. These values represent the red pixel values, the green pixel values, the blue pixel values, and the transparency or alpha values. We want to find pixels which have a certain red value, so we're just interested in the first of these. Let's split these out so we can just work with them: >>> r = img[:,:,0] This isolates the portion of the array relating to the redness of the pixel values, represented by a value between 0 and 1. We now want to isolate the locations of the pixels which have a certain redness value. We can use np.where for this, finding the elements in our new array r which have values of over 0.5 (this value can be altered to fit your requirements of upper_threshold and lower_threshold), and replacing these with a value of 1, and 0 otherwise; creating a binary image. >>> binary_img = np.where(r > 0.5, 1, 0) Finally, we just need to save this image, using plt.imsave: >>> plt.imsave('rainbow_red.png', binary_img, cmap='gray') Giving the following result: | 4 | 4 |
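Note that thresholding only the red channel also passes white pixels (which are high in every channel). A stricter "red" mask usually also requires low green and blue; the synthetic image and thresholds in this sketch are illustrative assumptions:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
img = rng.random((160, 320, 4))        # stand-in for plt.imread('img.png')

r, g, b = img[:, :, 0], img[:, :, 1], img[:, :, 2]
red_mask = (r > 0.5) & (g < 0.3) & (b < 0.3)   # red high, green/blue low

binary_img = np.where(red_mask, 1, 0)
plt.imsave('red-pixels.png', binary_img, cmap='gray')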
74,848,349 | 2022-12-19 | https://stackoverflow.com/questions/74848349/pytorch-runtime-error-input-type-double-and-bias-type-float-should-be-the-s | The error comes up when I try to train the CNN model using PyTorch. This is the model I created. The model import torch from math import floor class NNnet(torch.nn.Module): def __init__(self, channels = 19, samples = 1000.0, outputs = 4): super(NNnet, self).__init__() #Sequential 1 self.seq1 = torch.nn.Sequential( torch.nn.Conv2d(in_channels = 1, out_channels = 32, kernel_size = (1,20), stride = 1), torch.nn.Conv2d(in_channels = 32, out_channels = 32, kernel_size = (3,1), stride = 1), torch.nn.BatchNorm2d(32, eps = 0.001, momentum = 0.99), torch.nn.ReLU(), torch.nn.MaxPool2d(kernel_size = [1,5], stride = [1,2]) ) #calculate output of sample at each operation samples = (samples - 20) + 1 samples = (samples - 1) + 1 channels = channels - 3 + 1 samples = floor((samples - 5) / 2 + 1) #Sequential 2 self.seq2 = torch.nn.Sequential( torch.nn.Conv2d(in_channels = 32, out_channels = 64, kernel_size = (1,20)), torch.nn.BatchNorm2d(64, eps = 0.001, momentum = 0.99), #tensorflow duplicate torch.nn.ReLU(), torch.nn.MaxPool2d(kernel_size = [1,7], stride = [1,2]) ) samples = (samples - 20) + 1 samples = floor((samples - 7) / 2 + 1) #Sequential 3 self.seq3 = torch.nn.Sequential( torch.nn.Conv2d(in_channels = 64, out_channels = 64, kernel_size = (1,10)), torch.nn.BatchNorm2d(64, eps = 0.001, momentum = 0.99), torch.nn.ReLU(), torch.nn.MaxPool2d(kernel_size = [1,5], stride = [1,2]) ) samples = (samples - 10) + 1 samples = floor((samples - 5) / 2 + 1) #fully connect self.fc = torch.nn.Sequential( torch.nn.Dropout1d(p = 0.5), #cal from (initial_ch - 2) * last_layer_fmap * final datapoint from conv torch.nn.Linear(in_features = channels * 64 * samples, out_features = 32), torch.nn.BatchNorm1d(32, eps = 0.001, momentum = 0.99), torch.nn.ReLU(), torch.nn.Dropout1d(p = 0.3), torch.nn.Linear(in_features = 32, out_features = outputs), torch.nn.Softmax() ) def forward(self, x): x = self.seq1(x) x = self.seq2(x) x = self.seq3(x) x = torch.flatten(x, start_dim = 1, end_dim = -1) x = self.fc(x) return x Training loop import torch import numpy as np device = torch.device("cuda" if torch.cuda.is_available() else "cpu") #dummy data of 540 instances, 19 channels and 1000 samples test_data_x = np.ones(shape = (540,19,1000)) #dummy label test_data_y = np.ones(shape = (540,4)) train_dat = torch.utils.data.TensorDataset(torch.tensor(test_data_x).to(device), torch.tensor(test_data_y).to(device)) train_loader = torch.utils.data.DataLoader(train_dat, batch_size = 16, shuffle = True) test_model = NNnet(channels = 19, samples = 1000, outputs = 4) # optimizer and the loss function definition optimizer = torch.optim.Adam(test_model.parameters(), lr = 0.001, weight_decay = 0.0001) criterion = torch.nn.CrossEntropyLoss() #pin to gpu test_model.to(device) criterion.to(device) #train loop----------------------------------------------- for epoch in range(10): running_loss = 0.0 for i, data in enumerate(train_loader,0): inputs, labels = data inputs, labels = inputs.to(device), labels.to(device) #zero grad optimizer.zero_grad() # forward + backward + optimize outputs = test_model(inputs) #<<< ERROR HERE loss = criterion(outputs, labels) loss.backward() optimizer.step() running_loss += loss.item() Error message (tested on Google Colab): RuntimeError: Input type (double) and bias type (float) should be the same When I try to train the model using the program above, it raises this runtime error. I'm confused because I have put everything on the GPU, but there's still a conflict between types.
Moreover, is there a mistake in the model definition or in the data preparation? | The default float in NumPy is float64; you must convert the NumPy array to np.float32 before converting it to a PyTorch tensor: train_dat = torch.utils.data.TensorDataset(torch.tensor(test_data_x.astype(np.float32)).to(device), torch.tensor(test_data_y.astype(np.float32)).to(device)) | 4 | 12 |
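The failure and the fix can be reproduced in a few lines; this minimal sketch is an addition to the accepted answer:

import numpy as np
import torch

x = torch.tensor(np.ones((2, 3)))     # NumPy default dtype: float64
layer = torch.nn.Linear(3, 1)         # PyTorch default weights: float32

try:
    layer(x)
except RuntimeError as e:
    print(e)                          # dtype mismatch, as in the question

print(layer(x.float()).dtype)         # torch.float32 -- works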
74,796,947 | 2022-12-14 | https://stackoverflow.com/questions/74796947/how-to-extract-rss-links-from-website-with-python | I am trying to extract all RSS feed links from some websites - of course, only if RSS exists. These are some website links that have RSS, and below is a list of RSS links from those websites. website_links = ["https://www.diepresse.com/", "https://www.sueddeutsche.de/", "https://www.berliner-zeitung.de/", "https://www.aargauerzeitung.ch/", "https://www.luzernerzeitung.ch/", "https://www.nzz.ch/", "https://www.spiegel.de/", "https://www.blick.ch/", "https://www.berliner-zeitung.de/", "https://www.ostsee-zeitung.de/", "https://www.kleinezeitung.at/", "https://www.blick.ch/", "https://www.ksta.de/", "https://www.tagblatt.ch/", "https://www.srf.ch/", "https://www.derstandard.at/"] website_rss_links = ["https://www.diepresse.com/rss/Kunst", "https://rss.sueddeutsche.de/rss/Kultur", "https://www.berliner-zeitung.de/feed.id_kultur-kunst.xml", "https://www.aargauerzeitung.ch/leben-kultur.rss", "https://www.luzernerzeitung.ch/kultur.rss", "https://www.nzz.ch/technologie.rss", "https://www.spiegel.de/kultur/literatur/index.rss", "https://www.luzernerzeitung.ch/wirtschaft.rss", "https://www.blick.ch/wirtschaft/rss.xml", "https://www.berliner-zeitung.de/feed.id_abgeordnetenhauswahl.xml", "https://www.ostsee-zeitung.de/arc/outboundfeeds/rss/category/wissen/", "https://www.kleinezeitung.at/rss/politik", "https://www.blick.ch/wirtschaft/rss.xml", "https://feed.ksta.de/feed/rss/politik/index.rss", "https://www.tagblatt.ch/wirtschaft.rss", "https://www.srf.ch/news/bnf/rss/1926", "https://www.derstandard.at/rss/wirtschaft"] My approach is to extract all links, and then check if some of them have RSS in them, but that is just a first step: for url in all_links: response = requests.get(url) print(response) soup = BeautifulSoup(response.content, 'html.parser') list_of_links = soup.select("a[href]") list_of_links = [link["href"] for link in list_of_links] print("Number of links", len(list_of_links)) for l in list_of_links: if "rss" in l: print(url) print(l) print() I have heard that I can look for RSS links like this, but I do not know how to incorporate this in my code. type=application/rss+xml My goal is to get working RSS URLs at the end. Maybe it is an issue because I am only sending a request to the first page, and maybe I should crawl different pages in order to extract all RSS links, but I hope that there is a faster/better way for RSS extraction. You can see that RSS links contain or end with (for example): .rss /rss /rss/ rss.xml /feed/ rss-feed etc. | Don't reinvent the wheel; there are many curated directories and collections that can serve you well and give you a nice introduction. However, to follow your approach, you should first collect all the links on the page that could point to an RSS feed: soup.select('a[href*="rss"],a[href*="/feed"],a:-soup-contains-own("RSS")') and then verify again whether it is one or just a collection page: soup.select('[type="application/rss+xml"],a[href*=".rss"]') or check the content-type: if 'xml' in requests.get(rss).headers.get('content-type'): Note: This is just to point in a direction, because there are a lot of patterns that are used to mark such feeds - rss, feed, feed/, news, xml,...
and also the content-type is provided differently by servers Example import requests, re from bs4 import BeautifulSoup website_links = ["https://www.diepresse.com/", "https://www.sueddeutsche.de/", "https://www.berliner-zeitung.de/", "https://www.aargauerzeitung.ch/", "https://www.luzernerzeitung.ch/", "https://www.nzz.ch/technologie/", "https://www.spiegel.de/", "https://www.blick.ch/wirtschaft/"] rss_feeds = [] def check_for_real_rss(url): base_url = re.search('^(?:https?:\/\/)?(?:[^@\/\n]+@)?(?:www\.)?([^:\/\n]+)',url).group(0) r = requests.get(url) soup = BeautifulSoup(r.text) for e in soup.select('[type="application/rss+xml"],a[href*=".rss"],a[href$="feed"]'): if e.get('href').startswith('/'): rss = (base_url+e.get('href')) else: rss = (e.get('href')) if 'xml' in requests.get(rss).headers.get('content-type'): rss_feeds.append(rss) for url in website_links: soup = BeautifulSoup(requests.get(url).text) for e in soup.select('a[href*="rss"],a[href*="/feed"],a:-soup-contains-own("RSS")'): if e.get('href').startswith('/'): check_for_real_rss(url.strip('/')+e.get('href')) else: check_for_real_rss(e.get('href')) set(rss_feeds) Output {'https://rss.sueddeutsche.de/app/service/rss/alles/index.rss?output=rss','https://rss.sueddeutsche.de/rss/Topthemen', 'https://www.aargauerzeitung.ch/aargau/aarau.rss', 'https://www.aargauerzeitung.ch/aargau/baden.rss', 'https://www.aargauerzeitung.ch/leben-kultur.rss', 'https://www.aargauerzeitung.ch/schweiz-welt.rss', 'https://www.aargauerzeitung.ch/sport.rss', 'https://www.bzbasel.ch/basel.rss', 'https://www.grenchnertagblatt.ch/solothurn/grenchen.rss', 'https://www.jetzt.de/alle_artikel.rss', 'https://www.limmattalerzeitung.ch/limmattal.rss', 'https://www.luzernerzeitung.ch/international.rss', 'https://www.luzernerzeitung.ch/kultur.rss', 'https://www.luzernerzeitung.ch/leben.rss', 'https://www.luzernerzeitung.ch/leben/ratgeber.rss',...} | 5 | 8 |
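The <link type="application/rss+xml"> check from the answer can be tried offline against a static HTML snippet before pointing it at live sites; the HTML below is made up for illustration:

from bs4 import BeautifulSoup

html = """
<html><head>
  <link rel="alternate" type="application/rss+xml"
        title="News" href="/feed/rss.xml">
</head><body><a href="/rss">RSS</a></body></html>
"""

soup = BeautifulSoup(html, "html.parser")
for tag in soup.select('[type="application/rss+xml"]'):
    print(tag.get("href"))     # -> /feed/rss.xml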
74,847,117 | 2022-12-19 | https://stackoverflow.com/questions/74847117/unable-to-import-cartopy | After Installing cartopy on google-colab, I was not able to import it: !pip install cartopy import cartopy ImportError: cannot import name lgeos | The best way to install Cartopy in a Colab is by using the Conda environment. So we need to install the following: #1|Install Conda environment on Colab !pip install -q condacolab import condacolab condacolab.install() Then, #2|Install cartopy !mamba install -q -c conda-forge cartopy After that, #3|imoprt cartopy import cartopy | 4 | 3 |
74,846,232 | 2022-12-19 | https://stackoverflow.com/questions/74846232/search-part-of-tuple-list | I am trying to do a list comprehension over a list of tuples, based on another list of tuples, with partial matching. x = [((1,1),(1,1),(1,2)), ((2,1),(1,3),(2,9)), ((2,1),(2,3),(2,9))] y = [(2,1),(1,3)] [i for i in x for k in y if k in i] Here x is a list of tuples and y is the desired list of tuples. If y is found in any of the items in the list of tuples in x, it should return that item. Result is: [((2, 1), (1, 3), (2, 9)), ((2, 1), (1, 3), (2, 9)), ((2, 1), (2, 3), (2, 9))] But I want only: [((2, 1), (1, 3), (2, 9))] I tried with a single tuple and it gave the desired result, but I'm not sure why it doesn't work when I pass a list of tuples. x = [((1,1),(1,1),(1,2)), ((2,1),(1,3),(2,9)), ((2,1),(2,3),(2,9))] y = (2,1) [i for i in x if y in i] Result: [((2, 1), (1, 3), (2, 9)), ((2, 1), (2, 3), (2, 9))] | You can use: x = [ ((1, 1), (1, 1), (1, 2)), ((2, 1), (1, 3), (2, 9)), ((2, 1), (2, 3), (2, 9)), ] y = [(2, 1), (1, 3)] print([t for t in x if not set(y).difference(t)]) output: [((2, 1), (1, 3), (2, 9))] If you subtract the tuples inside each item of x from y with set operations, then whenever all the sub-tuples are present you'll end up with an empty set, so you want that tuple. You could also write if set(y).issubset(t) instead of the if not set(y).difference(t) part (borrowed from @kellyBundy's answer). What if I want any tuple of y to match, but not all? print([t for t in x if set(y).intersection(t)]) | 3 | 1 |
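Python's set comparison operators make the accepted answer's condition even more direct; set(y) <= set(t) is equivalent to issubset and to the not ... difference(t) test:

x = [((1, 1), (1, 1), (1, 2)),
     ((2, 1), (1, 3), (2, 9)),
     ((2, 1), (2, 3), (2, 9))]
y = [(2, 1), (1, 3)]

print([t for t in x if set(y) <= set(t)])   # all of y present
print([t for t in x if set(y) & set(t)])    # any of y present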
74,805,849 | 2022-12-15 | https://stackoverflow.com/questions/74805849/package-publishing-python-failing-through-poetry | I am new to this and trying to publish a package to pypi.org using the Poetry package. On my local machine the build is working; I am able to import the package and test-run it, and it's all good. But when I try to publish it to pypi.org, I get the error below. As per the article I was following (link), it was supposed to prompt me for my PyPI account ID and password, but it doesn't, and then gives the error: Publishing gsst (0.2.2) to PyPI - Uploading gsst-0.2.2-py3-none-any.whl 0% - Uploading gsst-0.2.2-py3-none-any.whl 100% and then this error shows up -- HTTP Error 403: Invalid or non-existent authentication information. See https://pypi.org/help/#invalid-auth for more information. | b'<html>\n <head>\n <title>403 Invalid or non-existent authentication information. See https://pypi.org/help/#invalid-auth for more information.\n \n <body>\n <h1>403 Invalid or non-existent authentication information. See https://pypi.org/help/#invalid-auth for more information.\n Access was denied to this resource.<br/><br/>\nInvalid or non-existent authentication information. See https://pypi.org/help/#invalid-auth for more information.\n\n\n \n' After I run the poetry publish command, the CLI should prompt me for my PyPI ID and password. Why does it skip it and then fail on authentication? | I was finally able to resolve the problem with help from the Poetry documentation here. I issued the command below to set up my pypi.org account for automatic authentication: poetry config http-basic.pypi <username> <password> After that I ran the "poetry publish" command and was able to publish my package on pypi.org. It's really the quickest and easiest way to publish your package to pypi.org. | 4 | 3 |
74,844,719 | 2022-12-18 | https://stackoverflow.com/questions/74844719/does-a-bitwise-and-operation-prepend-zeros-to-the-binary-representation | When I use the bitwise AND operator (&) with the number 1 to find out if a number x is odd or even (x & 1), does the interpreter change the binary representation of 1 according to the binary representation of x? For example: 2 & 1 -> 10 & 01 -> then perform comparison bitwise 5 & 1 -> 101 & 001 -> then perform comparison bitwise 100 & 1 -> 1100100 & 0000001 -> then perform comparison bitwise Does it append zeros to the binary representation of 1 to perform the bitwise AND operation? Looking at the cpython implementation, it looks like it compares the digits according to the size of the right argument. So in this case the example above actually works like this: 2 & 1 -> 10 & 1 -> 0 & 1 -> then perform comparison bitwise 5 & 1 -> 101 & 1 -> 1 & 1 -> then perform comparison bitwise 100 & 1 -> 1100100 & 1 -> 0 & 1 -> then perform comparison bitwise Is my understanding right? I'm confused because of this image from Geeks for Geeks. | Conceptually, adding zeros to the shorter number gives the same result as ignoring excess digits in the longer number. They both do the same thing. Padding, however, is inefficient, so in practice you wouldn't want to do it. The reason is that anything ANDed with 0 is 0. If you pad the short number to match the longer one and then AND the extra bits, they're all going to result in 0. It works, but since you know the padded bits will just result in extra zeros, it's more efficient to ignore them and only iterate over the length of the shorter number. Python only processes the overlapping digits. First, it conditionally swaps a and b to ensure b is the smaller number: /* Swap a and b if necessary to ensure size_a >= size_b. */ if (size_a < size_b) { z = a; a = b; b = z; size_z = size_a; size_a = size_b; size_b = size_z; negz = nega; nega = negb; negb = negz; } Then it iterates over the smaller size_b: /* Compute digits for overlap of a and b. */ switch(op) { case '&': for (i = 0; i < size_b; ++i) z->ob_digit[i] = a->ob_digit[i] & b->ob_digit[i]; break; So my understanding is right, the image is just for intuition? Yep, correct. The image is for conceptual understanding. It doesn't reflect how it's actually implemented in code. | 5 | 7 |
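A two-line check confirms the conceptual claim above: masking with & 1 matches % 2 parity regardless of how many bits the larger operand has:

for x in (0, 1, 2, 5, 100, 2**64 + 1):
    assert (x & 1) == (x % 2)
print("parity via & 1 matches % 2 for all tested values")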
74,842,741 | 2022-12-18 | https://stackoverflow.com/questions/74842741/why-is-a-combination-of-numpy-functions-faster-than-np-mean | I am wondering what the fastest way to compute a mean is in numpy. I used the following code to experiment with it: import time import numpy as np n = 10000 p = np.array([1] * 1000000) t1 = time.time() for x in range(n): np.divide(np.sum(p), p.size) t2 = time.time() print(t2-t1) 3.9222593307495117 t3 = time.time() for x in range(n): np.mean(p) t4 = time.time() print(t4-t3) 5.271147012710571 I would assume that np.mean would be faster or at least equivalent in speed; however, it looks like the combination of numpy functions is faster than np.mean. Why is the combination of numpy functions faster? | For integer input, by default, numpy.mean computes the sum in float64 dtype. This prevents overflow errors, but it requires a conversion for every element. Your code with numpy.sum only converts once, after the sum has been computed. | 8 | 10 |
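The dtype explanation can be checked directly, and np.mean also accepts a dtype argument that controls the accumulator; this snippet is an addition to the accepted answer:

import numpy as np

p = np.array([1] * 1_000_000)
print(p.sum().dtype)                 # integer dtype: no per-element upcast
print(p.mean().dtype)                # float64: every element converted
print(np.mean(p, dtype=np.int64))    # forces integer accumulation instead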
74,839,133 | 2022-12-18 | https://stackoverflow.com/questions/74839133/a-more-efficient-solution-for-balanced-split-of-an-array-with-additional-conditi | I appreciate your help in advance. This is a practice question from Meta's interview preparation website. I have solved it, but I wonder if any optimization can be done. Question: Is there a way to solve the following problem with a time complexity of O(n)? Problem Description: You have been given an array nums of type int. Write a program that returns the bool type as the return value of the solution function to determine whether it is possible to split nums into two arrays A and B such that the following two conditions are satisfied. The sum of the respective array elements of A and B is equal. All the array elements in A are strictly smaller than all the array elements in B. Examples: nums = [1,5,7,1] -> true since A = [1,1,5], B = [7] nums = [12,7,6,7,6] -> false since A = [6,6,7], B = [7,12] failed the 2nd requirement What I have tried: I have used the sort function in Python, which has a time complexity of O(nlog(n)). from typing import List def solution(nums: List[int]) -> bool: total_sum = sum(nums) # If the total sum of the array is 0 or an odd number, # it is impossible to have array A and array B equal. if total_sum % 2 or total_sum == 0: return False nums.sort() curr_sum, i = total_sum, 0 while curr_sum > total_sum // 2: curr_sum -= nums[i] i += 1 if curr_sum == total_sum // 2 and nums[i] != nums[i - 1]: return True return False | For what it's worth, you can modify QuickSelect to get a with-high-probability and expected O(n)-time algorithm, though Python's sort is so fast that it hardly seems like a good idea. Deterministic O(n) is possible and left as an easy exercise to the reader familiar with selection algorithms (but the constant factor is even worse, so...). import random def can_partition(nums, a_sum=0, b_sum=0): if not nums: # True makes more sense here, but whatever return False pivot = random.choice(nums) less = sum(n for n in nums if n < pivot) equal = sum(n for n in nums if n == pivot) greater = sum(n for n in nums if n > pivot) a_ext = a_sum + less b_ext = greater + b_sum if abs(a_ext - b_ext) == equal: return True elif a_ext < b_ext: return can_partition([n for n in nums if n > pivot], a_ext + equal, b_sum) else: return can_partition([n for n in nums if n < pivot], a_sum, equal + b_ext) print(can_partition([1, 5, 7, 1])) print(can_partition([12, 7, 6, 7, 6])) | 4 | 2 |
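A quick randomized cross-check of the two implementations, assuming solution and can_partition from this entry are in scope (positive values avoid the degenerate all-zero-sum case):

import random

for _ in range(1000):
    nums = [random.randint(1, 9) for _ in range(random.randint(1, 12))]
    # copy before calling solution, since it sorts its input in place
    assert solution(list(nums)) == can_partition(nums), nums
print("both implementations agree on 1000 random inputs")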
74,811,931 | 2022-12-15 | https://stackoverflow.com/questions/74811931/interpolate-time-series-data-from-one-df-to-time-axis-of-another-df-in-python-po | I have time series data on different time axes in different dataframes. I need to interpolate data from one df to onto the time axis of another df, df_ref. Ex: import polars as pl # DataFrame with the reference time axis: df_ref = pl.DataFrame({"dt": ["2022-12-14T14:00:01.000", "2022-12-14T14:00:02.000", "2022-12-14T14:00:03.000", "2022-12-14T14:00:04.000", "2022-12-14T14:00:05.000", "2022-12-14T14:00:06.000"]}) df_ref = df_ref.with_columns(pl.col("dt").str.to_datetime()) # DataFrame with a different frequency time axis, to be interpolated onto the reference time axis: df = pl.DataFrame({ "dt": ["2022-12-14T14:00:01.500", "2022-12-14T14:00:03.500", "2022-12-14T14:00:05.500"], "v1": [1.5, 3.5, 5.5]}) df = df.with_columns(pl.col("dt").str.to_datetime()) I cannot join the dfs since keys don't match: print(df_ref.join(df, on="dt", how="left").interpolate()) shape: (6, 2) ┌─────────────────────┬──────┐ │ dt ┆ v1 │ │ --- ┆ --- │ │ datetime[μs] ┆ f64 │ ╞═════════════════════╪══════╡ │ 2022-12-14 14:00:01 ┆ null │ │ 2022-12-14 14:00:02 ┆ null │ │ 2022-12-14 14:00:03 ┆ null │ │ 2022-12-14 14:00:04 ┆ null │ │ 2022-12-14 14:00:05 ┆ null │ │ 2022-12-14 14:00:06 ┆ null │ └─────────────────────┴──────┘ So my 'iterative' approach would be to interpolate each column individually, for instance like from scipy.interpolate import interp1d f = interp1d(df["dt"].dt.timestamp(), df["v1"], kind="linear", bounds_error=False, fill_value="extrapolate") out = f(df_ref["dt"].dt.timestamp()) df_ref = df_ref.with_columns(pl.Series(out).alias("v1_interp")) print(df_ref.head(6)) shape: (6, 2) ┌─────────────────────┬───────────┐ │ dt ┆ v1_interp │ │ --- ┆ --- │ │ datetime[μs] ┆ f64 │ ╞═════════════════════╪═══════════╡ │ 2022-12-14 14:00:01 ┆ 1.0 │ │ 2022-12-14 14:00:02 ┆ 2.0 │ │ 2022-12-14 14:00:03 ┆ 3.0 │ │ 2022-12-14 14:00:04 ┆ 4.0 │ │ 2022-12-14 14:00:05 ┆ 5.0 │ │ 2022-12-14 14:00:06 ┆ 6.0 │ └─────────────────────┴───────────┘ Although this gives the result I need, I wonder if there is a more idiomatic approach? I hesitate to mention efficiency here since I haven't benchmarked this with real data yet ("measure, don't guess!"). However, I'd assume that a native implementation in the underlying Rust code could add some performance benefits. | The scipy.interpolate.interpol1d example ends up calling this function. 
You could use the same approach and process each column with .map() def polars_ip(df_ref, df): old = df["dt"].dt.timestamp().to_numpy() new = df_ref["dt"].dt.timestamp().to_numpy() hi = np.searchsorted(old, new).clip(1, len(old) - 1) lo = hi - 1 def _interp(column): column = column.to_numpy() slope = (column[hi] - column[lo]) / (old[hi] - old[lo]) return pl.Series(slope * (new - old[lo]) + column[lo]) values = ( pl.concat([df, df_ref], how="diagonal") .select(pl.exclude("dt").map(_interp)) ) values.columns = [f"{name}_ref_ip" for name in values.columns] return df_ref.hstack(values) >>> %time polars_ip(df_ref, df) CPU times: user 48.1 ms, sys: 20.4 ms, total: 68.5 ms Wall time: 22 ms shape: (85536, 11) >>> %time scipy_ip(df_ref, df) CPU times: user 75.5 ms, sys: 5.51 ms, total: 81 ms Wall time: 74.3 ms shape: (85536, 11) Check they return the same values: >>> polars_ip(df_ref, df).frame_equal(scipy_ip(df_ref, df)) True You can also generate the same values using: N_COLS = 10 names = list(map(str, range(N_COLS))) has_reading = pl.col(names[0]).is_not_null() has_no_reading = has_reading.is_not() ( pl.concat([df, df_ref], how="diagonal") .sort("dt") .with_columns([ pl.when(has_reading).then(pl.all()) .shift(-1).backward_fill().suffix("_hi"), pl.when(has_reading).then(pl.all()) .shift(+1).forward_fill().suffix("_lo") ]) .with_columns([ pl.when(has_reading).then(pl.col(r"^.+_hi$")) .forward_fill().backward_fill(), pl.when(has_reading).then(pl.col(r"^.+_lo$")) .backward_fill().forward_fill() ]) .filter(has_no_reading) .with_column( pl.col(r"^dt.*$").dt.timestamp().suffix("_ts")) .with_columns([ (((pl.col(f"{name}_hi") - pl.col(f"{name}_lo")) / (pl.col("dt_hi_ts") - pl.col("dt_lo_ts"))) * (pl.col("dt_ts") - pl.col("dt_lo_ts")) + pl.col(f"{name}_lo")) .alias(f"{name}_ref_ip") for name in names ]) .select([ pl.col("dt"), pl.col("^.+_ref_ip$") ]) ) | 5 | 4 |
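For a single column, NumPy's np.interp performs the same linear interpolation onto the reference axis in one call (made-up sample timestamps; note that np.interp clamps at the ends instead of extrapolating):

import numpy as np

old_ts = np.array([1.5, 3.5, 5.5])                   # df["dt"] as timestamps
vals = np.array([1.5, 3.5, 5.5])                     # df["v1"]
new_ts = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])    # df_ref["dt"]

print(np.interp(new_ts, old_ts, vals))
# [1.5 2.  3.  4.  5.  5.5]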
74,840,187 | 2022-12-18 | https://stackoverflow.com/questions/74840187/how-to-remove-duplicated-buy-signal | I'm testing my stock trading logic and I made a position column to check the buying / selling signal df = pd.DataFrame({'position': [1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -1.0, 1.0, 0.0, -1.0, -1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]}) I want to replace 1.0 value occurs between 1.0 and -1.0 with 0.0, and replace -1.0 value occurs between -1.0 and 1.0 with 0.0 here is the desired output: df = pd.DataFrame({'position': [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -1.0, 1.0, 0.0, -1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]}) NOTE: the output only keeps the initial signal of 1.0 or -1.0 | Here is a basic implementation based on the approach described by the previous answer: lastseen = 0 for n,el in enumerate(df["position"]): if lastseen == 0 and el == -1: raise Exception("Inconsistent data") if (el in [1, -1] and el != lastseen) or lastseen == 0: lastseen = el else: df["position"][n] = 0 I added the first check by considering the domain you described. If it's not correct for your problem feel free to remove it | 4 | 1 |
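The same result can be obtained without an explicit Python loop by comparing each nonzero signal with the previous nonzero signal; this vectorized sketch is an addition to the accepted answer:

import pandas as pd

df = pd.DataFrame({'position': [1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0,
                                0.0, -1.0, 1.0, 0.0, -1.0, -1.0, 0.0, 0.0,
                                0.0, 0.0, 0.0, 1.0]})

s = df['position']
nz = s[s != 0]
first_of_run = nz[nz != nz.shift()]      # keep only sign changes
out = pd.Series(0.0, index=s.index)
out[first_of_run.index] = first_of_run
df['position'] = out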
74,838,633 | 2022-12-18 | https://stackoverflow.com/questions/74838633/can-i-delete-env-file-from-server | I have a Django application where all the secret information (the secret key and keys for encryption) is in the .env file as environment variables - I'm using the python-dotenv library. After starting the application, I removed the .env file from the server files and the application continues to work as it should. Can deleting this file cause any problems? Is there any other (or better) way to secure these secrets in a Django application? If it's relevant, I use pythonanywhere.com | You shouldn't need a .env file if you instead set up environment variables while initialising the server machine. Many cloud service providers let you do that. For example, if you're deploying a Docker container on Google Cloud Run you should be able to set environment variables there, or you can set up a virtual machine with predetermined environment variables. This should eliminate any requirement for having a .env file. | 4 | 1 |
74,827,320 | 2022-12-16 | https://stackoverflow.com/questions/74827320/atributeerror-cant-set-attribute-for-python-list-property | I'm working with the python-docx library from a forked version, and I'm having an issue with editing the elements list as it is defined as a property. # docx.document.Document @property def elements(self): return self._body.elements I tried to go with the solution mentioned here but the error AttributeError: can't set attribute keeps popping up. The next thing I tried is adding the setter to the attribute derived from self._body and editing the code: # docx.blkcntnr.BlockItemContainer @property def elements(self): """ A list containing the elements in this container (paragraph and tables), in document order. """ return [element(item,self.part) for item in self._element.getchildren()] I've tried to add the setter at both levels but ended up again with the error AttributeError: can't set attribute The setter I wrote: @elements.setter def elements(self, value): return value The implementation I tried: elements_list = docx__document.elements elem_list = [] docx__document.elements = elements_list = elem_list The main problem with that code is docx__document.elements still contains all the elements that are supposed to have been deleted! Editing the library was like this: # Inside docx.document.Document @property def elements(self): return self._body.elements @elements.setter def elements(self, value=None): self._body.elements = value gc.collect() return value The other part: # Inside docx.blkcntnr.BlockItemContainer @property def elements(self): """ A list containing the elements in this container (paragraph and tables), in document order. """ return [element(item,self.part) for item in self._element.getchildren()] @elements.setter def elements(self, value): """ A list containing the elements in this container (paragraph and tables), in document order. """ return value Related question [Update] If I did add a setter for this property: # docx.document.Document @property def elements(self): return self._body.elements Should I also add a setter for the property: # docx.blkcntnr.BlockItemContainer @property def elements(self): """ A list containing the elements in this container (paragraph and tables), in document order. """ return [element(item,self.part) for item in self._element.getchildren()] Because the value of document.elements is actually the value from document._body.elements, am I right? Any help would be appreciated! | The main "Attribute Error" issue, @Jasmijn already covered... the setter actually needs to set something. In regards to how to provide a setter for elements: First we need to figure out where elements comes from: Document.elements comes from [Document]._body.elements [Document]._body.elements comes from _Body, which inherits BlockItemContainer.elements BlockItemContainer.elements builds its elements list dynamically from [BlockItemContainer]._element.getchildren() [BlockItemContainer]._element is equal to [Document]._element.body [Document]._element comes from extending ElementProxy, and is the first argument passed to Document's constructor In a very roundabout way, given element passed to Document, the document's elements are derived from: element.body.getchildren().
(A bit tricky tracking down the lookup chain, but that's just what you get when there's a lot of abstraction, or perhaps poor object oriented design) Now to track down what exactly getchildren() does: Looks like the element passed to Document is from the included oxml package oxml is itself a wrapper around lxml Looks like the relevant classes are actually in Cython. As far as I can tell, the _Element class is where getchildren() is ultimately defined (etree.pyx) getchildren() calls _collectChildren (apihelpers.pxi), which gives you an idea of how the internal element structure is setup Given that the root implementation is Cython is going to complicate things, but I see that the _Element class implements some additional methods which you could make use of, in particular: clear() and extend(). So a possible implementation (which I've tested and appears to work): # inside docx.document.Document @elements.setter def elements(self, lst): cython_el = self._element.body cython_el.clear() cython_el.extend(lst) I'll disagree with @Jasmijn here and say you don't need to provide a setter for BlockItemContainer as well, since that's a private class. You could also expose other _Element methods directly on the Document object if desired, like clear(). | 4 | 2 |
74,836,347 | 2022-12-17 | https://stackoverflow.com/questions/74836347/sqlalchemy-session-with-autocommit-true-does-not-commit | I'm trying to use a session with autocommit=true to create a row in a table, and it does not seem to be working. The row is not saved to the table. import os import sqlalchemy from sqlalchemy import Table from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import sessionmaker from sqlalchemy import Column, create_engine, String db_hostname = os.environ['DB_HOSTNAME'] db_username = os.environ['DB_USERNAME'] db_password = os.environ['DB_PASSWORD'] db_servicename = os.environ['DB_SERVICENAME'] engine_string = f"postgresql://{db_username}:{db_password}@{db_hostname}:5432/{db_servicename}" engine = create_engine(engine_string, isolation_level='REPEATABLE READ', poolclass=sqlalchemy.pool.NullPool ) base = declarative_base() class Testing(base): __tablename__ = 'testing' name = Column(String, primary_key=True) comment = Column(String) base.metadata.create_all(engine) S1 = sessionmaker(engine) with S1() as session: test1 = Testing(name="Jimmy", comment="test1") session.add(test1) session.commit() S2 = sessionmaker(engine, autocommit=True) with S2() as session: test2 = Testing(name="Johnny", comment="test2") session.add(test2) In this code, the first row with name="Jimmy" and an explicit session.commit() is saved to the table. But the second row with name="Johnny" is not saved. Specifying autocommit=True on the session appears to have no effect. What is the cause? | If you enable the SQLALCHEMY_WARN_20=1 environment variable you will see RemovedIn20Warning: The Session.autocommit parameter is deprecated and will be removed in SQLAlchemy version 2.0. … The "2.0 way" to accomplish that "autocommit" behaviour is to do S2 = sessionmaker(engine) with S2() as session, session.begin(): test2 = Testing(name="Johnny", comment="test2") session.add(test2) # no explicit session.commit() required The changes will automatically be committed when the context manager (with block) exits, provided that no errors have occurred. | 5 | 9 |
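Since SQLAlchemy 1.4.24 the sessionmaker itself exposes a begin() shorthand that combines the two context managers from the answer; a small sketch, assuming engine and Testing from the question are in scope:

from sqlalchemy.orm import sessionmaker

Session = sessionmaker(engine)

with Session.begin() as session:      # opens a session plus a transaction
    session.add(Testing(name="Johnny", comment="test2"))
# commits on clean exit, rolls back on exception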
74,823,596 | 2022-12-16 | https://stackoverflow.com/questions/74823596/type-a-function-that-takes-a-tuple-and-returns-a-tuple-of-the-same-length-with-e | I want to write a function that takes a tuple and returns a tuple of the same size but with each element wrapped in optional. Pseudo code: from typing import TypeVar T = TypeVar("T", bound=tuple[dict[str, str], ...]) def f(tup: T) -> Map[Optional, T]: # Dummy implementation return [None if ... else el for el in tup] Here Map is a made-up, type-level function that wraps each returned element's type in Optional. Concretely, if the input type was e.g. tuple[dict[str, str], dict[str, str]] I want the return type to be tuple[Optional[dict[str, str]], Optional[dict[str, str]]]. | TLDR: Use @overload to define a practically sufficient number of items and a less strictly typed catch-all case. There is currently no way to unpack type variables from a tuple and transform each of them. However, for practical usage it is usually sufficient to explicitly type cases for a low number of items. This is done using @overload to collect several explicitly typed cases. A less well-typed catch-all case for an arbitrary number of items retains some type-checking capabilities in any case. from typing import overload, Optional, TypeVar T1 = TypeVar("T1") T2 = TypeVar("T2") T3 = TypeVar("T3") TN = TypeVar("TN") # manually defined cases for usually expected item counts @overload def g(t: tuple[T1]) -> tuple[Optional[T1]]: ... @overload def g(t: tuple[T1, T2]) -> tuple[Optional[T1], Optional[T2]]: ... @overload def g(t: tuple[T1, T2, T3]) -> tuple[Optional[T1], Optional[T2], Optional[T3]]: ... # catch all case when none of the explicit item counts apply @overload def g(t: tuple[TN, ...]) -> tuple[Optional[TN], ...]: ... # actual runtime implementation def g(tup): return tuple(None if ... else el for el in tup) The number of item count overloads is a trade-off between covering actually needed case and maintaining the largely duplicated signatures; it is common to define up to about half a dozen. The standard library (via typeshed) currently defines up to 5 explicit element types, for example for zip. | 3 | 2 |
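The effect of the overloads is easiest to see with reveal_type under a type checker, assuming g from this entry is in scope; the tuples below are arbitrary examples:

from typing import reveal_type   # Python 3.11+; checkers understand it regardless

t1 = g((1, "a"))
reveal_type(t1)   # checker: tuple[int | None, str | None]

t2 = g((1, "a", 1.0, b""))   # falls through to the catch-all overload
reveal_type(t2)   # checker: tuple of optional elements, arbitrary length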
74,823,526 | 2022-12-16 | https://stackoverflow.com/questions/74823526/pydantic-from-orm-to-load-django-model-with-related-list-field | I have the following Django models: from django.db import models class Foo(models.Model): id: int name = models.TextField(null=False) class Bar(models.Model): id: int foo = models.ForeignKey( Foo, on_delete=models.CASCADE, null=False, related_name="bars", ) And Pydantic models (with orm_mode set to True): from pydantic import BaseModel class BarPy(BaseModel): id: int foo_id: int class FooPy(BaseModel): id: int name: str bars: list[BarPy] Now I want to perform a query on the model Foo and load it into FooPy, so i wrote this query: foo_db = Foo.objects.prefetch_related("bars").all() pydantic_model = FooPy.from_orm(foo_db) But it gives me this error: pydantic.error_wrappers.ValidationError: 1 validation error for FooPy bars value is not a valid list (type=type_error.list) I am able to do it when explicitly using the FooPy constructor and assigning the values manually but i want to use from_orm. | The bars attribute on your Foo model is a ReverseManyToOneDescriptor that just returns a RelatedManager for the Bar model. As with any manager in Django, to get a queryset of all the instances managed by it, you need to call the all method on it. Typically you would do something like foo.bars.all(). You can add your own custom validator to FooPy and make it pre=True to grab all the related Bar instances and pass a sequence of them along to the default validators: from django.db.models.manager import BaseManager from pydantic import BaseModel, validator ... class FooPy(BaseModel): id: int name: str bars: list[BarPy] @validator("bars", pre=True) def get_all_from_manager(cls, v: object) -> object: if isinstance(v, BaseManager): return list(v.all()) return v Note that it is not enough to just do .all() because that will return a queryset, which will not pass the default sequence validator built into Pydantic models. You would get the same error. You need to give it an actual sequence (e.g. list or tuple). A QuerySet is not a sequence, but an iterable. But you can consume it and turn it into a sequence, by calling for example list on it. More generalized version You could make an attempt at generalizing that validator and add it to your own (Pydantic) base model. Something like this should work on any field you annotate as list[Model], with Model being a subclass of pydantic.BaseModel: from django.db.models.manager import BaseManager from pydantic import BaseModel, validator from pydantic.fields import ModelField, SHAPE_LIST ... class CustomBaseModel(BaseModel): @validator("*", pre=True) def get_all_from_manager(cls, v: object, field: ModelField) -> object: if not (isinstance(field.type_, type) and issubclass(field.type_, BaseModel)): return v if field.shape is SHAPE_LIST and isinstance(v, BaseManager): return list(v.all()) return v I have not thoroughly tested this, but I think you get the idea. Side note It is worth mentioning that prefetch_related has nothing to do with the problem. The problem and its solution are the same, whether you do that or not. The difference is that without prefetch_related, you'll trigger additional database queries, when calling from_orm and thus executing the validator that consumes the queryset of .bars.all(). | 4 | 1 |
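The validator's behaviour can be exercised without Django by faking an object with an .all() method; FakeManager below is a made-up stand-in for Django's RelatedManager:

from pydantic import BaseModel, validator

class FakeManager:                    # stand-in for a Django RelatedManager
    def __init__(self, data):
        self._data = data

    def all(self):
        return self._data

class BarPy(BaseModel):
    id: int
    foo_id: int

class FooPy(BaseModel):
    bars: list[BarPy]

    @validator("bars", pre=True)
    def get_all_from_manager(cls, v):
        return list(v.all()) if hasattr(v, "all") else v

print(FooPy(bars=FakeManager([{"id": 1, "foo_id": 1}])))
# bars=[BarPy(id=1, foo_id=1)]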
74,825,382 | 2022-12-16 | https://stackoverflow.com/questions/74825382/should-i-type-something-as-optional-if-none-breaks-the-logic-of-the-function-bu | Sometimes at the beginning of my Python functions I check whether the correct variable types were used, or whether something was passed as None. For example: def fails_with_none(x: int): if x is None: raise TypeError('Function fails with None!') return x + 1 I am hesitating whether x should be typed as int or as Optional[int]. The reason for using just int is that, semantically, the function requires an int. However, if I think about this from a programming perspective, the function handles both integer and None inputs. Is there a recommended way? For example, according to this answer the Optional hint means "either an object of the specific type is required, or None is required". However, the question still remains: Required by what? If we take it to mean "required by the logic of the function", then it should be typed as int. If we take it to mean "required by the code being executed", then since we check whether x is None, it should be included as a possible type hint. | TL;DR: If x being None has meaning in your function, then annotate x accordingly. Otherwise don't, and don't check it. Sometimes at the beginning of my Python functions I check whether the correct variable types were used, or whether something was passed as None. I am not saying this is absolutely wrong, but there is no doubt that this goes hardcore against the philosophy of Python. A main pillar of the paradigm of the language is dynamic typing. This is also why type annotations in Python are commonly referred to as hints. This is to signal that the language has no built-in mechanism for enforcing this and that by convention functions do not force type conformity on users. The contract is this: "Hey user, my function f accepts an argument of type int. That is how I designed it. You may try and use it differently, but at your own peril. I don't vouch for it working as intended if you pass anything other than int to it." If you insist on enforcing types, then the only consistent way IMHO is to do a negative check against the type you annotated with. If you have f(x: int), then it would be consistent to check like this: def f(x: int) -> None: if not isinstance(x, int): raise TypeError print(x**2) Because the alternative is arbitrarily checking against any other type that x might also be. Specifically, why would you check for it being None? Why not also check against it being a str? Or an empty tuple? Or literally anything else? Since you provided no additional context, I have to assume from your example that None has no special semantic meaning in your function other than that it triggers the TypeError. I am hesitating whether x should be typed as int or as Optional[int]. The reason for using just int is that, semantically, the function requires an int. There is your answer then. Semantics is king. If None has no other meaning in your function, then you should completely ignore its existence. It's up to the user of the function to adhere to your type annotations or disregard them at his own peril. In this concrete case, this would not even be particularly useful because the default built-in Python error for misusing numeric operands is this: TypeError: unsupported operand type(s) for +: 'NoneType' and 'int' So you don't even change the error type, just the message in this particular case.
Don't get me wrong, there are obviously cases where passing None as an argument is meaningful. If an argument being None has meaning in a function, then obviously you should type it accordingly. Here is a crude example: def f(x: int | None) -> None: if x is None: print("Ok then.") else: print(f"{x**2=}") In this case the function behaves differently depending on whether you pass it an int or a NoneType. Thus, a union of those is the proper type annotation for x. And Optional[int] is just an (arguably poorly named) equivalent of that union. Thus, to me, your question title is a non-starter: Should I type something as Optional if None breaks the logic of the function, but I do check for it inside the body? If you check for None, then None does not break the logic of your function; it is specifically part of the logic and thus should be accounted for in the parameter type annotation. The bigger question (as I laid out above) is whether you should check for it. If you only check so that you can yell at the programmer in your own customized way for using None, I don't see the value in that. | 3 | 3 |
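The "contract" described above is enforced by static checkers rather than at runtime, which is worth seeing concretely; the snippet is illustrative:

def f(x: int) -> int:
    return x + 1

f(None)  # raises TypeError at runtime, but mypy already reports:
# error: Argument 1 to "f" has incompatible type "None"; expected "int"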
74,824,553 | 2022-12-16 | https://stackoverflow.com/questions/74824553/how-to-left-align-column-values-in-pandas-to-string | I want to save a pandas dataframe to a file with to_string(), but want to left align the column values. With to_string(justify=left), only column labels are left aligned. For example with pd.DataFrame({'col1': [' 123 ', ' 1234'], 'col2': ['1', '444441234']}).to_string(index=False) I get the following result: I want to get rid of the whitespaces in the first row by left aligning the column values. | The to_string method provides support for per-column formatters. It allows you to use specific formats for all or some columns. A rather simple way is to create a format and then apply it with a lambda (binding fmt as a default argument so each lambda keeps its own format). The only picky part is that to use left formatting, you will have to know the width of the column. For your provided data, you could left align everything with: df = pd.DataFrame({'col1': [' 123 ', ' 1234'], 'col2': ['1', '444441234']}) widths = [4, 9] formats = ['{' + f':<{i}' + '}' for i in widths] print(df.to_string(index=None, col_space=widths, formatters= [(lambda x, fmt=fmt: fmt.format(x)) for fmt in formats], justify='left')) to get: col1 col2 123 1 1234 444441234 You could also left align only some columns by using a dict for the formatters parameter: print(df.to_string(index=None, formatters= {'col2': (lambda x: '{:<9}'.format(x))}, justify='left')) gives: col1 col2 123 1 1234 444441234 | 4 | 5 |
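The column widths don't have to be hard-coded; they can be derived from the data, which keeps the per-column formatters in sync automatically. This sketch is an addition to the accepted answer:

import pandas as pd

df = pd.DataFrame({'col1': [' 123 ', ' 1234'], 'col2': ['1', '444441234']})

widths = {c: max(df[c].astype(str).str.len().max(), len(c)) for c in df.columns}
formatters = {c: (lambda x, w=w: f'{x:<{w}}') for c, w in widths.items()}

print(df.to_string(index=False, justify='left', formatters=formatters))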
74,822,097 | 2022-12-16 | https://stackoverflow.com/questions/74822097/purpose-of-stop-gradient-in-jax-nn-softmax | jax.nn.softmax is defined as: def softmax(x: Array, axis: Optional[Union[int, Tuple[int, ...]]] = -1, where: Optional[Array] = None, initial: Optional[Array] = None) -> Array: x_max = jnp.max(x, axis, where=where, initial=initial, keepdims=True) unnormalized = jnp.exp(x - lax.stop_gradient(x_max)) return unnormalized / jnp.sum(unnormalized, axis, where=where, keepdims=True) I'm particularly interested in the lax.stop_gradient(x_max) part. I would love an explanation for why it's needed. From a practical standpoint, it seems that stop_gradient doesn't change the gradient calculation: import jax import jax.numpy as jnp def softmax_unstable(x): return jnp.exp(x) / jnp.sum(jnp.exp(x)) def softmax_stable(x): x = x - jnp.max(x) return jnp.exp(x) / jnp.sum(jnp.exp(x)) def softmax_stop_gradient(x): x = x - jax.lax.stop_gradient(jnp.max(x)) return jnp.exp(x) / jnp.sum(jnp.exp(x)) # example input x = jax.random.normal(jax.random.PRNGKey(123), (100,)) # make sure all forward passes are equal a = softmax_unstable(x) b = softmax_stable(x) c = softmax_stop_gradient(x) d = jax.nn.softmax(x) assert jnp.allclose(a, b) and jnp.allclose(b, c) and jnp.allclose(c, d) # make sure all gradient calculations are the same a = jax.grad(lambda x: -jnp.log(softmax_unstable(x))[2])(x) b = jax.grad(lambda x: -jnp.log(softmax_stable(x))[2])(x) c = jax.grad(lambda x: -jnp.log(softmax_stop_gradient(x))[2])(x) d = jax.grad(lambda x: -jnp.log(jax.nn.softmax(x))[2])(x) assert jnp.allclose(a, b) and jnp.allclose(b, c) and jnp.allclose(c, d) # make sure all gradient calculations are the same, this time we use softmax functions twice a = jax.grad(lambda x: -jnp.log(softmax_unstable(softmax_unstable(x)))[2])(x) b = jax.grad(lambda x: -jnp.log(softmax_stable(softmax_stable(x)))[2])(x) c = jax.grad(lambda x: -jnp.log(softmax_stop_gradient(softmax_stop_gradient(x)))[2])(x) d = jax.grad(lambda x: -jnp.log(jax.nn.softmax(jax.nn.softmax(x)))[2])(x) assert jnp.allclose(a, b) and jnp.allclose(b, c) and jnp.allclose(c, d) ^ all implementations are equal, even the one where we apply the x - x_max trick but WITHOUT stop_gradient. | First off, the reason for subtracting x_max at all is because it prevents overflow for large inputs. For example: x = jnp.array([1, 2, 1000]) print(softmax_unstable(x)) # [ 0. 0. nan] print(softmax_stable(x)) # [0. 0. 1.] print(softmax_stop_gradient(x)) # [0. 0. 1.] As for why we use stop_gradient here, we can show analytically that the max(x) term cancels-out in the gradient computation, and so we know a priori that its gradient cannot affect the gradient of the overall function. Marking it as stop_gradient communicates this to JAX's autodiff machinery, leading to a more efficient gradient computation. You can see this efficiency in action by printing the jaxpr for each version of the gradient function: x = jnp.float32(1) print(jax.make_jaxpr(jax.grad(softmax_stable))(x)) { lambda ; a:f32[]. 
let b:f32[] = reduce_max[axes=()] a c:f32[] = reshape[dimensions=None new_sizes=()] b d:bool[] = eq a c e:f32[] = convert_element_type[new_dtype=float32 weak_type=False] d f:f32[] = reduce_sum[axes=()] e g:f32[] = sub a b h:f32[] = exp g i:f32[] = exp g j:f32[] = reduce_sum[axes=()] i _:f32[] = div h j k:f32[] = integer_pow[y=-2] j l:f32[] = mul 1.0 k m:f32[] = mul l h n:f32[] = neg m o:f32[] = div 1.0 j p:f32[] = mul n i q:f32[] = mul o h r:f32[] = add_any p q s:f32[] = neg r t:f32[] = div s f u:f32[] = mul t e v:f32[] = add_any r u in (v,) } print(jax.make_jaxpr(jax.grad(softmax_stop_gradient))(x)) { lambda ; a:f32[]. let b:f32[] = reduce_max[axes=()] a c:f32[] = reshape[dimensions=None new_sizes=()] b d:bool[] = eq a c e:f32[] = convert_element_type[new_dtype=float32 weak_type=False] d _:f32[] = reduce_sum[axes=()] e f:f32[] = stop_gradient b g:f32[] = sub a f h:f32[] = exp g i:f32[] = exp g j:f32[] = reduce_sum[axes=()] i _:f32[] = div h j k:f32[] = integer_pow[y=-2] j l:f32[] = mul 1.0 k m:f32[] = mul l h n:f32[] = neg m o:f32[] = div 1.0 j p:f32[] = mul n i q:f32[] = mul o h r:f32[] = add_any p q in (r,) } The second version requires fewer computations to achieve the same result, because we've essentially told the autodiff machinery it does not have to worry about differentiating max(x). | 3 | 3 |
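A minimal sketch of the invariance the answer relies on: softmax(x - c) is the same function of x for any constant c, so the derivative with respect to the subtracted max is exactly zero, which is why it is safe (and cheaper) to wrap it in stop_gradient:

```python
import jax
import jax.numpy as jnp

def softmax_shifted(x, c):
    # subtracting any constant c cancels between numerator and denominator
    z = x - c
    return jnp.exp(z) / jnp.sum(jnp.exp(z))

x = jnp.array([0.5, -1.0, 2.0])
# gradient of any output w.r.t. the shift c is 0 (up to float round-off)
print(jax.grad(lambda c: softmax_shifted(x, c)[0])(1.234))
```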
74,818,864 | 2022-12-16 | https://stackoverflow.com/questions/74818864/does-mypy-require-init-to-have-none-annotation | Seeing kind of contradictory results: class A: def __init__(self, a: int): pass The snippet above passes a mypy test, but the one below doesn't. class A: def __init__(self): pass Any idea why? | This is documented here. When you have at least one (annotated) argument for a function, it is considered (at least partially) typed. When neither arguments nor return values are annotated, the function is considered untyped. This distinction is important because by default, mypy does not check the bodies of untyped functions at all. This behavior is configurable via check_untyped_defs. Note that the complaint about the missing return type only arises, if you set disallow_untyped_defs (or strict). Otherwise neither of your examples will trigger an error. The __init__ method receives special treatment because people complained that always returns None and they did not want to explicitly write out the return type accordingly. This is why that behavior is so inconsistent. class A: def __init__(self, x: int): # this is fine with `mypy` pass def foo(self, x: int): print("hi") class B: def __init__(self): pass def foo(self): print("hi") Out of all these methods, only A.__init__ is considered fully typed (because of the implicit None return by __init__). All the other methods will trigger errors with disallow_untyped_defs set: error: Function is missing a return type annotation [no-untyped-def] I don't particularly like this approach, but that is the way they decided to handle it. | 7 | 9 |
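To make the special-casing concrete, a small sketch; the comments describe what mypy reports when run with --disallow-untyped-defs (with default settings none of these classes is flagged):

```python
class A:
    def __init__(self, x: int):  # OK: an annotated argument is enough,
        self.x = x               # the implicit `-> None` is accepted

class B:
    def __init__(self):          # error: Function is missing a return type
        self.x = 0               # annotation  [no-untyped-def]

class C:
    def __init__(self) -> None:  # OK: explicit `-> None` satisfies strict mode
        self.x = 0
```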
74,822,475 | 2022-12-16 | https://stackoverflow.com/questions/74822475/iterate-through-nested-dict-check-bool-values-to-get-indexes-of-array | I have a nested dict with boolean values, like: assignments_dict = {"first": {'0': True, '1': True}, "second": {'0': True, '1': False}, } and an array, with a number of elements equal to the number of True values in the assignments_dict: results_array = [10, 11, 12] and, finally, a dict for results structured this way: results_dict = {"first": {'0': {'output': None}, '1': {'output': None}}, "second": {'0': {'output': None}, '1': {'output': None}}, } I need to go through the fields in assignment_dict, check if they are True, and if they are take the next element of results_array and substitute it to the corresponding field in results_dict. So, my final output should be: results_dict = {'first': {'0': {'output': 10}, '1': {'output': 11}}, 'second': {'0': {'output': 12}, '1': {'output': None}}} I did it in a very simple way: # counter used to track position in array_outputs counter = 0 for outer_key in assignments_dict: for inner_key in assignments_dict[outer_key]: # check if every field in assignments_dict is True/false if assignments_dict[outer_key][inner_key]: results_dict[outer_key][inner_key]["output"] = results_array[counter] # move on to next element in array_outputs counter += 1 but I was wondering if there's a more pythonic way to solve this. | results_iter = iter(results_array) for key, value in assignments_dict.items(): for inner_key, inner_value in value.items(): if inner_value: results_dict[key][inner_key]['output'] = next(results_iter) print(results_dict) Output: {'first': {'0': {'output': 10}, '1': {'output': 11}}, 'second': {'0': {'output': 12}, '1': {'output': None}}} | 3 | 3 |
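If you don't need to mutate a pre-built results_dict, a nested dict comprehension over the same iterator can construct the result directly; a sketch equivalent to the accepted answer:

```python
results_iter = iter(results_array)
results_dict = {
    outer: {
        inner: {'output': next(results_iter) if flag else None}
        for inner, flag in inner_map.items()
    }
    for outer, inner_map in assignments_dict.items()
}
```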
74,819,091 | 2022-12-16 | https://stackoverflow.com/questions/74819091/single-after-dependency-version-specifier-in-setup-py | I'm looking at a setup.py with this syntax: from setuptools import setup setup( ... tests_require=["h5py>=2.9=mpi*", "mpi4py"] ) I understand the idea of the ">=" where h5py should be at least version 2.9, but I cannot for the life of me understand the =mpi* afterwards. Is it saying the version should somehow match the mpi version, while also being at least 2.9? I can't find anything that explains specifying Python package versions that also explains the use of a single =. The only other place I've found it used is some obscure blog post that seemed to imply it was sort of like importing the package with an alias, which doesn't make much sense to me; and also the mpi4py docs that include a command line snippet conda install -c conda-forge h5py=*=mpi* netcdf4=*=mpi* but don't really explain it. | Short answer The =mpi* qualifier says that you want to install h5py pre-compiled with MPI support. In conda's match specification syntax the second = introduces a build-string pattern (the full form is name=version=build), which is conda-specific and has no meaning for pip/setuptools. Details If you look at the documentation for h5py, you'll see references to having to build it with or without MPI explicitly (e.g., see https://docs.h5py.org/en/latest/build.html). When you look at the conda-forge download files (https://anaconda.org/conda-forge/h5py/files) you'll also see that there are a bunch of nompi variants and a bunch of mpi variants. Adding =mpi* triggers getting a version that's been compiled with MPI so that you get parallel MPI support, while I suspect the default version would come without MPI support. Experimentation with and without When I do conda install -c conda-forge h5py=3.7, conda proposes to download this bundle: h5py-3.7.0-nompi_py39hd4deaf1_100 But when I did conda install -c conda-forge h5py=3.7=mpi*, I expected a ...-mpi_py... bundle but instead it just failed because I'm on Windows and MPI is not supported on Windows as far as I can tell. (And that makes sense, HPC clusters with MPI run on Linux.) | 6 | 2 |
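A quick way to verify which build variant you actually ended up with, from Python itself (both attributes are part of h5py's public API):

```python
import h5py

print(h5py.version.info)        # build/configuration summary
print(h5py.get_config().mpi)    # True for the mpi_* builds, False for nompi
```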
74,804,358 | 2022-12-14 | https://stackoverflow.com/questions/74804358/combining-large-xml-files-efficiently-with-python | I have about 200 xml files ranging from 5MB to 50MB, with 80% being <10MB. These files contain multiple elements with both overlapping and unique data. My goal is to combine all these files by performing a logical union over all the elements. The code seems to work but gets exponentially slower the more files it has to process. For example, it takes about 20sec to process the first 5 files, about a minute to process the next five, about 5 min the next five and so on, while also taking significantly more memory than the sum total of all the files. With the overall process running on the 4th hour as I type this. This is, obviously, a 'to be expected' effect, considering that lookup needs to happen on an ever larger tree. Still, I wonder if there are ways to at least diminish this effect. I have tried implementing a form of simple caching, but I didn't notice any significant improvement. I also tried multiprocessing, which does help, but adds extra complexity and pushes the problem to the hardware level, which does not feel very optimal. Is there something I can do to improve the performance in any way? Note: I had to obfuscate parts of code and data due to confidentiality reasons. Please don't hesitate to inform me if it breaks the example code: import time import lxml.etree from lxml.etree import Element # Edit2: added timing prints def process_elements(files: list[str], indentifier: int) -> lxml.etree._Element | None: base_el = Element('BASE') i = 0 cache = {} # Edit1. Missed this line start = time.time() time_spent_reading = 0 lookup_time = [0, 0] append_new_el_time = [0, ] cache_search_time = [0, 0] recursive_calls_counter = [0, ] for file in files: i += 1 print(f"Process: {indentifier}, File {i} of {len(files)}: {file}") print("Reading file...") start_read = time.time() tree = lxml.etree.parse(f'data/{file}').getroot() print(f"Reading file took {time.time() - start_read} seconds") print("Since start: ", time.time() - start) packages = tree.find('BASE') print("Starting walk...") sart_walked = time.time() for package in packages: walk(package, base_el, cache, lookup_time, append_new_el_time, cache_search_time, recursive_calls_counter) print(f"Walk took {time.time() - sart_walked} seconds") print("Since start: ", time.time() - start) if indentifier == -1: return base_el else: print("Timing results:") print("Time spent reading: ", time_spent_reading) print("Time spent on lookup: ", lookup_time[0]) print("Time spent on append: ", append_new_el_time[0]) print("Time spent on cache search: ", cache_search_time[0]) base_el.getroottree().write( f'temp{indentifier}.xml', encoding='utf-8') return None def walk(element: lxml.etree._Element, reference: lxml.etree._Element, cache: dict, lookup_time, append_new_el_time, cache_search_time, recursive_calls_counter) -> None: recursive_calls_counter[0] += 1 children = element.iterchildren() elid = f"{element.tag}" element_name = element.get('some-id-i-need') if element_name is not None: elid += f'[@some-id-i-need="{element_name}"]' cache_id = str(id(reference)) + "_" + elid cache_search_time_start = time.time() relevant_data = cache.get(cache_id) cache_search_time[0] += time.time() - cache_search_time_start # if element is found either in cache or in the new merged object # continue to its children # otherwise, element does not exist in merged object.
# Add it to the merged object and to cache if relevant_data is None: # I believe this lookup may be what takes the most time # hence my attempt to cache this lookup_time_start = time.time() relevant_data = reference.find(elid) lookup_time[0] += time.time() - lookup_time_start lookup_time[1] += 1 else: # cache hit cache_search_time[1] += 1 if relevant_data is None: append_new_el_time_start = time.time() reference.append(element) append_new_el_time[0] += time.time() - append_new_el_time_start return else: cache.setdefault(cache_id, relevant_data) # if element has no children, loop will not run for child in children: walk(child, relevant_data, cache, lookup_time, append_new_el_time, cache_search_time, recursive_calls_counter) # to run: process_elements(os.listdir("data"), -1) example data: file1 <BASE> <elem id="1"> <data-tag id="1"> <object id="23124"> <POS Tag="V" /> <grammar type="STEM" /> <Aspect type="IMPV" /> <Number type="S" /> </object> <object id="128161"> <POS Tag="V" /> <grammar type="STEM" /> <Aspect type="IMPF" /> </object> </data-tag> </elem> </BASE> file2 <BASE> <elem id="1"> <data-tag id="1"> <object id="23124"> <concept type="t1" /> </object> <object id="128161"> <concept type="t2" /> </object> </data-tag> <data-tag id="2"> <object id="128162"> <POS Tag="P" /> <grammar type="PREFIX" /> <Tag Tag="bi+" /> <concept type="t3" /> </object> </data-tag> </elem> </BASE> result: <BASE> <elem id="1"> <data-tag id="1"> <object id="23124"> <POS Tag="V" /> <grammar type="STEM" /> <Aspect type="IMPV" /> <Number type="S" /> <concept type="t1" /> </object> <object id="128161"> <POS Tag="V" /> <grammar type="STEM" /> <Aspect type="IMPF" /> <concept type="t2" /> </object> </data-tag> <data-tag id="2"> <object id="128162"> <POS Tag="P" /> <grammar type="PREFIX" /> <Tag Tag="bi+" /> <concept type="t3" /> </object> </data-tag> </elem> </BASE> Edit2: Timing results after processing 10 files (about 60MB, 1m 24.8s): Starting process... Process: 102, File 1 of 10: Reading file... Reading file took 0.1326887607574463 seconds Since start: 0.1326887607574463 preprocesing... merging... Starting walk... Walk took 0.8433401584625244 seconds Since start: 1.0600318908691406 Process: 102, File 2 of 10: Reading file... Reading file took 0.04700827598571777 seconds Since start: 1.1070401668548584 preprocesing... merging... Starting walk... Walk took 1.733034610748291 seconds Since start: 2.8680694103240967 Process: 102, File 3 of 10: Reading file... Reading file took 0.041702985763549805 seconds Since start: 2.9097723960876465 preprocesing... merging... ... Time spent on lookup: 79.53011083602905 Time spent on append: 1.1502337455749512 Time spent on cache search: 0.11017322540283203 Cache size: 30176 # Edit3: extra data Number of cache hits: 112503 Cache size: 30177 Number of recursive calls: 168063 As an observation, I do expect significant overlap between the files, maybe the small cache search time indicates that something is wrong with how I implemented caching? Edit3: It does seem that I do get a lot of hits. but the strange part is that if I comment out the cache search part, it makes almost no difference in performance. 
In fact, it ran marginally faster without it (although not sure if a few seconds is a significant difference or just random chance in this case) relevant_data = None # cache.get(cache_id) log with cache commented out: Time spent on lookup: 71.13456320762634 Number of lookups: 168063 Time spent on append: 3.9656710624694824 Time spent on cache search: 0.020023584365844727 Number of cache hits: 0 Cache size: 30177 Number of recursive calls: 168063 | Caching all identifiers while proceeding seems to work well and doesn't significantly slow down as more data is added. The following code does this: def xml_union(files, loader): existing = {} path = [] def populatewalk(elem): pid = elem.get('id') ident = (elem.tag, pid) path.append(ident) if pid is not None: existing[tuple(path)] = elem for child in elem: populatewalk(child) popped = path.pop() assert popped is ident def walk(existing_parent, elem): pid = elem.get('id') if pid is None: existing_parent.append(elem) # make sure children are populated return populatewalk(elem) ident = (elem.tag, pid) path.append(ident) tpath = tuple(path) existing_elem = existing.get(tpath) if existing_elem is None: existing_parent.append(elem) existing[tpath] = elem for child in elem: populatewalk(child) else: existing_elem.attrib.update(elem.items()) for child in elem: walk(existing_elem, child) popped = path.pop() assert popped is ident first, *remain = files root = loader(first) for elem in root: populatewalk(elem) for text in remain: ri = loader(text) if root.tag != ri.tag: raise ValueError(f"root tag {root.tag!r} does not equal {ri.tag!r}") for elem in ri: walk(root, elem) return root The above code assumes that you always want to use an id attribute to identify elements, but that should be easy to change. It also is slightly more general in that it keeps track of the element hierarchy of when doing the union, while your code only seems to care that an element with a given ID can be found. Not sure if that matters! This can be tested with the following line, with f1 and f2 set as strings containing the tests you sent above. print(etree.tostring(xml_union([f1, f2], etree.fromstring)).decode()) Writing this didn't take too long, but convincing myself it's somewhat correct and performant took longer. I ended up writing a test harness that generates 10 files that are ~12MiB, runs the above code on these files, then writes the result to a ~87MiB file, then makes sure that file is exactly the union of what was generated. The part that uses xml_union looks like: from time import time def fileloader(path): print(f"loading {path}") return etree.parse(path).getroot() t0 = time() new_root = xml_union( [f'large-{i:02}.xml' for i in range(10)], fileloader, ) t1 = time() with open(f'merged.xml', 'wb') as fd: print("writing merged") etree.ElementTree(new_root).write(fd, pretty_print=True) t2 = time() print(f"union={t1-t0:.2f} write={t2-t1:.2f}") My 1.6GHz laptop takes ~23 seconds to merge these files, with no slowdown noticed with later files. Writing the resulting object takes Python ~2 seconds. 
The test harness is much more fiddly, and looks like: from itertools import product from random import choices def randomizer(): num = 1 def next(n, rb): nonlocal num for _ in range(n): yield num, choices(rb, k=len(terminals)) num += 1 return next rootids = list(range(10)) roots = [etree.Element('BASE') for _ in rootids] obj_elems = {} dt_elems = {} el_elems = {} def get_obj(root_id, el_id, dt_id, obj_id): obj = obj_elems.get((root_id, obj_id)) if obj is not None: return obj obj = obj_elems[(root_id, obj_id)] = etree.Element('object', id=str(obj_id)) dt = dt_elems.get((root_id, dt_id)) if dt is not None: dt.append(obj) return obj dt = dt_elems[(root_id, dt_id)] = etree.Element('data-tag', id=str(dt_id)) dt.append(obj) el = el_elems.get((root_id, el_id)) if el is not None: el.append(dt) return obj el = el_elems[(root_id, el_id)] = etree.Element('elem', id=str(el_id)) el.append(dt) roots[root_id].append(el) return obj elmaker = randomizer() dtmaker = randomizer() objmaker = randomizer() for el_id, el_roots in elmaker(1000, rootids): for dt_id, dt_roots in dtmaker(100, el_roots): for obj_id, obj_roots in objmaker(len(terminals), dt_roots): for key, root_id in zip(terminals, obj_roots): get_obj(root_id, el_id, dt_id, obj_id).append( etree.Element(key, an='val') ) for root_id, root in zip(rootids, roots): with open(f'large-{root_id:02}.xml', 'wb') as fd: et = etree.ElementTree(root) et.write(fd, pretty_print=True) nelem = 1000 ndt = 100 nterm = len(terminals) expected_elems = set(str(i+1) for i in range(nelem)) expected_dts = set(str(i+1) for i in range(nelem*ndt)) expected_objs = set(str(i+1) for i in range(nelem*ndt*nterm)) expected_terms = set(product(expected_objs, terminals)) elem_seen = set() dt_seen = set() obj_seen = set() terms_seen = set() def check(el, tag, seen): assert el.tag == tag aid = el.attrib['id'] assert aid not in seen seen.add(aid) return aid for elem in etree.parse('merged.xml').getroot(): check(elem, 'elem', elem_seen) for dt in elem: check(dt, 'data-tag', dt_seen) for obj in dt: obj_id = check(obj, 'object', obj_seen) for term in obj: assert term.tag in terminals term_id = (obj_id, term.tag) assert term_id not in terms_seen terms_seen.add(term_id) assert elem_seen == expected_elems assert dt_seen == expected_dts assert obj_seen == expected_objs assert terms_seen == expected_terms Hopefully that test harness is useful to somebody else! | 4 | 2 |
74,799,676 | 2022-12-14 | https://stackoverflow.com/questions/74799676/how-to-run-a-hello-world-python-script-with-google-cloud-run | Forgive my ignorance.. I'm trying to learn how to schedule python scripts with Google Cloud. After a bit of research, I've seen many people suggest Docker + Google Cloud Run + Cloud Scheduler. I've attempted to get a "hello world" example working, to no avail. Code hello.py print("hello world") Dockerfile # For more information, please refer to https://aka.ms/vscode-docker-python FROM python:3.8-slim # Keeps Python from generating .pyc files in the container ENV PYTHONDONTWRITEBYTECODE=1 # Turns off buffering for easier container logging ENV PYTHONUNBUFFERED=1 WORKDIR /app COPY . /app # Creates a non-root user with an explicit UID and adds permission to access the /app folder # For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app USER appuser # During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug CMD ["python", "hello.py"] Steps Create a repo with Google Cloud Artifact Registry gcloud artifacts repositories create test-repo --repository-format=docker \ --location=us-central1 --description="My test repo" Build the image docker image build --pull --file Dockerfile --tag 'testdocker:latest' . Configure auth gcloud auth configure-docker us-central1-docker.pkg.dev Tag the image with a registry name docker tag testdocker:latest \ us-central1-docker.pkg.dev/gormanalysis/test-repo/testdocker:latest Push the image to Artifact Registry docker push us-central1-docker.pkg.dev/gormanalysis/test-repo/testdocker:latest Deploy to Google Cloud Run Error At this point, I get the error The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable. I've seen posts like this which say to add app.run(port=int(os.environ.get("PORT", 8080)),host='0.0.0.0',debug=True) but this looks like a flask thing, and my script doesn't use flask. I feel like i have a fundamental misunderstanding of how this is supposed to work. Any help would be appreciated it. | UPDATE I've documented my problem and solution in much more detail here » I had been trying to deploy my script as a Cloud Run Service. I should've tried deploying it as a Cloud Run Job. The difference is that cloud run services require your script to listen for a port. jobs do not. Confusingly, you cannot deploy a cloud run job directly from Artifact Registry. You have to start from the cloud run dashboard. | 4 | 6 |
74,811,255 | 2022-12-15 | https://stackoverflow.com/questions/74811255/pylint-ignore-rules-on-git-action | I've used the default pylint workflow from GitHub Actions to check my project for any errors. There are some errors that I want to ignore though. If it were in VS Code you could ignore them in settings.json. How do I ignore them in GitHub Actions? name: Pylint on: [push] jobs: build: runs-on: ubuntu-latest strategy: matrix: python-version: ["3.10"] steps: - uses: actions/checkout@v3 - name: Set up Python ${{ matrix.python-version }} uses: actions/setup-python@v3 with: python-version: ${{ matrix.python-version }} - name: Install dependencies run: | python -m pip install --upgrade pip pip install pylint - name: Analysing the code with pylint run: | pylint $(git ls-files '*.py') | You can suppress specific lines with pylint's inline comments, e.g. # pylint: disable=missing-module-docstring (note that pylint does not honour flake8-style # noqa comments). To disable rules for the whole run, pass the -d/--disable option in the workflow step: pylint -d C0114,C0116 $(git ls-files '*.py') — this would disable the warnings with codes C0114 and C0116. Adding a .pylintrc (or a [tool.pylint] section in pyproject.toml) to the repository also works and keeps the workflow file unchanged. | 3 | 5 |
74,812,049 | 2022-12-15 | https://stackoverflow.com/questions/74812049/is-there-a-guarantee-that-code-after-yield-will-be-executed | Let's say we have a fixture that allocates some unmanaged resources and releases them like in the following example: @pytest.fixture def resource(): res = driver.Instance() yield res res.close() Is there a guarantee that the resource will be released even if something bad happens during the test that utilizes that fixture? If there is no such guarantee, maybe the following pattern would be better? @pytest.fixture def resource(request): res = driver.Instance() def finalize(): res.close() request.addfinalizer(finalize) return res | It depends on what happens before yield: If a yield fixture raises an exception before yielding, pytest won't try to run the teardown code after that yield fixture's yield statement. But, for every fixture that has already run successfully for that test, pytest will still attempt to tear them down as it normally would. You say, "even if something bad happens during the test that utilizes that fixture", which implies that the yield has executed. As long as the fixture yields, pytest will attempt to perform the teardown. I don't think manually finalizing offers any further guarantees: While yield fixtures are considered to be the cleaner and more straightforward option, there is another choice, and that is to add "finalizer" functions directly to the test's request-context object. It brings a similar result as yield fixtures, but requires a bit more verbosity. The section on safe fixtures offers some good tips on how to handle teardown code safely, nicely summarized by this line: The safest and simplest fixture structure requires limiting fixtures to only making one state-changing action each, and then bundling them together with their teardown code | 4 | 5 |
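A belt-and-braces sketch of the yield style, following the "one state-changing action per fixture" advice quoted above (driver stands in for the question's module):

```python
import pytest

@pytest.fixture
def resource():
    res = driver.Instance()  # if this raises, there is nothing to clean up yet
    try:
        yield res
    finally:
        res.close()          # reached on test failure and on teardown errors
```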
74,810,441 | 2022-12-15 | https://stackoverflow.com/questions/74810441/timezone-obtained-via-timezonefinder-america-ciudad-juarez-generates-error-u | I retrieve timezones from airports of the world, using TimezoneFinder().timezone_at applied on longitude and latitude of airports. And when I want to use these timezones (to compute times of departure and arrival of flights) everything works except America/Ciudad_Juarez. This simple code: from timezonefinder import TimezoneFinder from pytz import timezone tz = TimezoneFinder().timezone_at(lng=-106.48333, lat=31.73333) # retrieves 'America/Ciudad_Juarez' timezone(tz) Generates this error: UnknownTimeZoneError: 'America/Ciudad_Juarez' I checked in this excellent wikipedia page, this timezone is correct, canonical they say. I am surprised that a timezone obtained through timezonefinder is not recognised by pytz. How can I cleanly solve this? | This timezone was only very recently added, as Ciudad Juárez decided to align its time with the US for DST. See http://mm.icann.org/pipermail/tz-announce/2022-November/000076.html for the announcement. It looks like pytz has not been updated to include that change just yet. | 3 | 5 |
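If you can't wait for a pytz release, a sketch of two workarounds (both assume an IANA database recent enough to contain the new 2022g zone): refresh the first-party tzdata package, or skip pytz entirely via the standard library:

```python
# pip install --upgrade tzdata   # refreshes the IANA database used as fallback
from zoneinfo import ZoneInfo    # Python 3.9+

tz = ZoneInfo("America/Ciudad_Juarez")  # works once the local tzdata knows it
```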
74,795,811 | 2022-12-14 | https://stackoverflow.com/questions/74795811/get-version-of-python-poetry-project-from-inside-the-project | I have a python library packaged using poetry. I have the following requirements Inside the poetry project # example_library.py def get_version() -> str: # return the project version dynamically. Only used within the library def get_some_dict() -> dict: # meant to be exported and used by others return { "version": get_version(), "data": #... some data } In the host project, I want the following test case to pass no matter which version of example_library I'm using from example_library import get_some_dict import importlib.metadata version = importlib.metadata.version('example_library') assert get_some_dict()["version"] == version I have researched ideas about reading the pyproject.toml file but I'm not sure how to make the function read the toml file regardless of the library's location. The Poetry API doesn't really help either, because I just want to read the top level TOML file from within the library and get the version number, not create one from scratch. | Use importlib.metadata.version() from Python's own standard library: import importlib.metadata version = importlib.metadata.version('ProjectName') This is not specific to Poetry and will work for any project (library or application) whose metadata is readable. Make sure that ProjectName is installed. Where ProjectName can be your own library or a 3rd party library, it does not matter. And note that a so-called "editable" installation is also good enough. Documentation for importlib.metadata.version(). See also: https://stackoverflow.com/a/65594622 | 4 | 3 |
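Applied to the question's setup, get_version inside the library then reduces to a one-liner (using the question's placeholder name example_library, which must match the name in pyproject.toml):

```python
# example_library.py
import importlib.metadata

def get_version() -> str:
    # reads the installed distribution's metadata, so it works regardless
    # of where the package lives (editable installs included)
    return importlib.metadata.version("example_library")
```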
74,808,652 | 2022-12-15 | https://stackoverflow.com/questions/74808652/cant-get-the-changed-global-variable | t.py value = 0 def change_value(): global value value = 10 s.py import t from t import value t.change_value() print(f'test1: {t.value}') print(f'test2: {value}') Output test1: 10 test2: 0 Why isn't it returning the changed value in test2? | This is how import works in Python. You should read what the documentation says on the import statement carefully. Specifically, when you use the from import syntax on a module attribute, Python looks up the value of that attribute, and then "a reference to that value is stored in the local namespace". So here, at the moment of the import, t.value is 0; the name value in s.py becomes a reference to that 0, and it stays 0 no matter what later happens to t.value. Just to drive home the point about reference semantics vs. value semantics, consider what would happen if you had this in t.py instead: value = [] def change_value(): global value # not actually needed anymore value.append(10) def change_reference(): global value value = value + [10] Contrast the differences between calling t.change_value() and t.change_reference(). In the first case, both prints would be the same, while in the second, they would be different. This is because after calling change_value() we only have one list, and the name value in s.py is referring to it. With change_reference() we have two lists, the original (empty) one and a new one, and value in s.py is still referring to the original one. | 4 | 5 |
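The practical fix follows directly: keep a reference to the module and look the attribute up at use time, instead of copying the value at import time:

```python
# s.py
import t

t.change_value()
print(f'test1: {t.value}')  # 10 — the attribute lookup happens now, after the change
```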
74,802,561 | 2022-12-14 | https://stackoverflow.com/questions/74802561/how-to-create-regex-to-match-a-string-that-contains-only-hexadecimal-numbers-and | I am using a string that uses the following characters: 0-9 a-f A-F - > The mixture of the greater than and hyphen must be: -> --> Here is the regex that I have so far: [0-9a-fA-F\-\>]+ I tried these others using exclusion with ^ but they didn't work: [^g-zG-Z][0-9a-fA-F\-\>]+ ^g-zG-Z[0-9a-fA-F\-\>]+ [0-9a-fA-F\-\>]^g-zG-Z+ [0-9a-fA-F\-\>]+^g-zG-Z [0-9a-fA-F\-\>]+[^g-zG-Z] Here are some samples: "0912adbd->12d1829-->218990d" "ab2c8d-->82a921->193acd7" | Firstly, you don't need to escape - and > Here's the regex that worked for me: ^([0-9a-fA-F]*(->)*(-->)*)*$ Here's an alternative regex: ^([0-9a-fA-F]*(-+>)*)*$ What does the regex do? ^ matches the beginning of the string and $ matches the ending. * matches 0 or more instances of the preceding token Created a big () capturing group to match any token. [0-9a-fA-F] matches any character that is in the range. (->) and (-->) match only those given instances. Putting it into a code: import re regex = "^([0-9a-fA-F]*(->)*(-->)*)*$" re.match(re.compile(regex),"0912adbd->12d1829-->218990d") re.match(re.compile(regex),"ab2c8d-->82a921->193acd7") re.match(re.compile(regex),"this-failed->so-->bad") You can also convert it into a boolean: print(bool(re.match(re.compile(regex),"0912adbd->12d1829-->218990d"))) print(bool(re.match(re.compile(regex),"ab2c8d-->82a921->193acd7"))) print(bool(re.match(re.compile(regex),"this-failed->so-->bad"))) Output: True True False I recommend using regexr.com to check your regex. | 4 | 2 |
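A stricter sketch than the accepted patterns, in case you also want to reject strays like ---> or a trailing arrow (the accepted -+> variant would let three or more hyphens through):

```python
import re

# hex runs separated by exactly "->" or "-->", nothing else
pattern = re.compile(r'^[0-9a-fA-F]+(?:-{1,2}>[0-9a-fA-F]+)*$')

print(bool(pattern.match("0912adbd->12d1829-->218990d")))  # True
print(bool(pattern.match("ab2c8d-->82a921->193acd7")))     # True
print(bool(pattern.match("ab2c8d--->82a921")))             # False
print(bool(pattern.match("ab2c8d->")))                     # False
```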
74,803,526 | 2022-12-14 | https://stackoverflow.com/questions/74803526/why-dont-i-get-faster-run-times-with-threadpoolexecutor | In order to understand how threads work in Python, I wrote the following simple function: def sum_list(thelist:list, start:int, end:int): s = 0 for i in range(start,end): s += thelist[i]**3//10 return s Then I created a list and tested how much time it takes to compute its sum: LISTSIZE = 5000000 big_list = list(range(LISTSIZE)) start = time.perf_counter() big_sum=sum_list(big_list, 0, LISTSIZE) print(f"One thread: sum={big_sum}, time={time.perf_counter()-start} sec") It took about 2 seconds. Then I tried to partition the computation into threads, such that each thread computes the function on a subset of the list: THREADCOUNT=4 SUBLISTSIZE = LISTSIZE//THREADCOUNT start = time.perf_counter() with concurrent.futures.ThreadPoolExecutor(THREADCOUNT) as executor: futures = [executor.submit(sum_list, big_list, i*SUBLISTSIZE, (i+1)*SUBLISTSIZE) for i in range(THREADCOUNT)] big_sum = 0 for res in concurrent.futures.as_completed(futures): # return each result as soon as it is completed: big_sum += res.result() print(f"{THREADCOUNT} threads: sum={big_sum}, time={time.perf_counter()-start} sec") Since I have a 4-core CPU, I expected it to run 4 times faster. But it did not: it ran in about 1.8 seconds on my Ubuntu machine (on my Windows machine, with 8 cores, it ran even slower than the single-thread version: about 2.2 seconds). Is there a way to use ThreadPoolExecutor (or another threads-based mechanism in Python) so that I can compute this function faster? | The problem is that the function you are trying to make faster is CPU-bound and the Python Global Interpreter Lock (GIL) prevents any performance gain from parallelisation of such code. In Python, threads are wrappers around genuine OS threads. However, in order to avoid race conditions due to concurrent execution, only one thread can access the Python interpreter to execute bytecode at a time. This restriction is enforced by a lock called the GIL. Thus in Python, true multithreading cannot be achieved and multiprocessing should be used instead. However, note that the GIL is not locked by IO operations (file reading, networking, etc.) and some library code (numpy, etc.) so these operations can still benefit from Python multithreading. The function sum_list uses neither of those operations so it will not benefit from Python multithreading. You can use ProcessPoolExecutor to effectively get parallelism but this may copy the input list in your case. Multiprocessing is equivalent to launching multiple independent Python interpreters, thus the GILs (one per interpreter) are not an issue anymore. However, multiprocessing incurs performance penalties during inter-process communication. | 6 | 7 |
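A sketch of the ProcessPoolExecutor swap the answer suggests. To sidestep the "may copy the input list" caveat, this version recomputes from the range instead of shipping big_list to the workers; the worker must be a module-level function so it can be pickled, and the __main__ guard is required on platforms that spawn:

```python
import concurrent.futures

def sum_range(start: int, end: int) -> int:
    return sum(i**3 // 10 for i in range(start, end))

if __name__ == "__main__":
    LISTSIZE, PROCS = 5_000_000, 4
    step = LISTSIZE // PROCS
    with concurrent.futures.ProcessPoolExecutor(PROCS) as executor:
        parts = executor.map(sum_range,
                             range(0, LISTSIZE, step),
                             range(step, LISTSIZE + 1, step))
    print(sum(parts))
```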
74,797,716 | 2022-12-14 | https://stackoverflow.com/questions/74797716/how-do-i-get-fastapi-to-do-ssr-for-vue-3 | According to this documentation for Vue's SSR, it is possible to use node.js to render an app and return it using an express server. Is it possible to do the same with FastAPI? Or is using Jinja2 templates or SPA the only solution? Problems: No SPA: To help with SEO No SSG: Too many pages will be generated. Some need to be generated dynamically. No Jinja2/Python Templates: Node modules aren't built, bundled and served. All modules have to be served from a remote package CDN. I have a feeling that maybe changing the Vue 3 delimiters and then building the project and serving the files as Jinja2 templates is the solution, but I'm not sure how it would work with Vue's routers. I know the /dist folder can be served on the default route, and then a catchall can be used to display files that do exist. Possible Solution @app.get("/", response_class=FileResponse) def read_index(request: Request): index = f"{static_folder}/index.html" return FileResponse(index) @app.get("/{catchall:path}", response_class=FileResponse) def read_catchall(request: Request): path = request.path_params["catchall"] file = f"{static_folder}/{path}" if os.path.exists(file): return FileResponse(file) index = f"{static_folder}/index.html" return FileResponse(index) Questions If there is a way to do SSR with FastAPI and Vue 3, what is it? If there is no direct way, how do I combine Vue's built /dist with Jinja2 templates to serve dynamic pages? | There is no direct way: Vue's SSR needs a JavaScript runtime, so FastAPI cannot render Vue components itself. The usual pattern is to run a Node-based SSR framework such as Nuxt.js or Quasar (in SSR mode) as the rendering layer, and keep FastAPI as a pure API backend behind the same reverse proxy. (Gridsome, often mentioned alongside these, is a static-site generator, which your constraints rule out.) If you want to stay Node-free at runtime, you are back to the two options you listed: Jinja2 templates over the built assets, or serving /dist as an SPA with a catch-all route like your possible solution. | 4 | 1 |
74,795,315 | 2022-12-14 | https://stackoverflow.com/questions/74795315/processing-large-number-of-jsons-12tb-with-databricks | I am looking for guidance/best practice to approach a task. I want to use Azure-Databricks and PySpark. Task: Load and prepare data so that it can be efficiently/quickly analyzed in the future. The analysis will involve summary statistics, exploratory data analysis and maybe simple ML (regression). The analysis part is not clearly defined yet, so my solution needs flexibility in this area. Data: session level data (12TB) stored in 100 000 single line JSON files. JSON schema is nested, includes arrays. JSON schema is not uniform but new fields are added over time - data is a time-series. Overall, the task is to build an infrastructure so the data can be processed efficiently in the future. There will be no new data coming in. My initial plan was to: Load data into blob storage Process data using PySpark flatten by reading into data frame save as parquet (alternatives?) Store in a DB so the data can be quickly queried and analyzed I am not sure which Azure solution (DB) would work here Can I skip this step when data is stored in efficient format (e.g. parquet)? Analyze the data using PySpark by querying it from DB (or from blob storage when in parquet) Does this sound reasonable? Does anyone have materials/tutorials that follow a similar process so I could use them as blueprints for my pipeline? | Yes, it sounds reasonable, and in fact it's quite a standard architecture (often referred to as a lakehouse). The usual implementation approach is the following: JSON data loaded into blob storage is consumed using Databricks Auto Loader, which provides an efficient way of ingesting only new data (since the previous run). You can trigger the pipeline regularly, for example nightly, or run it continuously if data is arriving all the time. Auto Loader also handles schema evolution of the input data. Processed data is better stored as Delta Lake tables, which provide better performance than "plain" Parquet due to the use of additional information in the transaction log, making it possible to efficiently access only the necessary data. (Delta Lake is built on top of Parquet, but has more capabilities.) Processed data can then be accessed via Spark code, or via Databricks SQL (the latter can be more efficient for reporting, etc., as it's heavily optimized for BI workloads). Due to the big amount of data, storing it in some "traditional" database may not be very efficient, or may be very costly. P.S. I would recommend looking at implementing this with Delta Live Tables, which may simplify the development of your pipelines. Also, you may have access to Databricks Academy, which has introductory courses about lakehouse architecture and data engineering patterns. If you don't have access to it, you can at least look at the Databricks courses published on GitHub. | 4 | 2 |
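A minimal Auto Loader → Delta sketch of the ingest step described above (all paths and table names are placeholders; spark is the ambient Databricks session). availableNow processes the whole backlog once and then stops, which fits the "no new data coming in" constraint:

```python
raw = (spark.readStream
       .format("cloudFiles")                       # Auto Loader
       .option("cloudFiles.format", "json")
       .option("cloudFiles.schemaLocation", "/mnt/lake/_schemas/sessions")
       .load("/mnt/lake/raw/sessions/"))

(raw.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/lake/_checkpoints/sessions")
    .trigger(availableNow=True)                    # one pass over the backlog
    .toTable("bronze.sessions"))
```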
74,747,889 | 2022-12-9 | https://stackoverflow.com/questions/74747889/polars-map-elements-performance-for-custom-functions | I've enjoyed with Polars significant speed-ups over Pandas, except one case. I'm newbie to Polars, so it could be just my wrong usage. Anyway here is the toy-example: on single column I need to apply custom function in my case it is parse from probablepeople library (https://github.com/datamade/probablepeople) but problem is generic. Plain pandas apply has similar runtime like Polars, but pandas with parallel_apply from (https://github.com/nalepae/pandarallel) gets speed-up proportional to number of cores. It looks for me that Polars uses only single core for custom functions,or I miss something? If I use Polars correctly, maybe there is a possibility to create tool like pandaralell for Polars? !pip install probablepeople !pip install pandarallel import pandas as pd import probablepeople as pp import polars as pl from pandarallel import pandarallel AMOUNT = 1_000_000 #Pandas: df = pd.DataFrame({'a': ["Mr. Joe Smith"]}) df = df.loc[df.index.repeat(AMOUNT)].reset_index(drop=True) df['b'] = df['a'].apply(pp.parse) #Pandarallel: pandarallel.initialize(progress_bar=True) df['b_multi'] = df['a'].parallel_apply(pp.parse) #Polars: dfp = pl.DataFrame({'a': ["Mr. Joe Smith"]}) dfp = dfp.select(pl.all().repeat_by(AMOUNT).explode()) dfp = dfp.with_columns(pl.col('a').map_elements(pp.parse).alias('b')) | It seems that pandarallel uses multiprocessing (Pool.map_async) to run tasks. It also has its own custom progress bar implementation. A "simple" way I've found to do this is: Pool.imap() (map_async cannot be used with track() as it consumes the iterable) rich.progress.track() (which is also bundled with pip) for a progress bar (tqdm is also popular) import multiprocessing import polars as pl import probablepeople as pp from pip._vendor.rich.progress import track def map_elements_parallel(expr, function, chunksize=8): def _run(function, values): with multiprocessing.get_context("spawn").Pool() as pool: return pl.Series(pool.imap(function, track(values), chunksize=chunksize)) return expr.map_batches(lambda col: _run(function, col)) if __name__ == "__main__": df = pl.DataFrame({ "name": ["Mr. Joe Smith", "Mrs. I & II Alice Random"] }) df = pl.concat([df] * 500_000) df = df.with_columns(pp = pl.col("name").pipe(map_elements_parallel, pp.parse) ) print(df) Working... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 shape: (1_000_000, 2) ┌──────────────────────────┬───────────────────────────────────┐ │ name ┆ pp │ │ --- ┆ --- │ │ str ┆ list[list[str]] │ ╞══════════════════════════╪═══════════════════════════════════╡ │ Mr. Joe Smith ┆ [["Mr.", "PrefixMarital"], ["Joe… │ │ Mrs. I & II Alice Random ┆ [["Mrs.", "PrefixMarital"], ["I"… │ │ … ┆ … │ Notes: spawn must be used to start multiprocessing with Polars .pipe() is used to run our "helper function" .map_batches() is used to pass the "column" to the "custom function" Performance multiprocessing is quite a complex topic. multiprocessing.Pool: What's the difference between map_async and imap? multiprocessing: Understanding logic behind `chunksize` Timings seem to vary depending on input size, the specific task being performed, and your specific hardware. These are the times I got for the example above (on an 8-core system). 
name                 duration (sec)
map_elements         77.6496
pandarallel          13.5538
imap (chunksize=1)   52.4519
imap (chunksize=8)   33.1072
map_async            31.5265
As pp.parse returns nested lists, I tried dumping to json and using .str.json_decode() to see if there was any difference. def dumps(value): return json.dumps(pp.parse(value)) .pipe(map_elements_parallel, dumps).str.json_decode()
name                 duration (sec)
imap (chunksize=1)   36.0517
imap (chunksize=8)   15.0513
map_async            14.0317 | 4 | 8 |
74,720,194 | 2022-12-7 | https://stackoverflow.com/questions/74720194/polars-counting-elements-in-list-column | I've have dataframe with column b with list elements, I need to create column c that counts number elements in list for every row. Here is toy example in Pandas: import pandas as pd df = pd.DataFrame({'a': [1,2,3], 'b':[[1,2,3], [2], [5,0]]}) a b 0 1 [1, 2, 3] 1 2 [2] 2 3 [5, 0] df.assign(c=df['b'].str.len()) a b c 0 1 [1, 2, 3] 3 1 2 [2] 1 2 3 [5, 0] 2 Here is my equivalent in Polars: import polars as pl dfp = pl.DataFrame({'a': [1,2,3], 'b':[[1,2,3], [2], [5,0]]}) dfp.with_columns(pl.col('b').map_elements(lambda x: len(x)).alias('c')) I've a feeling that .map_elements(lambda x: len(x)) is not optimal. Is a better way to do it in Polars? | You can use .list.len() df.with_columns(c = pl.col("b").list.len()) shape: (3, 3) ┌─────┬───────────┬─────┐ │ a ┆ b ┆ c │ │ --- ┆ --- ┆ --- │ │ i64 ┆ list[i64] ┆ u32 │ ╞═════╪═══════════╪═════╡ │ 1 ┆ [1, 2, 3] ┆ 3 │ │ 2 ┆ [2] ┆ 1 │ │ 3 ┆ [5, 0] ┆ 2 │ └─────┴───────────┴─────┘ | 8 | 12 |
74,714,300 | 2022-12-7 | https://stackoverflow.com/questions/74714300/paramspec-for-a-pre-defined-function-without-using-generic-callablep | I want to write a wrapper function for a known function, like def wrapper(*args, **kwargs) foo() return known_function(*args, **kwargs) How can i add type-annotations to wrapper, such that it exactly follows the type annotations of known_function I have looked at ParamSpec, but it appears to only work when the wrapper-function is generic and takes the inner function as argument. P = ParamSpec("P") T = TypeVar('T') def wrapper(func_arg_that_i_dont_want: Callable[P,T], *args: P.args, **kwargs: P.kwargs) foo() return known_function(*args, **kwargs) Can i force the P to only be valid for known_function, without linking it to a Callable-argument? | PEP 612 as well as the documentation of ParamSpec.args and ParamSpec.kwargs are pretty clear on this: These “properties” can only be used as the annotated types for *args and **kwargs, accessed from a ParamSpec already in scope. - Source: PEP 612 ("The components of a ParamSpec" -> "Valid use locations") Both attributes require the annotated parameter to be in scope. - Source: python.typing module documentation (class typing.ParamSpec -> args/kwargs) They [parameter specifications] are only valid when used in Concatenate, or as the first argument to Callable, or as parameters for user-defined Generics. - Source: python.typing module documentation (class typing.ParamSpec, second paragraph) So no, you cannot use parameter specification args/kwargs, without binding it a concrete Callable in the scope you want to use them in. I question why you would even want that. If you know that wrapper will always call known_function and you want it to (as you said) have the exact same arguments, then you just annotate it with the same arguments. Example: def known_function(x: int, y: str) -> bool: return str(x) == y def wrapper(x: int, y: str) -> bool: # other things... return known_function(x, y) If you do want wrapper to accept additional arguments aside from those passed on to known_function, then you just include those as well: def known_function(x: int, y: str) -> bool: return str(x) == y def wrapper(a: float, x: int, y: str) -> bool: print(a ** 2) return known_function(x, y) If your argument is that you don't want to repeat yourself because known_function has 42 distinct and complexly typed parameters, then (with all due respect) the design of known_function should be covered in copious amounts gasoline and set ablaze. If you insist to dynamically associate the parameter specifications (or are curious about possible workarounds for academic reasons), the following is the best thing I can think of. You write a protected decorator that is only intended to be used on known_function. (You could even raise an exception, if it is called with anything else to protect your own sanity.) You define your wrapper inside that decorator (and add any additional arguments, if you want any). Thus, you'll be able to annotate its *args/**kwargs with the ParamSpecArgs/ParamSpecKwargs of the decorated function. In this case you probably don't want to use functools.wraps because the function you receive out of that decorator is probably intended not to replace known_function, but stand alongside it. 
Here is a full working example: from collections.abc import Callable from typing import Concatenate, ParamSpec, TypeVar P = ParamSpec("P") T = TypeVar("T") def known_function(x: int, y: str) -> bool: """Does thing XY""" return str(x) == y def _decorate(f: Callable[P, T]) -> Callable[Concatenate[float, P], T]: if f is not known_function: # type: ignore[comparison-overlap] raise RuntimeError("This is an exclusive decorator.") def _wrapper(a: float, /, *args: P.args, **kwargs: P.kwargs) -> T: """Also does thing XY, but first does something else.""" print(a ** 2) return f(*args, **kwargs) return _wrapper wrapper = _decorate(known_function) if __name__ == "__main__": print(known_function(1, "2")) print(wrapper(3.14, 10, "10")) Output as expected: False 9.8596 True Adding reveal_type(wrapper) to the script and running mypy gives the following: Revealed type is "def (builtins.float, x: builtins.int, y: builtins.str) -> builtins.bool" PyCharm also gives the relevant suggestions regarding the function signature, which it infers from having known_function passed into _decorate. But again, just to be clear, I don't think this is good design. If your "wrapper" is not generic, but instead always calls the same function, you should explicitly annotate it, so that its parameters correspond to that function. After all: Explicit is better than implicit. - Zen of Python, line 2 | 9 | 3 |
74,716,259 | 2022-12-7 | https://stackoverflow.com/questions/74716259/the-seaborn-styles-shipped-by-matplotlib-are-deprecated-since-3-6 | The seaborn styles shipped by Matplotlib are deprecated since 3.6, as they no longer correspond to the styles shipped by seaborn. However, they will remain available as 'seaborn-v0_8-<style>'. Alternatively, directly use the seaborn API instead. I have tried this: # use seaborn style plt.style.use("seaborn") but it is deprecated, and I want to remove this warning when I use the cmd in windows | This warning is telling you that seaborn styles in matplotlib do not match current seaborn styles, since the latest have been updated. This is why you should set the style as follows: plt.style.use("seaborn-v0_8") You can specify a theme by replacing <style> with one of the following: bright colorblind dark dark-palette darkgrid deep muted notebook paper pastel poster talk ticks white whitegrid Just like this: plt.style.use("seaborn-v0_8-whitegrid") Alternatively, if you want to use the latest seaborn styles, use their library directly. Edit: adding missing themes for completeness. | 14 | 30 |
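To see exactly which seaborn style names your matplotlib version still ships (and so avoid guessing the suffix):

```python
import matplotlib.pyplot as plt

print([s for s in plt.style.available if s.startswith("seaborn")])
```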
74,751,254 | 2022-12-10 | https://stackoverflow.com/questions/74751254/removing-all-duplicate-images-with-different-filenames-from-a-directory | I am trying to iterate through a folder and delete any file that is a duplicate image (but different name). After running this script all files get deleted except for one. There are at least a dozen unique ones out of about 5,000. Any help understanding why this is happening would be appreciated. import os import cv2 directory = r'C:\Users\Grid\scratch' for filename in os.listdir(directory): a=directory+'\\'+filename n=(cv2.imread(a)) q=0 for filename in os.listdir(directory): b=directory+'\\'+filename m=(cv2.imread(b)) comparison = n == m equal_arrays = comparison.all() if equal_arrays==True and q==1: os.remove(b) q=1 | There are a number of issues with your code, but I am going to suggest an alternate strategy. A more efficient way would be to collect md5 or sha1 hashes of the files, into a set or some other container, while iterating the directory. Then when you calculate the hashes you can check if that particular hash already exists in the collection which would indicate the file is a duplicate of one that you have already seen and therefore should be removed. Additional pointers: You only need to get the directory contents once. So there should be no reason to call os.listdir() more than once on the same directory. for example: import hashlib import os hashes = set() for filename in os.listdir(directory): path = os.path.join(directory, filename) digest = hashlib.sha1(open(path,'rb').read()).digest() if digest not in hashes: hashes.add(digest) else: os.remove(path) You can use a more secure hash if you would like but the chances of encountering a collision are astronomically low. | 7 | 9 |
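One refinement to the accepted answer if some files are large: hash in fixed-size chunks instead of slurping the whole file into memory. A sketch (on Python 3.11+ hashlib.file_digest does the same thing; the manual loop below works everywhere):

```python
import hashlib

def file_hash(path: str) -> bytes:
    h = hashlib.sha1()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 20), b''):  # 1 MiB at a time
            h.update(chunk)
    return h.digest()
```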
74,788,529 | 2022-12-13 | https://stackoverflow.com/questions/74788529/notimplementederror-you-should-not-call-an-overloaded-function | @overload def setSize(self,size:tuple[int|str])->None: ''' Set image size (width,height) ''' try:self.options.append(f"width=\"{str(size[0])}\" height=\"{str(size[1])}\"") except IndexError:print("Error reading the size, aborting") @overload def setSize(self,width:int|str,height:int|str)->None: ''' Set image Size ''' self.setSize((width,height)) This is my code and I called this function as var.setSize((500,500)) which would normally call the top one but I got this error: NotImplementedError: You should not call an overloaded function. A series of @overload-decorated functions outside a stub module should always be followed by an implementation that is not @overload-ed. | The error message itself is the explanation: a series of @overload-decorated functions outside a stub module must be followed by exactly one implementation that is not @overload-ed. typing.overload exists purely for static type checkers — the decorated definitions only declare the alternative signatures, their bodies are never executed, and calling one at runtime raises exactly the NotImplementedError you are seeing. Move your runtime logic (the try/except and the tuple unpacking) into a single undecorated setSize defined after the overload declarations, and dispatch on the arguments yourself. | 3 | 4 |
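Applied to the question's class, the fixed pattern looks like this sketch (Img and the options list mirror the snippets above; the overload bodies shrink to ..., and one real method does the dispatching):

```python
from typing import overload

class Img:
    def __init__(self) -> None:
        self.options: list[str] = []

    @overload
    def setSize(self, size: tuple[int | str, int | str]) -> None: ...
    @overload
    def setSize(self, width: int | str, height: int | str) -> None: ...

    def setSize(self, *args):  # the only body that ever runs
        width, height = args[0] if len(args) == 1 else args
        self.options.append(f'width="{width}" height="{height}"')
```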
74,724,128 | 2022-12-8 | https://stackoverflow.com/questions/74724128/is-there-a-way-to-get-p-d-q-p-d-q-params-from-statsforecast-autoarima | minimal example: from statsforecast import StatsForecast from statsforecast.models import AutoARIMA import pandas as pd df = pd.read_csv('https://datasets-nixtla.s3.amazonaws.com/air-passengers.csv') sf = StatsForecast( models = [AutoARIMA(season_length = 12)], freq = 'M', n_jobs=-1, verbose=True ) sf.fit(df) How to get the parameters of the fitted model ? I know this is possible using pmdarima package, but pmdarima is way too slow and runs out of memory on large data. statsforecast seems promising, but only if there is a way to get the params | From this answer, the solution would be: sf.fitted_[0][0].model_['arma'] which will output a tuple of 7 values. I don't know the exact mapping of parameters to tuple values, but from this line it appears to be: (p, d, q, P, D, Q, constant) | 4 | 5 |
74,740,640 | 2022-12-9 | https://stackoverflow.com/questions/74740640/install-postgresql-extension-before-pytest-set-up-database-for-django | I need to install citext extension to my postgresql database for django project. For the project itself it went smoothly and works great via migrations, but my pytest is configured with option --no-migrations, so pytest create database without running migrations. How can i make pytest to install citext postgres extension before tables are created? Currently i'm getting - django.db.utils.ProgrammingError: type "citext" does not exist while pytest trying to create table auth_users sql = 'CREATE TABLE "auth_user" ("id" serial NOT NULL PRIMARY KEY, "password" varchar(128) NOT NULL, "last_login" timestamp ...T NULL, "is_active" boolean NOT NULL, "date_joined" timestamp with time zone NOT NULL, "email" citext NOT NULL UNIQUE)', params = None ignored_wrapper_args = (False, {'connection': <django.contrib.gis.db.backends.postgis.base.DatabaseWrapper object at 0x7fb313bb0100>, 'cursor': <django.db.backends.utils.CursorWrapper object at 0x7fb30d9f8580>}) I tried to use django_db_setup fixture, but i did not figure out how to change it, because something like this @pytest.fixture(scope="session") def django_db_setup( request, django_test_environment, django_db_blocker, django_db_use_migrations, django_db_keepdb, django_db_createdb, django_db_modify_db_settings, ): """Top level fixture to ensure test databases are available""" from django.test.utils import setup_databases, teardown_databases setup_databases_args = {} if not django_db_use_migrations: from pytest_django.fixtures import _disable_native_migrations _disable_native_migrations() if django_db_keepdb and not django_db_createdb: setup_databases_args["keepdb"] = True with django_db_blocker.unblock(): from django.db import connection cursor = connection.cursor() cursor.execute("CREATE EXTENSION IF NOT EXISTS citext;") db_cfg = setup_databases( verbosity=request.config.option.verbose, interactive=False, **setup_databases_args ) def teardown_database(): with django_db_blocker.unblock(): try: teardown_databases(db_cfg, verbosity=request.config.option.verbose) except Exception as exc: request.node.warn( pytest.PytestWarning( "Error when trying to teardown test databases: %r" % exc ) ) if not django_db_keepdb: request.addfinalizer(teardown_database) did not help me | Annoyingly, there are no appropriate hooks between setting up the database and loading the appropriate postgresql extensions. You can work around the issue by copying/modifying the pytest-django code that disables migrations and running your code instead of the upstream code. @pytest.fixture(scope="session") def django_migration_disabler() -> None: """Disable migrations when running django tests. This copies/alters the behavior of pytest_django.fixtures._disable_migrations, which is called when pytest is invoked with --no-migrations. See: https://github.com/pytest-dev/pytest-django/blob/v4.5.2/pytest_django/fixtures.py#L260 We do this instead of invoking with --no-migrations because constructing the database without migrations fails to create necessary postgres extensions like citext and uuid-ossp. We then override the django_db_setup fixture and ensure that this fixture is called before the parent django_db_setup fixture. 
""" from django.conf import settings from django.core.management.commands import migrate from django.db import connections class DisableMigrations: def __contains__(self, item: str) -> bool: return True def __getitem__(self, item: str) -> None: return None settings.MIGRATION_MODULES = DisableMigrations() class MigrateSilentCommand(migrate.Command): def handle(self, *args, **options): options["verbosity"] = 0 database = options["database"] connection = connections[database] with connection.cursor() as cursor: cursor.execute('CREATE EXTENSION IF NOT EXISTS "citext";') return super().handle(*args, **options) migrate.Command = MigrateSilentCommand # type: ignore @pytest.fixture(scope="session") def django_db_use_migrations() -> bool: """Force pytest-django to use migrations. This is necessary because we disable migrations in the django_migration_disabler and the existing pytest-django mechanisms would override our disabling mechanism with their own, which would fail to create the necessary extensions in postgres. """ return True @pytest.fixture(scope="session") def django_db_setup(django_migration_disabler, django_db_setup, django_db_blocker): """Override django_db_setup ensuring django_migration_disabler is loaded.""" pass | 3 | 4 |
74,711,405 | 2022-12-7 | https://stackoverflow.com/questions/74711405/importerror-cannot-import-name-getargspec-from-inspect-c-users-swapn-appd | File "f:\drug-traceability-blockchain-maddy\src\app.py", line 2, in <module> from web3 import Web3,HTTPProvider File "C:\Users\Swapn\AppData\Local\Programs\Python\Python311\Lib\site-packages\web3\__init__.py", line 6, in <module> from eth_account import ( File "C:\Users\Swapn\AppData\Local\Programs\Python\Python311\Lib\site-packages\eth_account\__init__.py", line 1, in <module> from eth_account.account import ( File "C:\Users\Swapn\AppData\Local\Programs\Python\Python311\Lib\site-packages\eth_account\account.py", line 59, in <module> from eth_account.messages import ( File "C:\Users\Swapn\AppData\Local\Programs\Python\Python311\Lib\site-packages\eth_account\messages.py", line 26, in <module> from eth_account._utils.structured_data.hashing import ( File "C:\Users\Swapn\AppData\Local\Programs\Python\Python311\Lib\site-packages\eth_account\_utils\structured_data\hashing.py", line 9, in <module> from eth_abi import ( File "C:\Users\Swapn\AppData\Local\Programs\Python\Python311\Lib\site-packages\eth_abi\__init__.py", line 6, in <module> from eth_abi.abi import ( # NOQA File "C:\Users\Swapn\AppData\Local\Programs\Python\Python311\Lib\site-packages\eth_abi\abi.py", line 1, in <module> from eth_abi.codec import ( File "C:\Users\Swapn\AppData\Local\Programs\Python\Python311\Lib\site-packages\eth_abi\codec.py", line 16, in <module> from eth_abi.decoding import ( File "C:\Users\Swapn\AppData\Local\Programs\Python\Python311\Lib\site-packages\eth_abi\decoding.py", line 14, in <module> from eth_abi.base import ( File "C:\Users\Swapn\AppData\Local\Programs\Python\Python311\Lib\site-packages\eth_abi\base.py", line 7, in <module> from .grammar import ( File "C:\Users\Swapn\AppData\Local\Programs\Python\Python311\Lib\site-packages\eth_abi\grammar.py", line 4, in <module> import parsimonious File "C:\Users\Swapn\AppData\Local\Programs\Python\Python311\Lib\site-packages\parsimonious\__init__.py", line 9, in <module> from parsimonious.grammar import Grammar, TokenGrammar File "C:\Users\Swapn\AppData\Local\Programs\Python\Python311\Lib\site-packages\parsimonious\grammar.py", line 14, in <module> from parsimonious.expressions import (Literal, Regex, Sequence, OneOf, File "C:\Users\Swapn\AppData\Local\Programs\Python\Python311\Lib\site-packages\parsimonious\expressions.py", line 9, in <module> from inspect import getargspec ImportError: cannot import name 'getargspec' from 'inspect' (C:\Users\Swapn\AppData\Local\Programs\Python\Python311\Lib\inspect.py) Please help me solve this error. This is a blockchain project which I downloaded from GitHub. After that I ran npm install, npm start, truffle compile, and truffle migrate, and for the project's homepage I ran app.py, at which point I received this error. | Try uninstalling web3 using pip uninstall web3 and install the latest version from GitHub using pip install git+https://github.com/ethereum/web3.py.git | 6 | 5 |
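For context: inspect.getargspec was removed in Python 3.11, and older parsimonious releases (a transitive dependency of web3 via eth-abi, as the traceback shows) still import it. A hedged alternative fix, assuming a newer parsimonious release has dropped that import, is to upgrade the dependency chain in place rather than reinstalling web3 from source:

    pip install --upgrade parsimonious
    pip install --upgrade web3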
74,785,215 | 2022-12-13 | https://stackoverflow.com/questions/74785215/how-to-yield-a-db-connection-in-a-python-sqlalchemy-function-similar-to-how-it-i | In FastAPI I had the following function that I used to open and close a DB session: def get_db(): try: db = SessionLocal() yield db finally: db.close() And within the routes of my API I would do something like that: @router.get("/") async def read_all_events(user: dict = Depends(get_current_user), db: Session = Depends(get_db)): logger.info("API read_all_events") if user is None: raise http_user_credentials_not_valid_exception() return db.query(models.Events).all() You can see that I am injecting the session in the API call. So now I want to do something similar within a python function: def do_something(): #get person data from database #play with person data #save new person data in database #get cars data from database So I am wondering whether I should use the same approach as in FastAPI (I do not know how) or whether I should just be opening and closing the connection manually like this: def do_something(): try: db = SessionLocal() yield db #get person data from database #play with person data #save new person data in database #get cars data from database finally: db.close() Thanks | The usage of yield in this case is so that Depends(get_db) returns the db session instance, so that it can be used in the fastapi route, and as soon as the fastapi route returns a response to the user, the finally clause (db.close()) will be executed. This is good because every request will be using a separate db session, and db connections will be closed after every route response. If you want to use the db session normally in a function, just get the db instance using db = SessionLocal(), and proceed to use the db instance in the function. Example: def do_something(): db = SessionLocal() event = db.query(models.Events).first() db.delete(event) db.commit() db.close() | 3 | 3 |
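A minimal sketch of the second option made safer with contextlib, assuming the same SessionLocal factory and models module from the question, so the session is always closed even if an exception is raised mid-function:

    from contextlib import contextmanager

    @contextmanager
    def db_session():
        db = SessionLocal()  # assumes the SessionLocal factory from the question
        try:
            yield db
        finally:
            db.close()  # runs even if the body of the with-block raises

    def do_something():
        with db_session() as db:
            events = db.query(models.Events).all()
            # ... play with the data, commit, run further queries ...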
74,783,807 | 2022-12-13 | https://stackoverflow.com/questions/74783807/making-tqdm-write-to-log-files | tqdm is a nice python library to keep track of progress through an iterable. Its default mode of operation is to repeatedly clear a line and redraw it with a carriage return, but this produces quite nasty output when combined with logging. Is there a way I can get this to write to log files periodically rather than using this print? The best I've got is my own hacky implementation: def my_tqdm(iterable): "Like tqdm but using logging. Include estimated time and time taken." start = time.time() for i, item in enumerate(iterable): elapsed = time.time() - start rate = elapsed / (i + 1) estimated = rate * len(iterable) - elapsed num_items = len(iterable) LOGGER.info( "Processed %d of %d items (%.1f%%) in %.1fs (%.1fs remaining, %.1f s/item)", i, num_items, i / num_items * 100, elapsed, estimated, rate, ) yield item But it'd be better if I could do this with tqdm itself so that people don't moan at me in code reviews. | You could redirect the outputs of the TQDM progress bar to a null device (e.g. /dev/null), and manually print/log the status bar whenever you want - either on every iteration, or at a certain interval. For example: import os import time import logging from tqdm import tqdm LOG_INTERVAL = 5 logging.basicConfig(level=logging.INFO) logger = logging.getLogger('tqdm_logger') progress_bar = tqdm(range(20), file=open(os.devnull, 'w')) for i in progress_bar: # do something meaningful instead... time.sleep(0.1) if progress_bar.n % LOG_INTERVAL == 0: logger.info(str(progress_bar)) This code block will produce the following outputs: INFO:tqdm_logger: 0%| | 0/20 [00:00<?, ?it/s] INFO:tqdm_logger: 25%|██▌ | 5/20 [00:00<00:01, 9.59it/s] INFO:tqdm_logger: 50%|█████ | 10/20 [00:01<00:01, 9.54it/s] INFO:tqdm_logger: 75%|███████▌ | 15/20 [00:01<00:00, 9.54it/s] | 4 | 9 |
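If the goal is the reverse of the above (keep the live console bar but stop ordinary log lines from clobbering it), tqdm ships a helper for that. A sketch, assuming a reasonably recent tqdm version that includes tqdm.contrib.logging:

    import logging
    from tqdm import tqdm
    from tqdm.contrib.logging import logging_redirect_tqdm

    LOG = logging.getLogger(__name__)
    logging.basicConfig(level=logging.INFO)

    with logging_redirect_tqdm():  # routes log records through tqdm.write while active
        for i in tqdm(range(100)):
            if i % 25 == 0:
                LOG.info("checkpoint at %d", i)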
74,748,826 | 2022-12-9 | https://stackoverflow.com/questions/74748826/how-to-visualize-cluster-boundaries | I generated several datasets, and using classifiers, I predicted the distribution of clusters. I need to draw boundaries between clusters on the chart. In the form of lines or in the form of filled areas - it does not matter. Please let me know if there is any way to do this. My code: import numpy as np import matplotlib.pyplot as plt from sklearn.neighbors import KNeighborsClassifier from sklearn.datasets import make_moons, make_circles from sklearn.model_selection import train_test_split n_sample = 2000 def make_square(n_sample): data=np.array([0,[]]) data[0] = np.random.sample((n_sample,2)) for i in range(n_sample): if data[0][i][0] > 0.5 and data[0][i][1] > 0.5 or data[0][i][0] < 0.5 and data[0][i][1] < 0.5: data[1].append(1) else: data[1].append(0) return data datasets = [ make_circles(n_samples=n_sample, noise=0.09, factor=0.5), make_square(n_sample), make_moons(n_samples=n_sample, noise=0.12), ] ks=[] for data in datasets: X,y = data[0],data[1] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=33) classifier = KNeighborsClassifier(n_neighbors=1) classifier.fit(X_train, y_train) y_pred = classifier.predict(X_test) acc = classifier.score(X_test, y_test) accs = [] for i in range(1, 8): knn = KNeighborsClassifier(n_neighbors=i) knn.fit(X_train, y_train) pred_i = knn.predict(X_test) acc0 = knn.score(X_test, y_test) accs.append(acc0) plt.figure(figsize=(12, 6)) plt.plot(range(1, 8), accs, color='red', linestyle='dashed', marker='o', markerfacecolor='blue', markersize=10) plt.title('accs Score K Value') plt.xlabel('K Value') plt.ylabel('accs Score') print("Max Score:", max(accs), "k=",accs.index(max(accs))+1) ks.append(accs.index(max(accs))+1) for i in range(3): data = datasets[i] k = ks[i] X,y = data[0],data[1] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=33) classifier = KNeighborsClassifier(n_neighbors=k) classifier.fit(X_train, y_train) y_pred = classifier.predict(X_test) plt.figure(figsize=(9,9)) plt.title("Test") plt.scatter(X_test[:,0], X_test[:,1], c=y_test) plt.figure(figsize=(9,9)) plt.title("Predict") plt.scatter(X_test[:,0], X_test[:,1], c=y_pred) Example output: enter image description here enter image description here | scikit-learn 1.1 introduced the DecisionBoundaryDisplay to assist with this sort of task. Following the use of make_moons and the KNeighborsClassifier in the question, we can fit the classifier on the dataset, invoke the DecisionBoundaryDisplay.from_estimator() method, then scatter the X data on the returned axis: import matplotlib.pyplot as plt from sklearn.datasets import make_moons from sklearn.neighbors import KNeighborsClassifier from sklearn.inspection import DecisionBoundaryDisplay X, y = make_moons(noise=0.2) clf = KNeighborsClassifier().fit(X, y) disp = DecisionBoundaryDisplay.from_estimator(clf, X, response_method="predict", alpha=0.3) disp.ax_.scatter(X[:, 0], X[:, 1], c=y) plt.show() Resulting in something like this: | 4 | 4 |
74,718,716 | 2022-12-7 | https://stackoverflow.com/questions/74718716/how-to-get-my-vim-and-macvim-to-find-python3 | When I use a plugin that requires python, it can't find it and barfs. The places that seem to be searched are: Using -version I see both: +python/dyn +python3/dyn However :echo has("python3") returns 0. I'm not sure if this is compile time config, or runtime-configurable via .vimrc. I'm not a python developer, and the few times I've ventured into that world were in the middle of the python2/python3 mess that turned me off completely. I've played around enough to have configured pyenv it seems, and get ╰─$ which python /Users/benlieb/.pyenv/shims/python ╰─$ python --version Python 3.10.3 Can anyone help shed light on what to do to get python3 findable/usable in my vim? Update: Following @romainl's suggestion below I set in my .vimrc set pythonthreedll=/Users/benlieb/.pyenv/shims/python But I am getting the following error: | After some time, I found the following works, though it was not a fun path of discovery. let &pythonthreedll = trim(system("pyenv which python")) | 4 | 2 |
74,744,899 | 2022-12-9 | https://stackoverflow.com/questions/74744899/how-does-tensorflows-decision-forests-handle-categorical-data | I'm evaluating two different unsupervised ML algorithms, Isolation Forest and LSTM Autoencoder model, to identify anomalies in a large time series dataset. This dataset includes mostly categorical data such as IP addresses, cloud subscription Ids, tenant Ids, userAgents, and client Application Ids. When reading a tutorial on an implementation of a TensorFlow Decision Forests (TF-DF) model, it mentions that the model handles non-label categorical values natively and there is no need for preprocessing in the form of one-hot encoding, normalization or extra is_present feature. Does anybody know how Tensorflow handles the categorical features behind the scenes (assuming they do some transformation into a numeric representation)? | Tl;dr: There is a natural way of using categorical features in decision trees/forests that requires no encoding. Tensorflow Decision Forests uses this and a number of standard transformations to handle categorical features. Tensorflow Decision Forest (TF-DF) constructs decision tree / decision forest models. A single decision tree recursively splits the dataset along its features. Splits along categorical features can naturally be performed through so-called in-set conditions. For instance, a tree can express a condition like userAgents ∈ {“Mozilla/5.0”, “InternetExplorer/10.0”}. Other types of conditions are also possible. Tensorflow Decision Forests (TF-DF) can construct in-set conditions if the dataset contains categorical features. More specifically, Tensorflow Decision Forests uses the C++ library Yggdrasil Decision Forests (YDF) under the hood for any advanced computations. YDF offers three different algorithms for finding a good categorical split of the data. For example, the Random algorithm will just try out many possible splits at random and pick the best one. For performance and quality reasons, YDF also preprocesses categorical features: If a categorical value is very rare, YDF may consider it “out-of-dictionary”, the threshold for “rare” being user-configurable. Furthermore, YDF maps the categorical features to integers by decreasing item frequency, with the mapping stored as part of the model. Note that this is purely an internal encoding; the algorithms are aware that a feature is categorical, hence typical issues with integer encodings do not apply. Finally, Tensorflow Decision Forests (TF-DF) uses Keras, which expects classification tasks to have an integer label. Therefore, TF-DF users have to encode the label themselves or use the built-in pd_dataframe_to_tf_dataset. Note that this answer only applies to Tensorflow Decision Forests. Other parts of Tensorflow may need manual encoding. | 3 | 3 |
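A minimal sketch of what "no preprocessing" looks like in practice; the column names and values here are made up to mirror the question, and pd_dataframe_to_tf_dataset handles the label encoding mentioned in the last paragraph:

    import pandas as pd
    import tensorflow_decision_forests as tfdf

    # Toy frame with raw string categorical features; no one-hot encoding needed.
    df = pd.DataFrame({
        "userAgent": ["Mozilla/5.0", "InternetExplorer/10.0", "Mozilla/5.0", "curl/7.79"],
        "tenantId": ["t1", "t2", "t1", "t3"],
        "label": [0, 1, 0, 1],  # integer label, as Keras expects
    })

    ds = tfdf.keras.pd_dataframe_to_tf_dataset(df, label="label")
    model = tfdf.keras.RandomForestModel()
    model.fit(ds)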
74,736,220 | 2022-12-8 | https://stackoverflow.com/questions/74736220/importing-smote-raise-attributeerror-module-sklearn-metrics-dist-metrics-has | Running from imblearn.over_sampling import SMOTE will raise following error. --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) d:\A\OneDrive - UBC\ENGR\518 Machine Learning\Project\codes\model_training_laptop - Copy.ipynb Cell 2 in <cell line: 1>() ----> 1 from imblearn.over_sampling import SMOTE File e:\Anaconda\lib\site-packages\imblearn\__init__.py:52, in <module> 48 sys.stderr.write("Partial import of imblearn during the build process.\n") 49 # We are not importing the rest of scikit-learn during the build 50 # process, as it may not be compiled yet 51 else: ---> 52 from . import combine 53 from . import ensemble 54 from . import exceptions File e:\Anaconda\lib\site-packages\imblearn\combine\__init__.py:5, in <module> 1 """The :mod:`imblearn.combine` provides methods which combine 2 over-sampling and under-sampling. 3 """ ----> 5 from ._smote_enn import SMOTEENN 6 from ._smote_tomek import SMOTETomek 8 __all__ = ["SMOTEENN", "SMOTETomek"] File e:\Anaconda\lib\site-packages\imblearn\combine\_smote_enn.py:10, in <module> 7 from sklearn.base import clone 8 from sklearn.utils import check_X_y ---> 10 from ..base import BaseSampler 11 from ..over_sampling import SMOTE 12 from ..over_sampling.base import BaseOverSampler File e:\Anaconda\lib\site-packages\imblearn\base.py:15, in <module> 12 from sklearn.preprocessing import label_binarize 13 from sklearn.utils.multiclass import check_classification_targets ---> 15 from .utils import check_sampling_strategy, check_target_type 16 from .utils._validation import ArraysTransformer 17 from .utils._validation import _deprecate_positional_args File e:\Anaconda\lib\site-packages\imblearn\utils\__init__.py:7, in <module> 1 """ 2 The :mod:`imblearn.utils` module includes various utilities. 3 """ 5 from ._docstring import Substitution ----> 7 from ._validation import check_neighbors_object 8 from ._validation import check_target_type 9 from ._validation import check_sampling_strategy File e:\Anaconda\lib\site-packages\imblearn\utils\_validation.py:15, in <module> 12 import numpy as np 14 from sklearn.base import clone ---> 15 from sklearn.neighbors._base import KNeighborsMixin 16 from sklearn.neighbors import NearestNeighbors 17 from sklearn.utils import column_or_1d File e:\Anaconda\lib\site-packages\sklearn\neighbors\__init__.py:6, in <module> 1 """ 2 The :mod:`sklearn.neighbors` module implements the k-nearest neighbors 3 algorithm. 4 """ ----> 6 from ._ball_tree import BallTree 7 from ._kd_tree import KDTree 8 from ._distance_metric import DistanceMetric File sklearn\neighbors\_ball_tree.pyx:1, in init sklearn.neighbors._ball_tree() AttributeError: module 'sklearn.metrics._dist_metrics' has no attribute 'DistanceMetric32' | This is probably a case where upgrading scikit-learn and imbalanced-learn will resolve the problem. pip install --upgrade scikit-learn pip install --upgrade imbalanced-learn Not all versions of scikit-learn and imbalanced-learn are compatible with one another. Version 0.10.0 should be compatible with scikit-learn>=1.0.0 (e.g. discussion here). | 3 | 3 |
74,783,071 | 2022-12-13 | https://stackoverflow.com/questions/74783071/reading-binary-file-to-find-a-sequences-of-ints-little-endian-permutations | Try to read a binary file (firmware) with a sequences like \x01\x00\x00\x00\x03\x00\x00\x00\x02\x00\x00\x00\x04\x00\x00\x00 Little endian integer 1,3,2,4 Attempt: with open("firm.bin", 'rb') as f: s = f.read() N = 16 allowed = set(range(4)) for val in allowed: val = bytes(val)+b'\x00\x00\x00' for index, b in enumerate(s): print(b) i = b.hex() b= b'\x00\x00\x00'+bytes(bytes.fromhex(f'{i:x}')) if b in allowed and set(s[index:index + N]) == allowed: print(f'Found sequence {s[index:index + N]} at offset {index}') Above does not seem to work with error: ValueError: Unknown format code 'x' for object of type 'str' Why? Problem I am trying to solve: How can I find in binary file sequences like this being 16 ints little endian with values from 0 to 15 i.e [0-15,0-15,0-15,0-15,0-15,0-15,0-15,0-15,0-15,0-15,0-15,0-15,0-15,0-15,0-15,0-15] Update 1: Tried proposed answer, but no results, where it should: import numpy as np import sys # Synthesize firmware with 100 integers, all equal to 1 #firmware = np.full(100, 1, dtype=np.uint32) #firmware = np.fromfile('firm.ori', dtype='uint32') a1D = np.array([1, 2, 3, 4, 6, 5, 7, 8, 10, 9, 11, 13, 12, 14, 15, 0],dtype='uint32') print(a1D) r = np.convolve(a1D, [1]*16, mode='same')[8:-8] np.set_printoptions(threshold=sys.maxsize) print(r) r = np.where(r < (16*15)) print(r) print(a1D[r]) Ideally it should say offset 0, but values would be also fine i.e to print [ 1 2 3 4 6 5 7 8 10 9 11 13 12 14 15 0] Now it outputs: [ 1 2 3 4 6 5 7 8 10 9 11 13 12 14 15 0] [] (array([], dtype=int64),) [] | You refer to the values in the firmware as 32-bit integers so I've assumed that the file can be converted to integers. I've used the Python struct lib to do this. I've also understood that you want to find a sequence of 16 unique integers in the range 0 to 15. My test below iterated over the integers in the firmware file, looking ahead each time and converting that list of 16 integers to a set to check the length was still 16. I then iterated over the set to check all values where below 16. Here is my test I did: from secrets import token_bytes import struct # Create test data firmware_ints = 200_000 int_len = 4 data = token_bytes(firmware_ints * int_len) to_find = struct.pack('<16L', *range(16)) print(f"To find [{len(to_find)}]: {to_find}\n") hide_idx = 20 * int_len * -1 # find 20 ints from the end data = b''.join([data[:hide_idx], to_find, data[hide_idx:]]) # End of creating test data search_max = 16 search_len = 16 # Convert firmware to integers words = [x[0] for x in struct.iter_unpack('<L', data)] # Iterate through to find sequence for idx in range(len(words) - search_len): this_seq = words[idx:idx + search_len] if len(set(this_seq)) == search_len: if all([x < search_max for x in this_seq]): print(f'Found sequence {this_seq} at offset {idx}') which gave the output of: Hidden bytes [64]: b'\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\x06\x00\x00\x00\x07\x00\x00\x00\x08\x00\x00\x00\t\x00\x00\x00\n\x00\x00\x00\x0b\x00\x00\x00\x0c\x00\x00\x00\r\x00\x00\x00\x0e\x00\x00\x00\x0f\x00\x00\x00' Found sequence [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] at offset 199980 | 4 | 2 |
74,786,867 | 2022-12-13 | https://stackoverflow.com/questions/74786867/subtract-vignetting-template-from-image-in-opencv-python | I have 750+ images, like this 'test.png', that I need to subtract the vignetting in 'vig-raw.png' from. I just started using opencv-python, so "I don't even know what I don't know". Using GIMP, I desaturated 'vig-raw.png' to create 'vig-desat.png', which I then converted with Color to Alpha to create 'vig-alpha.png'. This is my attempt to subtract 'vig-alpha.png' from 'test.png'. import cv2 as cv import numpy as np img1 = cv.imread('test.png',0) img1 = cv.cvtColor(img1, cv.COLOR_BGR2BGRA) # add alpha channel to RGB image print(img1[0][0]) # show alpha img2 = cv.imread('vig-alpha.png',flags=cv.IMREAD_UNCHANGED) # read RGBA image print(img2[0][0]) #show alpha img3 = cv.subtract(img1, img2) img3 = cv.resize(img3, (500,250)) print(img3[0][0]) # show alpha cv.imshow('result',img3) cv.waitKey() cv.destroyAllWindows() However, this is the 'result'. I need to produce a uniform shading throughout the image while leaving the original colors intact. I don't know the correct terminology for this sort of thing, and it's hard to search for a solution with what I do know. Thanks in advance. EDIT: As per Rotem's answer, image file format matters. StackOverflow converted the PNG files I posted to JPEG, which did effect results while checking their answer. See the comment I left on Rotem's answer below for more information. | Vignette template is not supposed to be subtracted, it supposed to be scaled. The vignette correction process is known as Flat-field correction applies: G = m / (F - D) C = (R - D) * G When D is dark field or dark frame. We don't have dark frame sample - we may assume that the dark frame is all zeros. Assuming D=zeros, the correction formula is: G = m / F C = R * G m = mean(F), and F applies vig-alpha. R is test.png. For computing G (name it inv_vig_norm, we may use the following stages): Read vig-alpha.png as grayscale, and convert it to float in range [0, 1] (vig_norm applies F): vig = cv2.imread('vig-alpha.png', cv2.IMREAD_GRAYSCALE) vig_norm = vig.astype(np.float32) / 255 Divide m by F: vig_mean_val = cv2.mean(vig_norm)[0] inv_vig_norm = vig_mean_val / vig_norm # Compute G = m/F Compute C = R * G - scale img1 by inv_vig_norm: inv_vig_norm = cv2.cvtColor(inv_vig_norm, cv2.COLOR_GRAY2BGR) img2 = cv2.multiply(img1, inv_vig_norm, dtype=cv2.CV_8U) # Compute: C = R * G For removing noise and artifacts, we may apply Median Blur and Gaussian Blur over vig (it may be required because the site converted vig-alpha.png to JPEG format). Code sample: import cv2 import numpy as np img1 = cv2.imread('test.png') vig = cv2.imread('vig-alpha.png', cv2.IMREAD_GRAYSCALE) # Read vignette template as grayscale vig = cv2.medianBlur(vig, 15) # Apply median filter for removing artifacts and extreem pixels. vig_norm = vig.astype(np.float32) / 255 # Convert vig to float32 in range [0, 1] vig_norm = cv2.GaussianBlur(vig_norm, (51, 51), 30) # Blur the vignette template (because there are still artifacts, maybe because SO convered the image to JPEG). #vig_max_val = vig_norm.max() # For avoiding "false colors" we may use the maximum instead of the mean. vig_mean_val = cv2.mean(vig_norm)[0] # vig_max_val / vig_norm inv_vig_norm = vig_mean_val / vig_norm # Compute G = m/F inv_vig_norm = cv2.cvtColor(inv_vig_norm, cv2.COLOR_GRAY2BGR) # Convert inv_vig_norm to 3 channels before using cv2.multiply. 
# Reference: https://stackoverflow.com/a/48338932/4926757 img2 = cv2.multiply(img1, inv_vig_norm, dtype=cv2.CV_8U) # Compute: C = R * G cv2.imshow('inv_vig_norm', cv2.resize(inv_vig_norm / inv_vig_norm.max(), (500, 250))) # Show inv_vig_norm for testing cv2.imshow('img1', cv2.resize(img1, (500, 250))) cv2.imshow('result', cv2.resize(img2, (500, 250))) cv2.waitKey() cv2.destroyAllWindows() Results: img1: inv_vig_norm: img2: | 3 | 4 |
74,771,032 | 2022-12-12 | https://stackoverflow.com/questions/74771032/how-to-test-an-element-from-a-generator-without-consuming-it | I have a generator gen, with the following properties: it's quite expensive to make it yield (more expensive than creating the generator) the elements take up a fair amount of memory sometimes all of the __next__ calls will throw an exception, but creating the generator doesn't tell you when that will happen I didn't implement the generator myself. Is there a way to make the generator yield its first element (I will do this in a try/except), without having the generator subsequently start on the second element if I loop through it afterwards? I thought of creating some code like this: try: first = next(gen) except StopIterator: return None except Exception: print("Generator throws exception on a yield") # looping also over the first element which we yielded already for thing in (first, *gen): do_something_complicated(thing) Solutions I can see which are not very nice: Create generator, test first element, create a new generator, loop through the second one. Put the entire for loop in a try/except; not so nice because the exception thrown by the yield is very general and it would potentially catch other things. Yield first element, test it, then reform a new generator from the first element and the rest of gen (ideally without extracting all of gen's elements into a list, since this could take a lot of memory). For 3, which seems like the best solution, a nearly-there example would be the example I gave above, but I believe that would just extract all the elements of gen into a tuple before we start iterating, which I would like to avoid. | I think I have what you are looking for using more_itertools library: import more_itertools if __name__ == "__main__": generator = range(100) peekable_generator = more_itertools.peekable(generator) print(f"peek {peekable_generator.peek()}") print(f"next {next(peekable_generator)}") print(f"next {next(peekable_generator)}") output: peek 0 next 0 next 1 See documentation here: https://more-itertools.readthedocs.io/en/stable/api.html#more_itertools.peekable If I'm not mistaken the ability to peek at the first item is the key thing you need. | 6 | 4 |
74,782,602 | 2022-12-13 | https://stackoverflow.com/questions/74782602/how-to-add-a-constant-to-negative-values-in-array | Given the xarray below, I would like to add 10 to all negative values (i.e, -5 becomes 5, -4 becomes 6 ... -1 becomes 9, all values remain unchanged). a = xr.DataArray(np.arange(25).reshape(5, 5)-5, dims=("x", "y")) I tried: a[a<0]=10+a[a<0], but it returns 2-dimensional boolean indexing is not supported. Several attempts with a.where, but it seems that the other argument can only replace the mapped values with a constant rather than with indexed values. I also considered using numpy as suggested here, but my actual dataset is ~ 80 Gb and loaded with dask and using numpy crashes my Jupyter console. Is there any way to achieve this using only xarray? Update I updated the code using @SpaceBurger and this. However my initial example was using a DataArray whereas my true problem is using a Dataset: a = xr.DataArray(np.arange(25).reshape(5, 5)-5, dims=("x", "y")) a = a.to_dataset(name='variable') Now, if I do this: a1 = a['variable'] a2 = 10+a1.copy() a['variable'] = dask.array.where(a['variable'] < 0, a2, a1) I get this error: MissingDimensionsError: cannot set variable 'variable' with 2-dimensional data without explicit dimension names. Pass a tuple of (dims, data) instead. Can anyone suggest a proper syntax? | xarray’s where method is the way to go here - you can provide any other argument which can be broadcast against the condition argument and the original array: a['variable'] = a['variable'].where( a['variable'] >= 0, (a['variable'] + 10), ) This will work fine with dask and will handle your coordinates seamlessly. Note that if you do try this with a dataset, all of the variables in the dataset will be broadcast against the condition and other. so if you have some data vars that don't include all these dimensions they'll end up being repeated and weird. generally I recommend doing math/operations on DataArrays or variables as I have it in my answer. | 3 | 3 |
74,788,063 | 2022-12-13 | https://stackoverflow.com/questions/74788063/python-enum-equality-performance | Enum datatypes are good abstractions for enumerable datatype like days of week, months etc. Nevertheless the simplest tests show that we pay 2.5 slower performance for such datatypes. Do we have any explanation for such behavior? Consider two simple enums in Python import enum import timeit class IntDow(enum.Enum): MONDAY=enum.auto() TUESDAY=enum.auto() WEDNESDAY=enum.auto() THURSDAY=enum.auto() FRIDAY=enum.auto() SATURDAY=enum.auto() SUNDAY=enum.auto() class StrDow(str, enum.Enum): MONDAY="MONDAY" TUESDAY="TUESDAY" WEDNESDAY="WEDNESDAY" THURSDAY="THURSDAY" FRIDAY="FRIDAY" SATURDAY="SATURDAY" SUNDAY="SUNDAY" int_mon = IntDow["MONDAY"] int_tue = IntDow["TUESDAY"] str_mon = StrDow["MONDAY"] str_tue = StrDow["TUESDAY"] raw_mon = "MONDAY" raw_tue = "TUESDAY" and carry out the simplest equality tests >>> timeit.timeit(lambda: int_mon == IntDow.MONDAY, number=100000) 0.017555099999299273 >>> timeit.timeit(lambda: int_mon == int_tue, number=100000) 0.00824300000022049 >>> timeit.timeit(lambda: int_mon == IntDow.TUESDAY, number=100000) 0.018771999999444233 >>> timeit.timeit(lambda: str_mon == StrDow.MONDAY, number=100000) 0.01836639999964973 >>> timeit.timeit(lambda: str_mon == StrDow.TUESDAY, number=100000) 0.01744440000038594 >>> timeit.timeit(lambda: str_mon == str_tue, number=100000) 0.007430400000885129 >>> timeit.timeit(lambda: raw_mon == "MONDAY", number=100000) 0.007222599999295198 >>> timeit.timeit(lambda: raw_mon == "TUESDAY", number=100000) 0.00726819999908912 >>> timeit.timeit(lambda: raw_mon == raw_tue, number=100000) 0.007780600000842242 We can see that variable to constant comparison for enum is approximately 2.5 times slower than for str. Do we have any explanation for such behavior? | The bulk of the extra time is not spent in the equality test, but in looking up the member from the enum (i.e. IntDow.MONDAY). In those cases where performance is critical, export the members from the enum first: MONDAY, TUESDAY, ... = IntDow Disclosure: I am the author of the Python stdlib Enum, the enum34 backport, and the Advanced Enumeration (aenum) library. | 4 | 3 |
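A sketch of the suggested export, reusing the names from the question; the speedup comes from binding the members to plain module-level names so each comparison skips the attribute lookup on the enum class (iterating an Enum yields its members in definition order, so unpacking works):

    MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY = IntDow

    import timeit
    print(timeit.timeit(lambda: int_mon == MONDAY, number=100000))   # no IntDow.MONDAY lookup per call
    print(timeit.timeit(lambda: int_mon == TUESDAY, number=100000))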
74,772,785 | 2022-12-12 | https://stackoverflow.com/questions/74772785/what-are-the-differences-among-mambaforge-mambaforge-pypy3-miniforge-miniforg | There have been explanations about the difference between miniforge and miniconda: miniforge is the community (conda-forge) driven minimalistic conda installer. Subsequent package installations come thus from conda-forge channel. miniconda is the Anaconda (company) driven minimalistic conda installer. Subsequent package installations come from the anaconda channels (default or otherwise). As for mambaforge, mambaforge-pypy3, miniforge, and miniforge-pypy3, how do we choose which package to install? | The mamba* variants use the C/C++ implementation of the conda protocol, called mamba, instead of the Python implementation, which is called conda. The *pypy3 variants ship with PyPy as the python implementation in the base environment instead of CPython. | 7 | 5 |
74,785,680 | 2022-12-13 | https://stackoverflow.com/questions/74785680/how-to-format-a-dataframe-having-many-nan-values-join-all-rows-to-those-not-sta | I have the following df: df = pd.DataFrame({ 'col1': [1, np.nan, np.nan, np.nan, 1, np.nan, np.nan, np.nan], 'col2': [np.nan, 2, np.nan, np.nan, np.nan, 2, np.nan, np.nan], 'col3': [np.nan, np.nan, 3, np.nan, np.nan, np.nan, 3, np.nan], 'col4': [np.nan, np.nan, np.nan, 4, np.nan, np.nan, np.nan, 4] }) It has the following display: col1 col2 col3 col4 0 1.0 NaN NaN NaN 1 NaN 2.0 NaN NaN 2 NaN NaN 3.0 NaN 3 NaN NaN NaN 4.0 4 5.0 NaN NaN NaN 5 NaN 6.0 NaN NaN 6 NaN NaN 7.0 NaN 7 NaN NaN NaN 8.0 My goal is to keep all rows beginning with a float (not a NaN value) and join the remaining ones to them. The new_df I want to get is: col1 col2 col3 col4 0 1 2 3 4 4 5 6 7 8 Any help from your side will be highly appreciated (I upvote all answers). Thank you! | If you need to join the first values per group, with groups defined by non-missing values in df['col1'], use: df = (df.reset_index() .groupby(df['col1'].notna().cumsum()) .first() .set_index('index')) | 3 | 3 |
74,779,645 | 2022-12-13 | https://stackoverflow.com/questions/74779645/is-there-any-way-to-get-list-the-unconnected-inputs-of-an-openmdao-group | Considering the following problem import openmdao.api as om class Sys(om.Group): def setup(self): self.add_subsystem('sys1', om.ExecComp('v1 = a + b'), promotes=['*']) self.add_subsystem('sys2', om.ExecComp('v2 = v1 + c'), promotes=['*']) if __name__ == '__main__': prob = om.Problem() model = prob.model comp = model.add_subsystem('comp', Sys(), promotes=['*']) prob.setup() prob.run_model() comp.list_inputs() the list_inputs command gives the following 4 Input(s) in 'comp' varname val ------- ---- sys1 a [1.] b [1.] sys2 c [1.] v1 [2.] However we can clearly see that v1 in an 'internal' input to the system. If we were to attach this to an IndepVarComp or another system, we would not have to provide v1 since it is already internally connected. Is there a function that can list the inputs that are unconnected or that must be provided to a Group? | Generally, I prefer to rely on the visual tools such as the N2. However, here is a scriptable solution that I use on occation. Fair warning, it requires the use of one non-public attribute of system... but this is how I do it: import openmdao.api as om class Sys(om.Group): def setup(self): self.add_subsystem('sub_sys1', om.ExecComp('v1 = a + b'), promotes=['*']) self.add_subsystem('sub_sys2', om.ExecComp('v2 = v1 + c'), promotes=['*']) if __name__ == '__main__': prob = om.Problem() model = prob.model comp = model.add_subsystem('comp', Sys(), promotes=['*']) prob.setup() prob.run_model() comp_inputs = prob.model.list_inputs(out_stream=None, prom_name=True) # filter the inputs into connected and unconnected sets connect_dict = comp._conn_global_abs_in2out unconnected_inputs = set() connected_inputs = set() for abs_name, in_data in comp_inputs: if abs_name in connect_dict and (not 'auto_ivc' in connect_dict[abs_name]): connected_inputs.add(in_data['prom_name']) else: unconnected_inputs.add(in_data['prom_name']) print(connected_inputs) print(unconnected_inputs) | 3 | 2 |
74,782,862 | 2022-12-13 | https://stackoverflow.com/questions/74782862/fancy-indexing-in-numpy | I am basically trying to do something like this but without the for-loop... I tried with np.put_along_axis but it requires times to be of dimension 10 (same as last index of src). import numpy as np src = np.zeros((5,5,10), dtype=np.float64) ix = np.array([4, 0, 0]) iy = np.array([1, 3, 4]) times = np.array([1 ,2, 4]) values = np.array([25., 10., -65.]) for i, time in enumerate(times): src[ix, iy, time] += values[i] | One approach is to use np.add.at, preparing the indices first (as below): r = len(values) indices = (np.tile(ix, r), np.tile(iy, r), np.repeat(times, r)) np.add.at(src, indices, np.repeat(values, r)) print(src) Output [[[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 25. 10. 0. -65. 0. 0. 0. 0. 0.] [ 0. 25. 10. 0. -65. 0. 0. 0. 0. 0.]] [[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]] [[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]] [[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]] [[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 25. 10. 0. -65. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]] | 3 | 4 |
74,764,302 | 2022-12-11 | https://stackoverflow.com/questions/74764302/what-event-is-associated-with-zooming-an-interactive-matplotlib-plot | As I understand it, when a user interacts with an interactive matplotlib plot (i.e. by clicking, pressing a key, etc.), an Event is triggered, which can be linked to an arbitrary callback function, if desired. Interactive matplotlib plots often come with a navigation toolbar that includes certain features like zooming and rubberband selection. My question is, is there a way to watch for these things from the backend and react when a user performs one of these actions using the nav bar/mouse? I have gone through the list of event names on the event handling page of matplotlib's documentation, as well as looked over the API reference for the NavigationToolbar2 class, but I haven't been able to find any connection between the two. Is an event even the thing to be looking for, or is there some other way to detect these kinds of interactions? | Resolved on my own. In addition to the event types and the fig.canvas.mpl_connect() syntax shown on the "event handling" documentation page, you can also associate a callback function with an Axes instance directly, and this way has some different kinds of events that can be used as triggers. The API reference for the Axes class has this to say: The Axes instance supports callbacks through a callbacks attribute which is a CallbackRegistry instance. The events you can connect to are 'xlim_changed' and 'ylim_changed' and the callback will be called with func(ax) where ax is the Axes instance. ...and then the syntax to connect these axis events to a user-defined callback func on an existing axis instance ax might look something like this: def func(axes): print("New axis y-limits are", axes.get_ylim()) cb_registry = ax.callbacks cid = cb_registry.connect('ylim_changed', func) The same approach could be used to watch for x-axis changes as well. | 4 | 3 |
74,775,348 | 2022-12-12 | https://stackoverflow.com/questions/74775348/asyncio-as-completed-supposedly-accepting-iterable-but-crashes-if-input-is | So, essentially, in Python 3.7 (as far as I know) if you try to do this, import asyncio async def sleep(): asyncio.sleep(1) async def main(): tasks = (sleep() for _ in range(5)) for task in asyncio.as_completed(tasks): result = await task if __name__ == "__main__": asyncio.run(main()) It crashes with TypeError: expect a list of futures, not generator But the type hints clearly specify that it accepts an Iterable, which a Generator is. If you turn tasks into a list, it works, of course, but... what am I missing? And why would it be restricted to lists? I don't see why it should not allow generators. | You are right. The documentation here is not consistent with the actual behavior. The official documentation refers to the first argument as an "iterable". And typeshed as of today also annotates the first argument with Iterable[...]. However, in the CPython code for as_completed the first argument is passed to coroutines.iscoroutine, which checks if it is an instance of types.GeneratorType. Obviously, that is what it is, which has it return True and cause the TypeError. And of course a generator is also an iterable. Which means the function does not in fact accept an iterable as the docs claim, but only a non-generator iterable. Maybe someone else here can shine additional light on the background or thought process here. In any case, I would argue this is worth opening an issue over, if one addressing this does not exist yet. EDIT: Apparently (and unsurprisingly) we were not the first ones to notice this. Thanks to @KellyBundy for pointing it out. | 3 | 3 |
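A sketch of the practical workaround given this behavior: materialize the generator (or create the tasks explicitly) before handing it to as_completed:

    async def main():
        tasks = [asyncio.create_task(sleep()) for _ in range(5)]  # a real list, not a generator
        for fut in asyncio.as_completed(tasks):
            result = await fut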
74,774,598 | 2022-12-12 | https://stackoverflow.com/questions/74774598/pandas-group-by-column-and-convert-to-keyvalue-pair | I want to convert a DataFrame using pandas. I would like to convert it into a dictionary format like {'Plant Delivering ID': [Ship-To ID]}; there are multiple 'Ship-To ID' values for a single 'Plant Delivering ID'. My original data format: (shown as an image) I would like to convert it to: (shown as an image) How do I convert it? | Use pandas.DataFrame.groupby, collect each group's values with apply(list), and convert the result to a dict with pandas.Series.to_dict: df.groupby('Plant Delivering ID')['Ship-To-ID'].apply(list).to_dict() | 3 | 2 |
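A self-contained sketch with toy data (column names taken from the question, values invented) showing the shape of the result:

    import pandas as pd

    df = pd.DataFrame({
        "Plant Delivering ID": ["P1", "P1", "P2"],
        "Ship-To-ID": ["S1", "S2", "S3"],
    })
    result = df.groupby("Plant Delivering ID")["Ship-To-ID"].apply(list).to_dict()
    print(result)  # {'P1': ['S1', 'S2'], 'P2': ['S3']}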
74,762,158 | 2022-12-11 | https://stackoverflow.com/questions/74762158/how-can-staticmethods-be-called-as-regular-functions | A static method can be called either on the class (such as C.f()) or on an instance (such as C().f()). Moreover, they can be called as regular functions (such as f()). Could someone elaborate on the bold part of the extract from the documentation for Python static methods? Reading this description one would expect to be able to do something like this: class C: @staticmethod def f(): print('f') def g(self): f() print('g') C().g() But this generates: NameError: name 'f' is not defined My question is not about the use-cases where the static method call is name-qualified either with an instance or a class name. My question is about the correct interpretation of the bold part of the documentation. | The subsection 4.2.2. Resolution of names of the CPython documentation specifies the rules of visibility for variables defined in different types of nested code blocks. In particular, it specifies how names defined in the local class scope are visible (or not) inside its methods: The scope of names defined in a class block is limited to the class block; it does not extend to the code blocks of methods | 4 | 1 |
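Concretely, the fix for the snippet in the question is to qualify the call, since names from the class body are not part of the method's enclosing scopes:

    class C:
        @staticmethod
        def f():
            print('f')

        def g(self):
            C.f()        # qualify with the class...
            self.f()     # ...or with the instance; bare f() is a NameError
            print('g')

    C().g()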
74,752,610 | 2022-12-10 | https://stackoverflow.com/questions/74752610/how-to-use-argparse-to-create-command-groups-like-git | I'm trying to figure out how to use properly builtin argparse module to get a similar output than tools such as git where I can display a nice help with all "root commands" nicely grouped, ie: $ git --help usage: git [--version] [--help] [-C <path>] [-c <name>=<value>] [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path] [-p | --paginate | -P | --no-pager] [--no-replace-objects] [--bare] [--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>] [--super-prefix=<path>] [--config-env=<name>=<envvar>] <command> [<args>] These are common Git commands used in various situations: start a working area (see also: git help tutorial) clone Clone a repository into a new directory init Create an empty Git repository or reinitialize an existing one work on the current change (see also: git help everyday) add Add file contents to the index mv Move or rename a file, a directory, or a symlink restore Restore working tree files rm Remove files from the working tree and from the index examine the history and state (see also: git help revisions) bisect Use binary search to find the commit that introduced a bug diff Show changes between commits, commit and working tree, etc grep Print lines matching a pattern log Show commit logs show Show various types of objects status Show the working tree status grow, mark and tweak your common history branch List, create, or delete branches commit Record changes to the repository merge Join two or more development histories together rebase Reapply commits on top of another base tip reset Reset current HEAD to the specified state switch Switch branches tag Create, list, delete or verify a tag object signed with GPG collaborate (see also: git help workflows) fetch Download objects and refs from another repository pull Fetch from and integrate with another repository or a local branch push Update remote refs along with associated objects 'git help -a' and 'git help -g' list available subcommands and some concept guides. See 'git help <command>' or 'git help <concept>' to read about a specific subcommand or concept. See 'git help git' for an overview of the system. Here's my attempt: from argparse import ArgumentParser class FooCommand: def __init__(self, subparser): self.name = "Foo" self.help = "Foo help" subparser.add_parser(self.name, help=self.help) class BarCommand: def __init__(self, subparser): self.name = "Bar" self.help = "Bar help" subparser.add_parser(self.name, help=self.help) class BazCommand: def __init__(self, subparser): self.name = "Baz" self.help = "Baz help" subparser.add_parser(self.name, help=self.help) def test1(): parser = ArgumentParser(description="Test1 ArgumentParser") root = parser.add_subparsers(dest="command", description="All Commands:") # Group1 FooCommand(root) BarCommand(root) # Group2 BazCommand(root) args = parser.parse_args() print(args) def test2(): parser = ArgumentParser(description="Test2 ArgumentParser") # Group1 cat1 = parser.add_subparsers(dest="command", description="Category1 Commands:") FooCommand(cat1) BarCommand(cat1) # Group2 cat2 = parser.add_subparsers(dest="command", description="Category2 Commands:") BazCommand(cat2) args = parser.parse_args() print(args) If you run test1 you'd get: $ python mcve.py --help usage: mcve.py [-h] {Foo,Bar,Baz} ... 
Test1 ArgumentParser options: -h, --help show this help message and exit subcommands: All Commands: {Foo,Bar,Baz} Foo Foo help Bar Bar help Baz Baz help Obviously this is not what I want, in there I just see all commands in a flat list, no groups or whatsoever... so the next logical attempt would be trying to group them. But if I run test2 I'll get: $ python mcve.py --help usage: mcve.py [-h] {Foo,Bar} ... mcve.py: error: cannot have multiple subparser arguments Which obviously means I'm not using properly argparse to accomplish the task at hand. So, is it possible to use argparse to achieve a similar behaviour than git? In the past I've relied on "hacks" so I thought the best practice here would be using the concept of add_subparsers but it seems I didn't understand properly that concept. | This isn't supported natively by argparse -- you can't nest subparsers, so if you want this sort of cli using argparse you're going to need to build a lot of logic on top of argparse. You can set nargs=argparse.REMAINDER to collect a subcommand and arguments without having them parsed by argparse, which means we can build something like this: import argparse import copy class Command: def __init__(self): self.subcommands = {} self.parser = argparse.ArgumentParser() def add_subcommand(self, name, sub): self.subcommands[name] = sub def add_argument(self, *args, **kwargs): return self.parser.add_argument(*args, **kwargs) def parse_args(self, args=None): if not self.subcommands: args = self.parser.parse_args(args) return args p = copy.deepcopy(self.parser) p.add_argument("subcommand") p.add_argument("args", nargs=argparse.REMAINDER) args = p.parse_args(args) try: sub = self.subcommands[args.subcommand] except KeyError: return self.parser.parse_args(args) sub_args = sub.parse_args(args.args) for attr in dir(sub_args): if attr.startswith("_"): continue setattr(args, attr, getattr(sub_args, attr)) return args def main(): root = Command() root.add_argument("-v", "--verbose", action="count") cmd1 = Command() cmd1_foo = Command() cmd1_foo.add_argument("-n", "--name") cmd1.add_subcommand("foo", cmd1_foo) root.add_subcommand("cmd1", cmd1) cmd2 = Command() cmd2_bar = Command() cmd2_bar.add_argument("-s", "--size", type=int) cmd2.add_subcommand("bar", cmd2_bar) root.add_subcommand("cmd2", cmd2) print(root.parse_args()) if __name__ == "__main__": main() This is horrible and ugly and poorly structured, but it means we can do this: $ python argtest.py --verbose cmd1 foo --name lars Namespace(verbose=1, subcommand='foo', args=['--name', 'lars'], name='lars') Or this: $ python argtest.py --verbose cmd2 bar --size 10 Namespace(verbose=1, subcommand='bar', args=['--size', '10'], size=10) If you're willing to look beyond argparse, libraries like Click and Typer make things much easier. For example, the above command could be implemented using Click like this: import click @click.group() def main(): pass @main.group() def cmd1(): pass @cmd1.command() @click.option('-n', '--name') def foo(name): pass @main.group() def cmd2(): pass @cmd2.command() @click.option('-s', '--size', type=int) def bar(): pass if __name__ == '__main__': main() So much nicer! | 6 | 8 |
74,741,268 | 2022-12-9 | https://stackoverflow.com/questions/74741268/installing-python-extension-module-understanding-skbuildsetuptools | I am one of the devs of a (fairly large) C++ simulation tool. Disclaimer: I'm more of a physicist than a dev. I wrote Python bindings for that project using pybind11. I managed to get the Python module to compile with cmake. I then managed to write a setup.py file using skbuild that does compile the Python module: python3 setup.py sdist bdist_wheel In _skbuild/linux-x86_64-3.9/cmake-build/lib/ (and in the tar archive dist/cytosim-0.0.0.tar.gz) there is indeed a compiled library: cytosim.cpython-39-x86_64-linux-gnu.so. However, when I want to install the module: pip3 install dist I get an error: gcc: error: src/py3/dist.c: No such file or directory I am very confused because I do not have a directory called py3 in src. Any pointers? Anything I'm doing wrong? Thanks! | The command pip3 install dist tries (and fails) to install the dist package from the pypi repository. Maybe try pip3 install dist/cytosim-0.0.0.tar.gz instead. | 5 | 4 |
74,765,215 | 2022-12-11 | https://stackoverflow.com/questions/74765215/make-pip-install-option-install-less-packages-than-the-default-pip-install | First of all let's assume the following: I am building a python package mypackage and want to make it available broadly My package has the following python dependencies: "A","B","C" and "D" and we assume further that each dependency covers an independent use-case of the package (i.e. A is needed for users wanting to do A-type stuff, B is needed for B-type stuff, etc.) A, B, C and D are all pretty heavy and take each tons of times to install. The majority of the package's users are not developers and actually do not even know which type of stuff they will be interested in (whether it is one letter-stuff or any multiple letters simultaneously) and do not know how to do install with options Some users are power-users and know from the get-go that they will only use C and D-stuff so they will only need C or D as dependencies. In fact some of the non developer users might actually turn into power users given enough time to practice From reading all hypotheses above then it makes perfect sense to have a default install installing A,B, C and D and having options available for power-users to install only C or D (or any package combination such as A and D). Aka: pip install mypackage => installs A, B, C and D pip install mypackage[C, D] => installs C and D but not A and B This exact same problem is stated in this other question under the name negative extra_requires. Because indeed the desired behavior is that extra_requires should install fewer packages than the default install. It is also connected to discussions and issues in several places. I wanted to know 1. has the situation changed or is it planned ? 2. what would be a way to circumvent this issue/go about this if not? | has the situation changed…? No. or is it planned? Nobody knows. Most probably no. what would be a way to circumvent this issue…? Do not install dependencies with plain pip install mypackage. Declare separate extras A, B, C and D. Declare a combined extra all that includes all dependencies: extras_require={ 'A': ['A'], 'B': ['B'], 'C': ['C'], 'D': ['D'], 'all': ['A', 'B', 'C', 'D'], } Document and teach your users to install the package using pip install "mypackage[all]" Teach your powerful users to install the package using pip install "mypackage[A, C]" or in any combination of dependencies they need. | 3 | 3 |
74,767,068 | 2022-12-12 | https://stackoverflow.com/questions/74767068/python-double-underscore-prefixed-parameter-in-function | Below is from builtins.pyi: def max(__arg1: _T, __arg2: _T, *_args: _T, key: Callable[[_T], SupportsLessThan]) -> _T: I do know what name mangling means and know that name mangling will influence every "__xxx" identifier as long as it appears inside a class definition. So I have three questions: why do the parameters "__arg1" and "__arg2" use the "__xxx" pattern while the parameter "key" does not; what is the function and purpose of the "__xxx" pattern when used specifically in a function's parameters, as in this case; and I have had trouble finding the implementation of name mangling, so could you tell me where it is in the CPython source code? Many thanks. | Name mangling applies only to names used in a class definition, not function parameters. In this case the leading underscores are only a naming convention to indicate that parameters with such names are not to be passed with keyword arguments, but rather only positional ones. This is to say that you should call max with: max(1, 2) rather than: max(__arg1=1, __arg2=2) The key argument, on the other hand, is named without an underscore prefix, indicating that it is meant to be passed with a keyword argument: max(1, -2, key=abs) | 4 | 5 |
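A small sketch of the convention at work; note the runtime does not enforce it for pure-Python functions (name mangling only applies inside class bodies), and it is type checkers that flag the keyword call. Newer stubs express the same intent with PEP 570 positional-only syntax:

    def f(__a: int) -> int:   # stub convention: type checkers treat __a as positional-only
        return __a

    f(1)        # fine everywhere
    f(__a=1)    # runs at module level, but a type checker such as mypy rejects it

    def g(a: int, /) -> int:  # PEP 570 (Python 3.8+): explicit positional-only marker
        return a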
74,763,554 | 2022-12-11 | https://stackoverflow.com/questions/74763554/how-to-use-subprocess-run-method-in-python | I wanted to run external programs using Python but I receive an error saying I don't have the file. The code I wrote: import subprocess subprocess.run(["ls", "-l"]) Output: Traceback (most recent call last): File "C:\Users\hahan\desktop\Pythonp\main.py", line 3, in <module> subprocess.run(["ls", "-l"]) File "C:\Users\hahan\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 501, in run with Popen(*popenargs, **kwargs) as process: File "C:\Users\hahan\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 969, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "C:\Users\hahan\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 1438, in _execute_child hp, ht, pid, tid = _winapi.CreateProcess(executable, args, FileNotFoundError: [WinError 2] The system cannot find the file specified I expected it to return the files in that directory. | The stack trace suggests you're using Windows as the operating system. ls is not something that you will typically find on a Windows machine unless using something like CygWin. Instead, try one of these options: # use python's standard library function instead of invoking a subprocess import os os.listdir() # invoke cmd and call the `dir` command import subprocess subprocess.run(["cmd", "/c", "dir"]) # invoke PowerShell and call the `ls` command, which is actually an alias for `Get-ChildItem` import subprocess subprocess.run(["powershell", "-c", "ls"]) | 8 | 5 |
74,740,448 | 2022-12-9 | https://stackoverflow.com/questions/74740448/how-to-wait-for-the-user-to-click-a-point-in-a-figure-in-ipython-notebook | I took the following steps to setup an IPython backend in Google Colab notebook: !pip install ipympl from google.colab import output output.enable_custom_widget_manager() Then I log the (x,y) location of the user's click on a figure: %matplotlib ipympl import matplotlib import matplotlib.pyplot as plt fig, ax = plt.subplots() def onclick(event): ix, iy = event.xdata, event.ydata print(ix, iy) cid = fig.canvas.mpl_connect('button_press_event', onclick) I need the code to wait here until the user selects at least one data point. However, the code will run if I have another command in the same notebook, for example: print ('done!') will run without waiting for the user to pick a data point. I tried using this before: plt.waitforbuttonpress() print('done!') However, the compiler gets stuck at plt.waitforbuttonpress() and it doesn't let the user click the figure. Thanks for your help | You could, instead of putting the code you want to run after a button_press_event, after the event listener, you could instead put it in the onclick function. Something like this: %matplotlib ipympl import matplotlib import matplotlib.pyplot as plt fig, ax = plt.subplots() def onclick(event): ix, iy = event.xdata, event.ydata print(ix, iy) print('done!') cid = fig.canvas.mpl_connect('button_press_event', onclick) Or: %matplotlib ipympl import matplotlib import matplotlib.pyplot as plt fig, ax = plt.subplots() def after_onclick(): print('done!') def onclick(event): ix, iy = event.xdata, event.ydata print(ix, iy) after_onclick() cid = fig.canvas.mpl_connect('button_press_event', onclick) The reason for this is because mpl_connect works rather oddly (see this question). Instead of waiting for the event to be registered and pausing the code execution, it will run the entire file but keep the event listener open. The problem with this way is that it will run the code every time the event is registered. If that is not what you want and you only want it to run once try this: %matplotlib ipympl import matplotlib import matplotlib.pyplot as plt fig, ax = plt.subplots() def after_onclick(): print('done') fig.canvas.mpl_disconnect(cid) def onclick(event): ix, iy = event.xdata, event.ydata print(ix, iy) after_onclick() cid = fig.canvas.mpl_connect('button_press_event', onclick) This version adds fig.canvas.mpl_disconnect(cid) so that the code will only run once. | 3 | 2 |
74,757,129 | 2022-12-10 | https://stackoverflow.com/questions/74757129/why-numpy-vectorization-is-slower-than-a-for-loop | The below code has two functions that does the same thing: checks to see if the line between two points intersects with a circle. from line_profiler import LineProfiler from math import sqrt import numpy as np class Point: x: float y: float def __init__(self, x: float, y: float): self.x = x self.y = y def __repr__(self): return f"Point(x={self.x}, y={self.y})" class Circle: ctr: Point r: float def __init__(self, ctr: Point, r: float): self.ctr = ctr self.r = r def __repr__(self): return f"Circle(r={self.r}, ctr={self.ctr})" def loop(p1: Point, p2: Point, circles: list[Circle]): m = (p1.y - p2.y) / (p1.x - p2.x) n = p1.y - m * p1.x max_x = max(p1.x, p2.x) min_x = min(p1.x, p2.x) for circle in circles: if sqrt((circle.ctr.x - p1.x) ** 2 + (circle.ctr.y - p1.y) ** 2) < circle.r \ or sqrt((circle.ctr.x - p2.x) ** 2 + (circle.ctr.y - p2.y) ** 2) < circle.r: return False a = m ** 2 + 1 b = 2 * (m * n - m * circle.ctr.y - circle.ctr.x) c = circle.ctr.x ** 2 + circle.ctr.y ** 2 + n ** 2 - circle.r ** 2 - 2 * n * circle.ctr.y # compute the intersection points discriminant = b ** 2 - 4 * a * c if discriminant <= 0: # no real roots, the line does not intersect the circle continue # two real roots, the line intersects the circle at two points x1 = (-b + sqrt(discriminant)) / (2 * a) x2 = (-b - sqrt(discriminant)) / (2 * a) # check if both points in range first = min_x <= x1 <= max_x second = min_x <= x2 <= max_x if first and second: return False return True def vectorized(p1: Point, p2: Point, circles): m = (p1.y - p2.y) / (p1.x - p2.x) n = p1.y - m * p1.x max_x = max(p1.x, p2.x) min_x = min(p1.x, p2.x) circle_ctr_x = circles['x'] circle_ctr_y = circles['y'] circle_radius = circles['r'] # Pt 1 inside circle if np.any(np.sqrt((circle_ctr_x - p1.x) ** 2 + (circle_ctr_y - p1.y) ** 2) < circle_radius): return False # Pt 2 inside circle if np.any(np.sqrt((circle_ctr_x - p2.x) ** 2 + (circle_ctr_y - p2.y) ** 2) < circle_radius): return False # Line intersects with circle in range a = m ** 2 + 1 b = 2 * (m * n - m * circle_ctr_y - circle_ctr_x) c = circle_ctr_x ** 2 + circle_ctr_y ** 2 + n ** 2 - circle_radius ** 2 - 2 * n * circle_ctr_y # compute the intersection points discriminant = b**2 - 4*a*c discriminant_bigger_than_zero = discriminant > 0 discriminant = discriminant[discriminant_bigger_than_zero] if discriminant.size == 0: return True b = b[discriminant_bigger_than_zero] # two real roots, the line intersects the circle at two points x1 = (-b + np.sqrt(discriminant)) / (2 * a) x2 = (-b - np.sqrt(discriminant)) / (2 * a) # check if both points in range in_range = (min_x <= x1) & (x1 <= max_x) & (min_x <= x2) & (x2 <= max_x) return not np.any(in_range) a = Point(x=-2.47496075130008, y=1.3609840363748935) b = Point(x=3.4637947060471084, y=-3.7779123453298817) c = [Circle(r=1.2587063082677084, ctr=Point(x=3.618533781361757, y=2.179925931180058)), Circle(r=0.7625751871124099, ctr=Point(x=-0.3173290200183132, y=4.256206636932641)), Circle(r=0.4926043225930364, ctr=Point(x=-4.626312261120341, y=-1.5754603504419196)), Circle(r=0.6026364956540792, ctr=Point(x=3.775240278691819, y=1.7381168262343072)), Circle(r=1.2804597877349562, ctr=Point(x=4.403273380178893, y=-1.6890127555343681)), Circle(r=1.1562415624767421, ctr=Point(x=-1.0675000352105801, y=-0.23952113329203994)), Circle(r=1.112718432321835, ctr=Point(x=2.500137075066017, y=-2.77748519509295)), Circle(r=0.979889574640609, 
ctr=Point(x=4.494971251199753, y=-1.0530995423779388)), Circle(r=0.7817624050358268, ctr=Point(x=3.2419454348696544, y=4.3303373486692465)), Circle(r=1.0271176198616367, ctr=Point(x=-0.9740272820753071, y=-4.282195116754338)), Circle(r=1.1585218836700681, ctr=Point(x=-0.42096876790888915, y=2.135161027254492)), Circle(r=1.0242603387003988, ctr=Point(x=2.2617850544260767, y=-4.59942951839469)), Circle(r=1.5704233297828027, ctr=Point(x=-1.1182365440831088, y=4.2411408333943506)), Circle(r=0.37137272043983655, ctr=Point(x=3.280499587987774, y=-4.87871834733383)), Circle(r=1.1829610109115543, ctr=Point(x=-0.27755604766113606, y=-3.68429580935016)), Circle(r=1.0993567600839198, ctr=Point(x=0.23602306761027925, y=0.47530122196024704)), Circle(r=1.3865045367147553, ctr=Point(x=-2.537565761732492, y=4.719766182202855)), Circle(r=0.9492796511909753, ctr=Point(x=-3.7047245796551973, y=-2.501817905967274)), Circle(r=0.9866916911482386, ctr=Point(x=1.3021813533479742, y=4.754952371169189)), Circle(r=0.9053004331885084, ctr=Point(x=-3.4912157984801784, y=-0.5269727600532836)), Circle(r=1.3058987272565075, ctr=Point(x=-1.6983878085276427, y=-2.2910189455221053)), Circle(r=0.5342716756987732, ctr=Point(x=4.948676886704507, y=-1.2467089784975183)), Circle(r=1.0603926633240575, ctr=Point(x=-4.390462974765324, y=0.785568745976325)), Circle(r=0.3448422804513971, ctr=Point(x=-1.6459756952994697, y=2.7608629057950362)), Circle(r=0.8521457455807724, ctr=Point(x=-4.503217369041699, y=3.93796926957188)), Circle(r=0.602438849989669, ctr=Point(x=-2.0703406576157493, y=0.6142570312870999)), Circle(r=0.6453692950682722, ctr=Point(x=-0.14802220452893144, y=4.08189682338989)), Circle(r=0.6983361689325062, ctr=Point(x=0.09362196694661651, y=-1.0953438275586391)), Circle(r=1.880331563921456, ctr=Point(x=0.23481661751521776, y=-4.09217120864087)), Circle(r=0.5766225363413416, ctr=Point(x=3.149434524126505, y=-4.639582956406762)), Circle(r=0.6177559628867022, ctr=Point(x=-1.6758918144661683, y=-0.7954935787503492)), Circle(r=0.7347952666955615, ctr=Point(x=-3.1907522890427575, y=0.7048509241855683)), Circle(r=1.2795003337464894, ctr=Point(x=-1.777244415863577, y=2.936422879898364)), Circle(r=0.9181024765780231, ctr=Point(x=4.212544425778317, y=-1.953546993038261)), Circle(r=1.7681384709020282, ctr=Point(x=-1.3702722387909405, y=-1.7013020424154368)), Circle(r=0.5420789771729688, ctr=Point(x=4.063803796292818, y=-3.7159871611415065)), Circle(r=1.3863651881788939, ctr=Point(x=0.7685002210812408, y=-3.994230705171357)), Circle(r=0.5739750223225826, ctr=Point(x=0.08779554290638258, y=4.879912451441914)), Circle(r=1.2019825386919343, ctr=Point(x=-4.206623233886995, y=-1.1617382464768689))] circle_dt = np.dtype('float,float,float') circle_dt.names = ['x', 'y', 'r'] np_c = np.array([(x.ctr.x, x.ctr.y, x.r) for x in c], dtype=circle_dt) lp1 = LineProfiler() loop_wrapper = lp1(loop) loop_wrapper(a, b, c) lp1.print_stats() lp2 = LineProfiler() vectorized_wrapper = lp2(vectorized) vectorized_wrapper(a, b, np_c) lp2.print_stats() One implementation is regular for loop implementation, and the other is vectorized implementation with numpy. 
From my small knowledge of vectorization, I would have guessed that the vectorized function would yield better result, but as you can see below that is not the case: Total time: 4.36e-05 s Function: loop at line 31 Line # Hits Time Per Hit % Time Line Contents ============================================================== 31 def loop(p1: Point, p2: Point, circles: list[Circle]): 32 1 9.0 9.0 2.1 m = (p1.y - p2.y) / (p1.x - p2.x) 33 1 5.0 5.0 1.1 n = p1.y - m * p1.x 34 35 1 19.0 19.0 4.4 max_x = max(p1.x, p2.x) 36 1 5.0 5.0 1.1 min_x = min(p1.x, p2.x) 37 38 6 30.0 5.0 6.9 for circle in circles: 39 6 73.0 12.2 16.7 if sqrt((circle.ctr.x - p1.x) ** 2 + (circle.ctr.y - p1.y) ** 2) < circle.r \ 40 6 62.0 10.3 14.2 or sqrt((circle.ctr.x - p2.x) ** 2 + (circle.ctr.y - p2.y) ** 2) < circle.r: 41 return False 42 43 6 29.0 4.8 6.7 a = m ** 2 + 1 44 6 32.0 5.3 7.3 b = 2 * (m * n - m * circle.ctr.y - circle.ctr.x) 45 6 82.0 13.7 18.8 c = circle.ctr.x ** 2 + circle.ctr.y ** 2 + n ** 2 - circle.r ** 2 - 2 * n * circle.ctr.y 46 47 # compute the intersection points 48 6 33.0 5.5 7.6 discriminant = b ** 2 - 4 * a * c 49 5 11.0 2.2 2.5 if discriminant <= 0: 50 # no real roots, the line does not intersect the circle 51 5 22.0 4.4 5.0 continue 52 53 # two real roots, the line intersects the circle at two points 54 1 7.0 7.0 1.6 x1 = (-b + sqrt(discriminant)) / (2 * a) 55 1 4.0 4.0 0.9 x2 = (-b - sqrt(discriminant)) / (2 * a) 56 57 # check if one point in range 58 1 5.0 5.0 1.1 first = min_x < x1 < max_x 59 1 3.0 3.0 0.7 second = min_x < x2 < max_x 60 1 2.0 2.0 0.5 if first and second: 61 1 3.0 3.0 0.7 return False 62 63 return True Total time: 0.0001534 s Function: vectorized at line 66 Line # Hits Time Per Hit % Time Line Contents ============================================================== 66 def vectorized(p1: Point, p2: Point, circles): 67 1 10.0 10.0 0.7 m = (p1.y - p2.y) / (p1.x - p2.x) 68 1 5.0 5.0 0.3 n = p1.y - m * p1.x 69 70 1 7.0 7.0 0.5 max_x = max(p1.x, p2.x) 71 1 4.0 4.0 0.3 min_x = min(p1.x, p2.x) 72 73 1 10.0 10.0 0.7 circle_ctr_x = circles['x'] 74 1 3.0 3.0 0.2 circle_ctr_y = circles['y'] 75 1 3.0 3.0 0.2 circle_radius = circles['r'] 76 77 # Pt 1 inside circle 78 1 652.0 652.0 42.5 if np.any(np.sqrt((circle_ctr_x - p1.x) ** 2 + (circle_ctr_y - p1.y) ** 2) < circle_radius): 79 return False 80 # Pt 2 inside circle 81 1 161.0 161.0 10.5 if np.any(np.sqrt((circle_ctr_x - p2.x) ** 2 + (circle_ctr_y - p2.y) ** 2) < circle_radius): 82 return False 83 # Line intersects with circle in range 84 1 13.0 13.0 0.8 a = m ** 2 + 1 85 1 120.0 120.0 7.8 b = 2 * (m * n - m * circle_ctr_y - circle_ctr_x) 86 1 77.0 77.0 5.0 c = circle_ctr_x ** 2 + circle_ctr_y ** 2 + n ** 2 - circle_radius ** 2 - 2 * n * circle_ctr_y 87 88 # compute the intersection points 89 1 25.0 25.0 1.6 discriminant = b**2 - 4*a*c 90 1 46.0 46.0 3.0 discriminant_bigger_than_zero = discriminant > 0 91 1 56.0 56.0 3.7 discriminant = discriminant[discriminant_bigger_than_zero] 92 93 1 6.0 6.0 0.4 if discriminant.size == 0: 94 return True 95 96 1 12.0 12.0 0.8 b = b[discriminant_bigger_than_zero] 97 98 # two real roots, the line intersects the circle at two points 99 1 77.0 77.0 5.0 x1 = (-b + np.sqrt(discriminant)) / (2 * a) 100 1 28.0 28.0 1.8 x2 = (-b - np.sqrt(discriminant)) / (2 * a) 101 102 # check if both points in range 103 1 96.0 96.0 6.3 in_range = (min_x <= x1) & (x1 <= max_x) & (min_x <= x2) & (x2 <= max_x) 104 1 123.0 123.0 8.0 return not np.any(in_range) For some reason the non vectorized function runs faster. 
My simple guess is that it is because the vectorized function runs over the whole array every time, while the non-vectorized one stops in the middle when it finds a circle intersection. So my questions are: Is there a numpy function which doesn't iterate over the whole array but stops when the results are false? What is the reason the vectorized function takes longer to run? Any general optimization suggestions would be appreciated | Is there a numpy function which doesn't iterate over the whole array but stops when the results are false?
No. This is a long-standing feature requested by Numpy users, but it will certainly never be added to Numpy. For simple cases, like returning the first index of a boolean array, Numpy could implement that, but the thing is the boolean array needs to be fully created in the first place. In order to support the general case, Numpy would have to merge multiple operations and do some kind of lazy computation. This basically means rewriting Numpy completely from scratch for an efficient implementation (which is a huge amount of work). If you need to do that, there are two main solutions: operating on chunks so the computation can stop early (while computing up to len(chunk) additional items); or writing your own fast compiled implementation using Numba or Cython (with views).
What is the reason the vectorized function takes longer to run?
The input is pretty small and Numpy is not optimized for small arrays. Indeed, each call to a Numpy function typically takes 0.4-4 us on a mainstream processor (like my i5-9600KF). This is because Numpy has many checks to do, new arrays to allocate, generic internal iterators to build, etc. As a result, a line like np.any(np.sqrt((circle_ctr_x - p1.x) ** 2 + (circle_ctr_y - p1.y) ** 2) < circle_radius) doing 8 Numpy calls and creating 7 temporary arrays takes about 8 us on my machine. The second similar line takes the same time. Together, they are already slower than the non-vectorized version. As pointed out in the question and the comments, the non-vectorized function can stop early, and this can also help the non-vectorized version to be even faster than the other.
Any general optimization suggestions would be appreciated
Regarding your code, using Numba (with plain loops and Numpy arrays) is certainly a good idea for performance. Note the first call can be slower due to the compilation time (you can provide the signature to do this at loading time or just use an AOT compiler, including Cython). Note that arrays of structures are generally not efficient since they prevent the efficient use of SIMD instructions. They are also certainly not efficiently computed by Numpy, since the datatype is dynamically created and the Numpy code is already compiled ahead of time (so it cannot implement functions for this specific datatype and has to use a generic dynamic operation on each item of the array, which is significantly slower than operations on basic datatypes). Please consider using a structure of arrays. For more information please read this post and more generally this post. | 3 | 3 |
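A hedged sketch of the Numba suggestion from the answer (it assumes the circle data is split into separate float64 arrays cx, cy, r — the structure-of-arrays layout the answer recommends — and that Numba is installed); it keeps the loop's early exit while avoiding temporary arrays:
from math import sqrt
import numpy as np
from numba import njit

@njit
def line_clear(p1x, p1y, p2x, p2y, cx, cy, r):
    m = (p1y - p2y) / (p1x - p2x)
    n = p1y - m * p1x
    lo, hi = min(p1x, p2x), max(p1x, p2x)
    for i in range(cx.size):
        # endpoint inside a circle -> early exit, no temporaries allocated
        if sqrt((cx[i] - p1x) ** 2 + (cy[i] - p1y) ** 2) < r[i]:
            return False
        if sqrt((cx[i] - p2x) ** 2 + (cy[i] - p2y) ** 2) < r[i]:
            return False
        a = m * m + 1.0
        b = 2.0 * (m * n - m * cy[i] - cx[i])
        c = cx[i] ** 2 + cy[i] ** 2 + n * n - r[i] ** 2 - 2.0 * n * cy[i]
        disc = b * b - 4.0 * a * c
        if disc <= 0.0:
            continue  # line misses this circle
        x1 = (-b + sqrt(disc)) / (2.0 * a)
        x2 = (-b - sqrt(disc)) / (2.0 * a)
        if lo <= x1 <= hi and lo <= x2 <= hi:
            return False
    return True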
74,755,994 | 2022-12-10 | https://stackoverflow.com/questions/74755994/closest-true-value-to-zero-in-python | A long time ago I read about the closest true value to zero, like zero = 0.000000001, something like that. In the article they mentioned this value in Python and how to achieve it. Does anyone know about this? I have looked it up here on SO, but all the answers are about the closest value to zero in an array, and that's not my point. | The minimum positive denormalized value in Python 3.9 and up is given by math.ulp(0.0), which returns 5e-324, or 4.940656e-324 when printed with format(math.ulp(0.0), '.7'). | 4 | 2 |
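A short hedged illustration of the values involved (standard math and sys members):
import math
import sys

print(math.ulp(0.0))       # 5e-324, the smallest positive denormal (Python 3.9+)
print(sys.float_info.min)  # 2.2250738585072014e-308, the smallest positive *normal* float
print(math.ulp(0.0) / 2)   # 0.0 -- halving it underflows to zero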
74,747,965 | 2022-12-9 | https://stackoverflow.com/questions/74747965/loki-throws-unmarshalerdecoder-error-for-json-payload | I get this error loghttp.PushRequest.Streams: []*loghttp.Stream: unmarshalerDecoder: Value looks like Number/Boolean/None, but can't find its end: ',' or '}' symbol, error found in #10 byte of ...| ] } ] }|..., bigger context ...| } } ] } ] }|... when uploading the json { "streams":[ { "stream":{ "application":"fabric-sso", "job":"aws-lambda", "level":"info", "namespace":"oauth" }, "values":[ { "ts":"2022-12-10T01:36:44.971933+05:30", "message":{ "type":"h2", "timestamp":"2022-09-17T11:00:03.828554Z", "alb":"app/fabric-sso/sdjsjhdjhdshksdhf", "client_ip":"999.999.999.999", "client_port":"7392", "backend_ip":"", "backend_port":"", "request_processing_time":"-1", "backend_processing_time":"-1", "response_processing_time":"-1", "alb_status_code":"404", "backend_status_code":"-", "received_bytes":"706", "sent_bytes":"67", "request_verb":"GET", "request_url":"https://foozy.dev.gabbar.com:443/gabbar", "request_proto":"HTTP/2.0", "user_agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36", "ssl_cipher":"ECDHE-RSA-AES128-GCM-SHA256", "ssl_protocol":"TLSv1.2", "target_group_arn":"-", "trace_id":"Root=1-6325a8b3-1980ccbd244b83c35ec5b543", "domain_name":"foozy.dev.gabbar.com", "chosen_cert_arn":"arn:aws:acm:us-east-1:23232323:certificate/0000", "matched_rule_priority":"0", "request_creation_time":"2022-09-17T11:00:03.818000Z", "actions_executed":"waf,fixed-response", "redirect_url":"-", "lambda_error_reason":"-", "target_port_list":"-", "target_status_code_list":"-", "classification":"-", "classification_reason":"-", "application":"fabric-sso", "env":"dev" } } ] } ] } alb_log_data is a dictionary, I create a list of dictionaries with timestamp and the actual alb log message (as json) and post it like so def build_loki_request_payload(alb_log_data): entries = [] for entry in alb_log_data: curr_datetime = datetime.datetime.now(pytz.timezone('Asia/Kolkata')) curr_datetime = curr_datetime.isoformat('T') entries.append({'ts': curr_datetime, 'message': entry}) payload = { 'streams': [{ "stream": { "application": alb_log_data[0]['application'], "job": "aws-lambda", "level": "info", "namespace": "oauth" }, "values": entries }] } payload = json.dumps(payload) logger.debug('Created Payload %s', payload) return payload | I missed the right format, as in the example in Loki documentation https://grafana.com/docs/loki/latest/api/#push-log-entries-to-loki { "streams": [ { "stream": { "label": "value" }, "values": [ [ "<unix epoch in nanoseconds>", "<log line>" ], [ "<unix epoch in nanoseconds>", "<log line>" ] ] } ] } changed the method to def build_loki_request_payload(alb_log_data): entries = [] for entry in alb_log_data: entries.append([time.time_ns(), json.dumps(entry)]) payload = { 'streams': [{ "stream": { "application": alb_log_data[0]['application'], "job": "aws-lambda", "level": "info", "namespace": "oauth" }, "values": entries }] } payload = json.dumps(payload) logger.debug('Created Payload %s', payload) return payload which now gives { "streams":[ { "stream":{ "application":"fabric-sso", "job":"aws-lambda", "level":"info", "namespace":"oauth" }, "values":[ [ "1670645653278432000", "{\"type\": \"h2\", \"timestamp\": \"2022-09-17T11:00:03.828554Z\", \"alb\": \"app/fabric-sso/123456\", \"client_ip\": \"230.82.1.129\", \"client_port\": \"7392\", \"backend_ip\": \"\", \"backend_port\": \"\", \"request_processing_time\": \"-1\", 
\"backend_processing_time\": \"-1\", \"response_processing_time\": \"-1\", \"alb_status_code\": \"404\", \"backend_status_code\": \"-\", \"received_bytes\": \"706\", \"sent_bytes\": \"67\", \"request_verb\": \"GET\", \"request_url\": \"https://foozy.dev.gabbar.com:443/gabbar\", \"request_proto\": \"HTTP/2.0\", \"user_agent\": \"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36\", \"ssl_cipher\": \"ECDHE-RSA-AES128-GCM-SHA256\", \"ssl_protocol\": \"TLSv1.2\", \"target_group_arn\": \"-\", \"trace_id\": \"Root=1-6325a8b3-1980ccbd244b83c35ec5b543\", \"domain_name\": \"foozy.dev.gabbar.com\", \"chosen_cert_arn\": \"arn:aws:acm:us-east-1:3246283467784628:certificate/814e-d1e61e9e7f9b\", \"matched_rule_priority\": \"0\", \"request_creation_time\": \"2022-09-17T11:00:03.818000Z\", \"actions_executed\": \"waf,fixed-response\", \"redirect_url\": \"-\", \"lambda_error_reason\": \"-\", \"target_port_list\": \"-\", \"target_status_code_list\": \"-\", \"classification\": \"-\", \"classification_reason\": \"-\", \"application\": \"fabric-sso\", \"env\": \"dev\"}" ] ] } ] } | 5 | 3 |
74,748,563 | 2022-12-9 | https://stackoverflow.com/questions/74748563/how-do-i-download-pdf-files-using-pythons-reqests-httpx-module | I'm making a program that downloads PDFs from the internet. Here's an example of the code:
import httpx # <-- This also happens with the requests module

URL = "http://62.182.86.140/main/0/aee7239ffcf7871e1d6687ced1215e22/Markus%20Nix%20-%20Exploring%20Python-Entwickler%20%282005%29.djvu"
r = httpx.get(URL, timeout=20.0).content.decode("ascii")
with open(f"./example.pdf", "w") as f:
    f.write(str(content))
But when I write to a file, none of my PDF viewers (I tried Okular and Zathura) can read it. When I download it using a program like wget, there are no problems. When I compare the two files (one downloaded with Python, the other with wget), everything is encoded, and I can't figure out how to decode it (.decode() doesn't work). | The file is binary data, so don't decode it — write the raw bytes in binary mode:
import httpx

def main(url):
    r = httpx.get(url, timeout=20)
    with open('file.djvu', 'wb') as f:
        f.write(r.content)

main('http://62.182.86.140/main/0/aee7239ffcf7871e1d6687ced1215e22/Markus%20Nix%20-%20Exploring%20Python-Entwickler%20%282005%29.djvu') | 5 | 7 |
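A hedged extension of the accepted answer for large files (httpx.stream and iter_bytes are part of httpx's documented API): stream the body to disk instead of buffering it all in memory:
import httpx

def download(url: str, path: str) -> None:
    with httpx.stream('GET', url, timeout=20) as r:
        r.raise_for_status()  # fail loudly on HTTP errors
        with open(path, 'wb') as f:  # binary mode, as in the answer
            for chunk in r.iter_bytes():
                f.write(chunk)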
74,742,335 | 2022-12-9 | https://stackoverflow.com/questions/74742335/how-to-get-multiprocessing-queues-queue-qsize-on-macos | This is an old issue which suggested workaround does not work. Below is a complete example showing how the suggested approach fails. Uncomment L31 for error. import multiprocessing import os import time from multiprocessing import get_context from multiprocessing.queues import Queue class SharedCounter(object): def __init__(self, n=0): self.count = multiprocessing.Value('i', n) def increment(self, n=1): with self.count.get_lock(): self.count.value += n @property def value(self): return self.count.value class MyQueue(Queue): def __init__(self, *args, **kwargs): super(MyQueue, self).__init__(*args, ctx=get_context(), **kwargs) self.size = SharedCounter(0) def put(self, *args, **kwargs): self.size.increment(1) super(MyQueue, self).put(*args, **kwargs) def get(self, *args, **kwargs): # self.size.increment(-1) # uncomment this for error return super(MyQueue, self).get(*args, **kwargs) def qsize(self): return self.size.value def empty(self): return not self.qsize() def clear(self): while not self.empty(): self.get() def worker(queue): while True: item = queue.get() if item is None: break print(f'[{os.getpid()}]: got {item}') time.sleep(1) if __name__ == '__main__': num_processes = 4 q = MyQueue() pool = multiprocessing.Pool(num_processes, worker, (q,)) for i in range(10): q.put("hello") q.put("world") for i in range(num_processes): q.put(None) q.close() q.join_thread() pool.close() pool.join() For some reason, the newly defined MyQueue forgets about the size attribute. Process SpawnPoolWorker-1: Traceback (most recent call last): File "/usr/local/Cellar/[email protected]/3.11.0/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/local/Cellar/[email protected]/3.11.0/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/local/Cellar/[email protected]/3.11.0/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/pool.py", line 109, in worker initializer(*initargs) File "/Users/user/Library/Application Support/JetBrains/PyCharm2022.3/scratches/scratch.py", line 47, in worker item = queue.get() ^^^^^^^^^^^ File "/Users/user/Library/Application Support/JetBrains/PyCharm2022.3/scratches/scratch.py", line 31, in get self.size.increment(-1) # uncomment this for error ^^^^^^^^^ AttributeError: 'MyQueue' object has no attribute 'size' Process SpawnPoolWorker-2: Traceback (most recent call last): File "/usr/local/Cellar/[email protected]/3.11.0/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/local/Cellar/[email protected]/3.11.0/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/local/Cellar/[email protected]/3.11.0/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/pool.py", line 109, in worker initializer(*initargs) File "/Users/user/Library/Application Support/JetBrains/PyCharm2022.3/scratches/scratch.py", line 47, in worker item = queue.get() ^^^^^^^^^^^ File "/Users/user/Library/Application Support/JetBrains/PyCharm2022.3/scratches/scratch.py", line 31, in get self.size.increment(-1) # uncomment this for error ^^^^^^^^^ AttributeError: 'MyQueue' object has no attribute 
'size'
Process SpawnPoolWorker-4:
Process SpawnPoolWorker-3:
Traceback (most recent call last):
Traceback (most recent call last):
  File "/usr/local/Cellar/python@3.11/3.11.0/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/local/Cellar/python@3.11/3.11.0/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/Cellar/python@3.11/3.11.0/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/pool.py", line 109, in worker
    initializer(*initargs)
  File "/Users/user/Library/Application Support/JetBrains/PyCharm2022.3/scratches/scratch.py", line 47, in worker
    item = queue.get()
           ^^^^^^^^^^^
  File "/Users/user/Library/Application Support/JetBrains/PyCharm2022.3/scratches/scratch.py", line 31, in get
    self.size.increment(-1)  # uncomment this for error
    ^^^^^^^^^
AttributeError: 'MyQueue' object has no attribute 'size'
  File "/usr/local/Cellar/python@3.11/3.11.0/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/local/Cellar/python@3.11/3.11.0/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/Cellar/python@3.11/3.11.0/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/pool.py", line 109, in worker
    initializer(*initargs)
  File "/Users/user/Library/Application Support/JetBrains/PyCharm2022.3/scratches/scratch.py", line 47, in worker
    item = queue.get()
           ^^^^^^^^^^^
  File "/Users/user/Library/Application Support/JetBrains/PyCharm2022.3/scratches/scratch.py", line 31, in get
    self.size.increment(-1)  # uncomment this for error
    ^^^^^^^^^
AttributeError: 'MyQueue' object has no attribute 'size' | Well, you didn't override __setstate__ and __getstate__ to include your variable; these are what pickle uses to control the serialization (see "Handling Stateful Objects" in the pickle documentation), so you should override them to add your variable to what's being serialized.
import multiprocessing import os import time from multiprocessing import get_context from multiprocessing.queues import Queue class SharedCounter(object): def __init__(self, n=0): self.count = multiprocessing.Value('i', n) def increment(self, n=1): with self.count.get_lock(): self.count.value += n @property def value(self): return self.count.value class MyQueue(Queue): def __init__(self, *args, **kwargs): super(MyQueue, self).__init__(*args, ctx=get_context(), **kwargs) self.size = SharedCounter(0) def __getstate__(self): return (super(MyQueue, self).__getstate__(),self.size) def __setstate__(self, state): super(MyQueue, self).__setstate__(state[0]) self.size = state[1] def put(self, *args, **kwargs): self.size.increment(1) super(MyQueue, self).put(*args, **kwargs) def get(self, *args, **kwargs): self.size.increment(-1) # uncomment this for error return super(MyQueue, self).get(*args, **kwargs) def qsize(self): return self.size.value def empty(self): return not self.qsize() def clear(self): while not self.empty(): self.get() def worker(queue): while True: item = queue.get() if item is None: break print(f'[{os.getpid()}]: got {item}') time.sleep(1) if __name__ == '__main__': num_processes = 4 q = MyQueue() pool = multiprocessing.Pool(num_processes, initializer=worker, initargs=(q,)) for i in range(10): q.put("hello") q.put("world") for i in range(num_processes): q.put(None) q.close() q.join_thread() pool.close() pool.join() note that in python 3 we don't need to use super(MyQueue, self), as super() would suffice, and will make it easier to rename your class in the future and other portability and refactoring benefits, so consider swapping any super(x,y) with just super() | 5 | 3 |
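A standalone hedged illustration of the __getstate__/__setstate__ contract the fix relies on (the class and attribute names are invented for the example):
import pickle

class Wrapped:
    def __init__(self):
        self.base = 'base state'
        self.extra = 'extra state'

    def __getstate__(self):
        # whatever is returned here is exactly what pickle stores
        return (self.base, self.extra)

    def __setstate__(self, state):
        # and this is how the new object is rebuilt from it
        self.base, self.extra = state

w = pickle.loads(pickle.dumps(Wrapped()))
print(w.extra)  # 'extra state' survives the round trip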
74,738,922 | 2022-12-9 | https://stackoverflow.com/questions/74738922/finding-permutation-matrix-with-numpy | I am looking for the correct permutation matrix that would take matrix a and turn it into matrix b given a = np.array([[1,4,7,-2],[3,0,-2,-1],[-4,2,1,0],[-8,-3,-1,2]]) b = np.array([[-4,2,1,0],[3,0,-2,-1],[-8,-3,-1,2],[1,4,7,-2]]) I tried x = np.linalg.solve(a,b) However, I know this is incorrect and it should be np.array([[0,0,1,0],[0,1,0,0],[0,0,0,1],[1,0,0,0]]) What numpy code would deliver this matrix from the other two? | Generally, if you have some PA = B and you want P then you need to solve the equation for P. Matrix multiplication is not commutative, so you have to right multiply both sides by the inverse of A. With numpy, the function to get the inverse of a matrix is np.linalg.inv(). Using the matrix multiplication operator @, you can right-multiply with b to get P, taking note that you will end up with floating point precision errors: In [4]: b @ np.linalg.inv(a) Out[4]: array([[-1.38777878e-16, -3.05311332e-16, 1.00000000e+00, -1.31838984e-16], [ 0.00000000e+00, 1.00000000e+00, 6.93889390e-17, 0.00000000e+00], [ 0.00000000e+00, 0.00000000e+00, -1.11022302e-16, 1.00000000e+00], [ 1.00000000e+00, -4.44089210e-16, 5.55111512e-17, 0.00000000e+00]]) As @Mad Physicist points out, you can compare this matrix with > 0.5 to convert it to a boolean matrix: In [7]: bool_P = (b @ np.linalg.inv(a)) > 0.5 In [8]: bool_P @ a Out[8]: array([[-4, 2, 1, 0], [ 3, 0, -2, -1], [-8, -3, -1, 2], [ 1, 4, 7, -2]]) In [9]: bool_P @ a == b # our original equation, PA = B Out[9]: array([[ True, True, True, True], [ True, True, True, True], [ True, True, True, True], [ True, True, True, True]]) | 4 | 4 |
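A hedged alternative to inverting a (valid under the assumption, true here, that every row of b is an exact copy of some row of a): match rows directly and index the identity matrix, avoiding floating-point error entirely:
import numpy as np

a = np.array([[1, 4, 7, -2], [3, 0, -2, -1], [-4, 2, 1, 0], [-8, -3, -1, 2]])
b = np.array([[-4, 2, 1, 0], [3, 0, -2, -1], [-8, -3, -1, 2], [1, 4, 7, -2]])

# for each row of b, find the index of the matching row in a
perm = np.array([np.flatnonzero((a == row).all(axis=1))[0] for row in b])
P = np.eye(len(a), dtype=int)[perm]  # reorder identity rows by the permutation
print(P)
assert (P @ a == b).all()  # PA = B holds exactly, with an integer P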
74,734,461 | 2022-12-8 | https://stackoverflow.com/questions/74734461/defer-method-returning-unknown-interaction-error | Issue My slash commands return 404 Not Found (error code: 10062): Unknown interaction when I run them. I do have them deferred as: @bot.tree.command(name="evaluate") async def evaluate(interaction: discord.Interaction): await interaction.response.defer(ephemeral=True) await asyncio.sleep(10) await interaction.followup.send("Command works") for example. However, when I run my commands, it raises the following error: File "/home/container/.local/lib/python3.11/site-packages/discord/app_commands/commands.py", line 862, in _do_call return await self._callback(interaction, **params) # type: ignore ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/container/bot.py", line 282, in evaluate await interaction.response.defer(ephemeral=False, thinking=True) File "/home/container/.local/lib/python3.11/site-packages/discord/interactions.py", line 636, in defer await adapter.create_interaction_response( File "/home/container/.local/lib/python3.11/site-packages/discord/webhook/async_.py", line 218, in request raise NotFound(response, data) discord.errors.NotFound: 404 Not Found (error code: 10062): Unknown interaction This happens mostly in one command, but at the very end. The entire command works fine but the bot doesn't send the final confirmation response await interaction.followup.send() and instead throws the above error. (Edited paragraph) Reason for posting: I don't really think I'm making an error with the code itself, which is why I'd like some insight as to why this is or may be happening. I also posted this in case someone else in the future encounters the same issue, since I couldn't find similar or same questions asked here on SO. | Posted this on the discord.py official Discord Server and got an answer. If a 404 Not Found error appears even after deferring, it means that it took too long for the defer() to execute. Or in other words, your await interaction.response.defer() is running after 3 seconds have passed and the API request has already been terminated. Credit: SolsticeShard on the discord.py Discord. The solution was to run defer() much earlier up the code block. That ensures that the interaction is deferred before the 3s time limit has passed. | 3 | 3 |
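A hedged sketch of the fix described above (run_expensive_evaluation is a hypothetical stand-in for whatever slow work the real command does): the defer must be the first await in the callback so it runs inside Discord's 3-second window:
import asyncio
import discord
from discord.ext import commands

bot = commands.Bot(command_prefix='!', intents=discord.Intents.default())

async def run_expensive_evaluation():  # hypothetical stand-in for the slow work
    await asyncio.sleep(10)
    return 'ok'

@bot.tree.command(name='evaluate')
async def evaluate(interaction: discord.Interaction):
    await interaction.response.defer(ephemeral=True)  # first thing, before any slow work
    result = await run_expensive_evaluation()
    await interaction.followup.send(f'Command works: {result}')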
74,743,525 | 2022-12-9 | https://stackoverflow.com/questions/74743525/how-to-check-input-arguments-in-a-python-script-with-cli | I'm writing a small script to learn Python. The script prints a chess tournament table for N players. It has a simple CLI with a single argument N. Now I'm trying the following approach:
import argparse

def parse_args(argv: list[str] | None = None) -> int:
    parser = argparse.ArgumentParser(description="Tournament tables")
    parser.add_argument('N', help="number of players (2 at least)", type=int)
    args = parser.parse_args(argv)
    if args.N < 2:
        parser.error("N must be 2 at least")
    return args.N

def main(n: int) -> None:
    print(F"Here will be the table for {n} players")

if __name__ == '__main__':
    main(parse_args())
But this seems to have a flaw. The function main doesn't check n for invalid input (as that's the job of the CLI parser). So if somebody calls main directly from another module (a tester, for example), he may call it with, let's say, 0, and the program most likely crashes. How should I properly handle this issue? I'm considering several possible ways, but I'm not sure which is best.
1. Add proper value checking and error handling to main. This option looks ugly to me, as it violates the DRY principle and forces main to double the job of the CLI.
2. Just document that main must take only n >= 2, and that its behaviour is unpredictable otherwise. Possibly combined with adding an assertion check to main, like this: assert n >= 2, "n must be 2 or more"
3. Perhaps such a function should not be external at all? So the whole chosen idiom is wrong and the script's entry point should be rewritten another way.
4. ??? | You could have main do all the checking and raise ArgumentError if something is amiss. Then catch that exception and forward it to the parser for display. Something along these lines:
import argparse

def run_with_args(argv: list[str] | None = None) -> None:
    parser = argparse.ArgumentParser(description="Tournament tables")
    parser.add_argument('N', help="number of players (2 at least)", type=int)
    args = parser.parse_args(argv)
    try:
        main(args.N)
    except argparse.ArgumentError as ex:
        parser.error(str(ex))

def main(n: int) -> None:
    if n < 2:
        # ArgumentError takes (argument, message); pass None for the argument
        raise argparse.ArgumentError(None, "N must be 2 at least")
    print(F"Here will be the table for {n} players")

if __name__ == '__main__':
    run_with_args()
If you don't want to expose argparse.ArgumentError to library users of main, you can also create a custom exception type instead of it. | 4 | 3 |
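A hedged sketch of the custom-exception variant mentioned in the answer's last sentence (InvalidPlayerCount is an invented name), so library callers of main never see argparse types:
import argparse

class InvalidPlayerCount(ValueError):
    pass

def main(n: int) -> None:
    if n < 2:
        raise InvalidPlayerCount('N must be 2 at least')
    print(f'Here will be the table for {n} players')

def run_with_args(argv: list[str] | None = None) -> None:
    parser = argparse.ArgumentParser(description='Tournament tables')
    parser.add_argument('N', help='number of players (2 at least)', type=int)
    args = parser.parse_args(argv)
    try:
        main(args.N)
    except InvalidPlayerCount as ex:
        parser.error(str(ex))  # the CLI still reports it the argparse way

if __name__ == '__main__':
    run_with_args()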