Dataset columns (type, min to max value or string length):
question_id (int64): 59.5M to 79.4M
creation_date (string, lengths 8 to 10)
link (string, lengths 60 to 163)
question (string, lengths 53 to 28.9k)
accepted_answer (string, lengths 26 to 29.3k)
question_vote (int64): 1 to 410
answer_vote (int64): -9 to 482
76,960,371
2023-8-23
https://stackoverflow.com/questions/76960371/executing-several-python-files-on-the-same-interactive-prompt-on-vs-code
I'm transitioning from Spyder to VS Code, and I'm having an issue executing ad-hoc code on an interactive session. I first save this to a tmp1.py and run the selection on an interactive window: import pandas as pd df = pd.DataFrame({'a': [1,2,3,4], 'b': [1,1,2,3]}) Then on tmp2.py I just write and run: print(df) It opens a new interactive window and complains NameError: name 'df' is not defined. How can I configure it to run on the currently active window / session? Note that I'm not using a Jupyter notebook but IPython. At some point I'll want to run code in parallel on several terminals, but at first I'd like to get this simple set-up working.
For some reason my "Jupyter > Interactive Window: Creation Mode" setting was "perfile", although "multiple" is supposed to be the default. After changing it to "multiple", everything works as expected. I can also open several interactive windows, and the selected code gets evaluated in the active one.
3
5
76,973,376
2023-8-24
https://stackoverflow.com/questions/76973376/how-to-use-assertraises-in-table-driven-tests-where-some-tests-raise-and-others
How can I avoid calling the function I'm testing in two different places when writing table-driven tests where some of the tests should raise but others should not? This is what I want to do but it fails passing None to assertRaises: tests = [ (0, None), (1, None), (-1, TooFewException), (99, None), (100, TooManyException), ] for n, exc in tests: with self.assertRaises(exc): results = my_code(n) assert len(results) == n The best I have come up with is this but the redundant call to my_code is bothering me: tests = [ (0, None), (1, None), (-1, TooFewException), (99, None), (100, TooManyException), ] for n, exc in tests: if exc is not None: with self.assertRaises(exc): my_code(n) else: results = my_code(n) assert len(results) == n After adding a helper func on our base test case using the answer from @AmuroRay this is now: tests = [ (0, None), (1, None), (-1, TooFewException), (99, None), (100, TooManyException), ] for n, exc in tests: with self.assertRaisesUnlessNone(exc): results = my_code(n) assert len(results) == n
contextlib has nullcontext() which can accomplish this. It's not really using fewer lines (without an ugly one-liner branch), but it eliminates the second user-code call: from contextlib import nullcontext for n, exc in tests: if exc is None: cm = nullcontext() else: cm = self.assertRaises(exc) with cm: results = my_code(n) assert len(results) == n
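As a side note, the assertRaisesUnlessNone helper mentioned in the question is not shown in the original post; a minimal sketch of how such a helper could be written on a base test case, assuming it should simply wrap the nullcontext idea above, might look like this:

import unittest
from contextlib import nullcontext

class BaseTestCase(unittest.TestCase):
    def assertRaisesUnlessNone(self, exc, *args, **kwargs):
        # Behave like assertRaises(exc) for a real exception class,
        # and like a do-nothing context manager when exc is None.
        if exc is None:
            return nullcontext()
        return self.assertRaises(exc, *args, **kwargs)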
2
3
76,967,606
2023-8-24
https://stackoverflow.com/questions/76967606/how-to-run-parallel-processes-reading-from-the-same-variable-in-python-efficient
I have a Python script to process one single file (~100 Gb) with many simple statistical operations that do not depend on each other. So I first load the file, then I use Joblib to run the statistical operations on the loaded data in parallel. I can see from htop that all CPU cores are loaded to 100% but most of the time the load is red, which (according to htop) indicates kernel operations. To give more details of my implementation, I first load the file into one single numpy array defined as a global variable. Then I define the statistical operations as a function that uses that global variable, and finally with Joblib I run the functions. I think the red CPU load happens because all the parallel operations use the same variable. So I am asking if there is a better way to do this. Below is a simplified minimal example of my implementation. myVariable = np.load("myFile.npy") myParameters = np.arange(100) def myFunction(myParameter): return np.mean(myVariable**myParameter) Parallel(n_jobs=-1)(delayed(myFunction)(p) for p in myParameters) Please note that I have posted a similar question that I am still working on. In the other question the problem came from reading many files in parallel, and in that case the CPUs were not loaded and were always in D state. In this case only one file is read and many processes are executed using the loaded data, and in this case the CPUs are loaded but they are shown in red.
TL;DR: there are several issues in this code making it very inefficient. In short, it is bound by system overheads like page faults, cache misses, cache-line write allocates, useless memory writes, and even possibly memory swapping (depending on the amount of available RAM). Under the hood: To understand why this code is inefficient, what the overheads are and how to fix them, we first need to understand what each line does and how modern systems actually work. First of all, np.load("myFile.npy") allocates a huge array of 100 Gb (that is 12.5 GB) and fills its content from a storage device. This part can be particularly slow using an HDD (at least a few minutes), but it should be relatively fast with an SSD (a few seconds at least). The speed of this part depends on the hardware, the OS modules/drivers, the OS itself and the way Numpy reads data. Regarding myVariable**myParameter, it first allocates a new temporary array of the same size as myVariable. At this point, the OS does not reserve the space in RAM. It only reserves the space in virtual memory. More specifically, a group of fixed-size virtual memory chunks called pages (typically 4KB). This array is then filled with the result of myVariable[i]**myParameter for every i. When a value is written in a page for the first time (i.e. first touch), the OS maps the virtual memory page to a physical memory page in RAM. This operation is called a page fault and it is very expensive. In addition, big temporary arrays are slow to write in RAM on x86-64 systems because of the way CPU caches work (more specifically the cache coherence protocol). CPU caches are split into small memory chunks called cache lines. x86-64 CPUs (as well as some other architectures) require a cache line to be read if a part of it is written, which is the case here. This is called a cache-line write allocation. This means half the bandwidth is wasted due to reads when a big temporary array is written. np.mean then reads the temporary array recently written to RAM to produce a scalar value. Parallel(n_jobs=-1)(delayed(myFunction)(p) for p in myParameters) creates N processes, executing the target function. N is typically the number of (physical) cores, or even SMT threads (also called logical cores) on some machines. Global variables are serialised (using pickle), which is very slow due to interprocess communication (and pickling). However, based on this answer, Numpy arrays are shared between processes thanks to virtual memory and a smart implementation of JobLib. Each process creates its own temporary arrays and computes the mean of them. This means the amount of RAM needed is at least N+1 times the size of myVariable. For a machine with 8 cores, this means 12.5*9 = 112.5 GB of RAM, which is a huge amount. If there is not enough memory available, the OS needs to store memory pages in a storage device area called the memory swap, which is generally extremely slow compared to the speed of the RAM. One needs to ensure there is enough memory to do the computation. In fact, one needs to avoid allocating so much memory in the first place if possible. The cache and the RAM are used very inefficiently. Indeed, myVariable is read N times (because of the N processes) from the RAM while it could theoretically be read only once. The temporary arrays cause a lot of expensive cache misses due to the large amount of data written. They can make the computation even slower due to an effect called cache thrashing.
Note that the RAM of modern architectures tends to be so slow that often only a few cores are able to saturate it. For example, 2 cores of my i5-9600KF CPU (a 6-core CPU) can saturate my 40 GiB/s RAM (2 channels of DDR4 RAM running at 3200 MHz). How to write faster code: The first thing to do is to avoid creating huge temporary arrays like the plague. You can perform the computation using small chunks and reuse small temporary arrays for multiple chunks. Alternatively, you can use tools like Numba or Cython to compute data on the fly, preventing the need to create temporary arrays here. This can be done using basic loops operating on scalar items. Loops are not slow with Numba or Cython because they are compiled to native code (if written correctly). This method can be combined with the previous one. This removes the overhead of page faults; the RAM and the CPU caches are used far more efficiently (by not reloading data N times and not writing huge amounts of data), not to mention it allocates far less memory, avoiding memory-swap issues. The benefits are so huge that one process running optimised code can be significantly faster than your current code using N processes. The best strategy to use depends on your exact needs. For example, Numba cannot compile arbitrary Python code: the code mostly needs to avoid modules other than Numpy (for now). Cython is good for making loops faster but it cannot optimize Numpy function calls. Chunk-based code using Numpy is often not as fast as using Numba or Cython because of the overheads of Numpy. However, it is sometimes more flexible in practice. Besides, it might also be fast enough for your needs. In practice, Numba does not support compiling functions that use a large global variable efficiently (like in your case). But this is not really a problem. We can add an indirection layer to do that. Here is an example: import numba as nb # Caching is used to avoid the function being recompiled in each process @nb.njit(cache=True) def myFunction_numba_seq(myParameter, myVariable): sum = 0.0 for i in range(myVariable.size): sum += myVariable[i] ** myParameter return sum / myVariable.size def myFunction(myParameter): return myFunction_numba_seq(myParameter, myVariable) # [...] Same code using Parallel(n_jobs=-1) here That being said, using global variables is generally strongly discouraged in software engineering (e.g. for optimizations and for maintainability). Besides, Numba and Cython support multi-threading (without GIL issues). Thus, you can just use Numba without joblib in the following way: import numba as nb @nb.njit(parallel=True) def myFunction_numba_par(myParameters, myVariable): sums = np.zeros(myParameters.size) # Parallel loop. for i in nb.prange(myVariable.size): for j in range(myParameters.size): sums[j] += myVariable[i] ** myParameters[j] return sums / myVariable.size myVariable = np.load("myFile.npy") myParameters = np.arange(100) result = myFunction_numba_par(myParameters, myVariable) Note that each item of myVariable (read once from RAM) is reused to compute all the means simultaneously. This means the code only needs to read myVariable once. Note that this code can be improved using chunks to make it faster. Indeed, compilers can generate fast SIMD instructions on some platforms. There are also fast math libraries to compute a**b often more efficiently than the default math library provided by the system. Also please note that summing very large arrays tends to cause accuracy issues.
Chunks can help to make results more accurate here. There are also accurate algorithms for computing a mean (e.g. Kahan summation). Writing a fast, accurate mean is actually not trivial. Numpy does it quite well, so it can be simpler with chunks.
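For reference, here is a minimal chunk-based NumPy sketch of that idea (my addition, not from the original answer), assuming myVariable is a 1-D array stored in myFile.npy; the file is memory-mapped so only one small chunk is materialised at a time:

import numpy as np

myVariable = np.load("myFile.npy", mmap_mode="r")  # avoid loading the whole file at once
myParameters = np.arange(100)

chunk = 1_000_000  # tune so a chunk fits comfortably in cache / RAM
sums = np.zeros(myParameters.size)
n = myVariable.size

for start in range(0, n, chunk):
    # Materialise only this chunk; the temporary produced by block ** p stays small.
    block = np.asarray(myVariable[start:start + chunk], dtype=np.float64)
    for j, p in enumerate(myParameters):
        sums[j] += np.sum(block ** p)

means = sums / n  # equivalent to np.mean(myVariable ** p) for each p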
2
3
76,964,381
2023-8-23
https://stackoverflow.com/questions/76964381/format-large-number-output-in-pythons-tqdm-module-for-readability
Is there any way we can format a big number in tqdm's progress bar for better readability? For example, 1000000row/s should be 1,000,000rows/s. I know about the unit_scale option, but it formats a number like 1.21M rows/s, which is not what I am trying to achieve. Thanks!
A little bit of reverse-engineering how tqdm handles it: from time import sleep from tqdm import tqdm tqdm.format_sizeof = lambda x, divisor=None: f"{x:,}" if divisor else f"{x:5.2f}" with tqdm(total=1_000_000, unit_scale=True) as t: for _ in range(1_000_000): sleep(0.00001) t.update() Prints: 8%|████████████▊ | 78,888/1,000,000 [00:04<00:56, 0.00it/s]
4
5
76,972,113
2023-8-24
https://stackoverflow.com/questions/76972113/expand-a-nested-list-in-a-pandas-datatable-column-and-move-to-their-own-column
I am trying to use pandas to manage data that I am pulling from an API. The data has one column that is a nested list and I am trying to extract it out into its own columns. The data looks like this so far: id mail displayName propertiesRegistered createdDateTime 00000000-0000-0000-0000-000000000000 [email protected] User, Joe ['address', 'mobilePhone', 'officePhone'] 2023-08-19T15:00:00.00Z The desired output would look like: id mail displayName address mobilePhone officePhone homePhone createdDateTime 00000000-0000-0000-0000-000000000000 [email protected] User, Joe TRUE TRUE TRUE FALSE 2023-08-19T15:00:00.00Z I've tried expanding, series, and pivot tables but cannot seem to figure it out. And I'm not sure I'm even phrasing my question correctly in my searching. A lot of people and examples have the data made into additional rows, which I was able to do, but getting it to a single row is ideal. Any help is greatly appreciated.
You can use str.get_dummies: valid_properties = ['address', 'mobilePhone', 'officePhone', 'homePhone'] df = df.join(df.pop('propertiesRegistered').agg('|'.join).str.get_dummies() .reindex(columns=valid_properties, fill_value=0) # or astype(bool) for real booleans .replace({1: 'TRUE', 0: 'FALSE'}) ) Or a crosstab: s = df.pop('propertiesRegistered').explode() df = df.join(pd.crosstab(s.index, s) .reindex(columns=valid_properties, fill_value=0) .gt(0) ) Output: id mail displayName createdDateTime address mobilePhone officePhone homePhone 0 00000000-0000-0000-0000-000000000000 [email protected] User, Joe 2023-08-19T15:00:00.00Z TRUE TRUE TRUE FALSE
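For convenience, here is a minimal reconstruction of the example frame (the values are copied from the question, so treat it as an assumption about the real data) that the snippets above can be tried against:

import pandas as pd

df = pd.DataFrame({
    "id": ["00000000-0000-0000-0000-000000000000"],
    "mail": ["[email protected]"],
    "displayName": ["User, Joe"],
    "propertiesRegistered": [["address", "mobilePhone", "officePhone"]],
    "createdDateTime": ["2023-08-19T15:00:00.00Z"],
})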
3
1
76,971,984
2023-8-24
https://stackoverflow.com/questions/76971984/how-do-you-convert-data-frame-column-values-to-integer
I need to convert a data frame column to int. df['Slot'].unique() displays this: array(['1', '2', '3', '4', 1, 3, 5], dtype=object) Some values have quotes ('') around them, some don't. I tried to convert the data type to int as below: df['Slot']=df['Slot'].astype('Int64') I get this error: TypeError: cannot safely cast non-equivalent object to int64 Any ideas how to convert to int?
You should first convert to numeric, then cast to Int64: df['Slot'] = pd.to_numeric(df['Slot']).astype('Int64') After casting the type: df.dtypes Slot Int64 dtype: object As noted in comments, if you use numpy's int64 type this works from scratch: df['Slot'].astype('int64')
3
3
76,970,781
2023-8-24
https://stackoverflow.com/questions/76970781/capturing-negative-lookahead
I need for https://github.com/mchelem/terminator-editor-plugin to capture different types of paths with line numbers. So far I use this pattern: (?![ab]\/)(([^ \t\n\r\f\v:\"])+?\.(html|py|css|js|txt|xml|json|vue))(\". line |:|\n| )(([0-9]+)*) I'm trying to make it work for the git patch format, which adds 'a/' and 'b/' before paths. How do I make this work? I can't make the lookahead gulp the first slash. Here's the test text: diff --git a/src/give/forms.py b/src/give/forms.py M give/locale/fr/LC_MESSAGES/django.po M agive/models/translation.py M give/views.py Some problem at src/give/widgets.py:103 Traceback (most recent call last): File "/usr/lib/python3.10/unittest/case.py", line 59, in testPartExecutor yield File "/usr/lib/python3.10/unittest/case.py", line 587, in run self._callSetUp() File "/usr/lib/python3.10/unittest/case.py", line 546, in _callSetUp self.setUp() File "/home/projects/src/give/tests/test_models.py", line 14, in setUp https://regex101.com/r/tF50pn/1 (In this link I want the same capture text except for the first line, where it is currently capturing /src/give/forms.py and /src/give/forms.py but I want src/give/forms.py and src/give/forms.py)
You have redundant groups that you can remove, and you can make the starting [ab]/ part optional to leave it out of the first capture group. Here is the refactored and optimized regex: (?:[ab]/)?([^ \t\n\r\f\v:"]+\.(?:html|py|css|js|txt|xml|json|vue))(?:". line |[:\n ]) Updated RegEx Demo
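A quick sketch (not part of the original answer) of how the refactored pattern could be checked from Python against the test text; group 1 should come back without the leading a/ or b/:

import re

pattern = re.compile(
    r'(?:[ab]/)?([^ \t\n\r\f\v:"]+\.(?:html|py|css|js|txt|xml|json|vue))(?:". line |[:\n ])'
)

text = """diff --git a/src/give/forms.py b/src/give/forms.py
Some problem at src/give/widgets.py:103
  File "/usr/lib/python3.10/unittest/case.py", line 59, in testPartExecutor
"""

# Prints the captured paths: src/give/forms.py (twice), src/give/widgets.py,
# and /usr/lib/python3.10/unittest/case.py.
print(pattern.findall(text))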
2
3
76,970,380
2023-8-24
https://stackoverflow.com/questions/76970380/random-forest-predicting-neither-class-when-target-is-one-hot-encoded
I know that trees are sensitive to one-hot encoded (OHE) targets; however, I want to understand why it returns predictions like this: array([[0, 0, 0, 0], [0, 0, 0, 0], . . . [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]]) For most of the samples, it predicts neither class. I will encode my targets as ordinal (since it is applicable), but what if it were not? What should I do then? This is how it looks before OHE: array(['4 -8 weeks', '13 - 16 weeks', '17 - 20 weeks', ..., '9 - 12 weeks', '13 - 16 weeks'], dtype=object) Full code: from sklearn.preprocessing import LabelBinarizer mlb = LabelBinarizer() b = mlb.fit_transform(Class) list(mlb.classes_) # Split the data into training and test sets X_train, X_test, y_train, y_test = train_test_split(data, b, test_size=0.2, random_state=42) # Create a multi-label classifier classifier = RandomForestClassifier() # Train the classifier classifier.fit(X_train, y_train) # Make predictions on the test set y_pred = classifier.predict(X_test) accuracy = accuracy_score(y_test, y_pred)
When the targets are one-hot encoded, sklearn treats the problem as a multi-label one (each row could have any number of labels). As such, you get a predicted probability for each label, and those are independently thresholded at 0.5 in order to make the class predictions. When the targets are ordinally encoded, sklearn treats the problem as a multiclass one (each row has exactly one class). Despite the numerical ordering, sklearn doesn't care (well, except in tiebreaking) and treats the classes as unordered. The predicted probabilities sum to 1, and the predicted class is the one with largest probability. You don't need to encode labels at all. sklearn will encode them internally for computational efficiency; but leaving strings as the labels is fine, will be treated as multiclass, and allows for the class predictions to also be strings (no need to decode).
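A small sketch of that last suggestion (my addition, not from the original answer), reusing the data and the string array Class from the question as-is, so both names are assumptions here:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Fit directly on the string labels; sklearn treats this as a multiclass problem.
X_train, X_test, y_train, y_test = train_test_split(data, Class, test_size=0.2, random_state=42)
classifier = RandomForestClassifier()
classifier.fit(X_train, y_train)      # y_train holds strings like '13 - 16 weeks'
y_pred = classifier.predict(X_test)   # predictions come back as strings too
print(accuracy_score(y_test, y_pred))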
2
3
76,966,431
2023-8-24
https://stackoverflow.com/questions/76966431/chaining-functions-with-pipe-pipe-using-not-the-first-argument
In a pipe I want to use the result of previous steps as the second argument to a subsequent step. In R I can use the result of the previous chain using . like so: df %>% b(arg1b) %>% c(arg1c, .) How can I do this in Python, for example using pipe? df.pipe(b, arg1b).pipe(c, arg1c, **) is a syntax error.
I'm not an active user of pandas, but this documentation page on pandas.DataFrame.pipe seems to cover your case: df.pipe(b, arg1b) .pipe((c, 'second_arg_name'), arg1c) where 'second_arg_name' should be replaced with the actual name of the second argument of function c
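A tiny runnable sketch of that form (b and c here are made-up functions, not from the question):

import pandas as pd

def b(df, arg1b):
    # Ordinary step: receives the frame as the first positional argument.
    return df.assign(x=df["x"] + arg1b)

def c(arg1c, data):
    # Here the frame must arrive as the keyword argument 'data'.
    return data.assign(label=arg1c)

df = pd.DataFrame({"x": [1, 2, 3]})
out = df.pipe(b, 10).pipe((c, "data"), arg1c="hello")
print(out)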
2
3
76,959,316
2023-8-23
https://stackoverflow.com/questions/76959316/remove-border-contours-from-fingerprint-image
How do I remove the outer contour lines at the edge of this fingerprint image without affecting the ridge and valley contours? (The original post shows three images: before processing, after segmentation and ROI, and the result after applying CLAHE and enhancement: https://i.sstatic.net/TIMu6.jpg) import cv2 image = cv2.imread('fingerprint.jpg') original = image.copy() gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) blur = cv2.GaussianBlur(gray, (9,9), 0) thresh = cv2.threshold(gray,0,255,cv2.THRESH_OTSU + cv2.THRESH_BINARY)[1] kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2,2)) opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel) dilate_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9,9)) dilate = cv2.dilate(opening, dilate_kernel, iterations=5) cnts = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] cnts = sorted(cnts, key=cv2.contourArea, reverse=True) for c in cnts: x,y,w,h = cv2.boundingRect(c) cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 2) ROI = original[y:y+h, x:x+w] break cv2.imshow('ROI', ROI) but I am not getting the desired result.
Here's a possible solution. I'm working with the binary image. You don't show how you get this image; you mention segmentation and CLAHE but none of these operations are shown in your snippet. It might be easier to deal with the "borders" there, before actually getting the binary image of the finger ridges. Anyway, my solution assumes that the borders are the first and last blobs to be encountered while scanning the image left to right. It also assumes that the borders are contiguous. The idea is to locate them and then flood-fill them with any color, in this case black, to "erase" them. First, locate the most external contours. It can be done by reducing the image to a row. The reduced row will give you the exact horizontal location of the first and last white pixels if you reduce using the MAX mode – that should correspond to the external borders. As the borders seem to be located at the upper part of the image, you can just take a portion where you are sure the borders are located: import cv2 # Set image path imagePath = "D://opencvImages//TIMu6.jpg" # Load image: image = cv2.imread(imagePath) # Get binary image: gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) binaryImage = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU + cv2.THRESH_BINARY)[1] cv2.imshow("Binary", binaryImage) # BGR of binary image: bgrImage = cv2.cvtColor(binaryImage, cv2.COLOR_GRAY2BGR) bgrCopy = bgrImage.copy() # Get image dimensions: imageHeight, imageWidth = binaryImage.shape[0:2] # Vertically divide in 4 parts: heightDivision = 4 heightPortion = imageHeight // heightDivision # Store divisions here: imageDivisions = [] # Check out divisions: for i in range(heightDivision): # Compute y coordinate: y = i * heightPortion # Set crop dimensions: x = 0 w = imageWidth h = heightPortion # Crop portion: portionCrop = binaryImage[y:y + h, x:x + w] # Store portion: imageDivisions.append(portionCrop) # Draw rectangle: cv2.rectangle(bgrImage, (0, y), (w, y + h), (0, 255, 0), 1) cv2.imshow("Portions", bgrImage) cv2.waitKey(0) This first bit just vertically divides the image into four portions. Just for visual purposes, let's see the four divisions: I stored each portion in the imageDivisions list, but you just need the first one. Next, reduce it to a row using the MAX mode: # Reduce first portion to a row: reducedImage = cv2.reduce(imageDivisions[0], 0, cv2.REDUCE_MAX) This will vertically "crush" the matrix into a row (i.e., a vertical projection), where each pixel value is the maximum value (in this case, 255 – white) of each column. The result is a tiny row that is a bit hard to see: Let's search for the first and last white pixels. You can just look for the black-to-white and white-to-black transitions in this array: # Get first and last white pixel positions: pastPixel = 0 pixelCoordinates = [] for i in range(imageWidth): # Get current pixel: currentPixel = reducedImage[0][i] # Search for first transition black to white: if currentPixel == 255 and pastPixel == 0: pixelCoordinates.append(i) else: # Search for last transition white to black: if currentPixel == 0 and pastPixel == 255: pixelCoordinates.append(i - 1) # Set last pixel: pastPixel = currentPixel The horizontal coordinates of the white pixels are stored in the pixelCoordinates list.
Lastly, let’s use this as positions for locating the most external borders and flood-fill them: # Flood fill original image: color = (0, 0, 255) # Red for i in range(len(pixelCoordinates)): # Get x coordinate: x = pixelCoordinates[i] # Set y coordinate: y = heightPortion # Set seed point: seedPoint = (x, y) # Flood-fill: cv2.floodFill(bgrCopy, None, seedPoint, color) cv2.imshow("Flood-filled", bgrCopy) cv2.waitKey(0) Here I’m actually flood-filling a deep copy of the original BGR Image, and using a red color: If you want to fill the borders with black, just change the color to (0,0,0). If you want to flood-fill the original binary image, just change the first argument of the floodFill function. This is the result:
3
3
76,964,787
2023-8-23
https://stackoverflow.com/questions/76964787/how-can-i-get-the-start-positions-of-my-regex-matches-from-a-string-without-also
This is kind of a complicated question to explain and easier to just show as an example. So let's say I have this string here: exampleString = "abcauehafj['hello']jfa['hello']jasfjgafadsf" And so we have the regex pattern of: regex = r"\['hello'\]" Now, what I want to do is get the start positions of regex matches within the exampleString. This would be [10, 22], which I currently calculate using the following code: import re matches = re.finditer(regex, exampleString) start_pos = [] for match in matches: start_pos.append(match.start()) Now, the issue is, I do not want the lengths of the earlier ['hello'] matches to be counted in the start positions. So the start_pos that I actually want would be [10, 13]. What would be the best way to do this without affecting the strings?
If you prefer a list comprehension, it's easy to do with enumerate, unless I misunderstood what you desired: import re exampleString = "abcauehafj['hello']jfa['hello']jasfjgafadsf" regex = r"\['hello'\]" start_pos = [match.start() - i * len(match.group()) for (i, match) in enumerate(re.finditer(regex, exampleString))] Edit 1: of course, right after posting, I saw Kelly Bundy's comment stating exactly that... Edit 2: just for the sake of it, I tried to wrap Kelly Bundy's variable length solution into a list comprehension; it's not pretty but at least it works ;) exampleString = "abcauehafj['hi']jfa['there']jasfjgafads['hello']f" regex = r"\['[a-z]*'\]" start_pos = [match.start() - length_sum + len(match.group()) for i, match in enumerate(re.finditer(regex, exampleString)) if (length_sum := len(match.group()) + (length_sum if i else 0))]
2
2
76,965,208
2023-8-23
https://stackoverflow.com/questions/76965208/in-python-how-can-i-type-hint-an-input-that-is-not-str-or-bytes-and-is-a-sequen
In Python, how can I type hint an input that is not str or bytes and is a sequence? I have a function that I want to accept a sequence but not accept a string or bytes input. But this code lets str and bytes through: import typing def validate(input_data: typing.Sequence) -> None: # implementation that validates that input data meets expected constraints #one can ignore the implementation pass
One can do this by creating a Sequence protocol that declares the sequence interface; because its __contains__ method takes an object, the str and bytes classes will not implement the new protocol. Answer from: https://github.com/python/typing/issues/256#issuecomment-1687374072 str __contains__ is incompatible with this definition bytes __contains__ is incompatible with this definition Here's the code: import typing _T_co = typing.TypeVar("_T_co", covariant=True) class Sequence(typing.Protocol[_T_co]): """ if a Protocol would define the interface of Sequence, this protocol would NOT allow str/bytes as their __contains__ is incompatible with the definition in Sequence. methods from: https://docs.python.org/3/library/collections.abc.html#collections.abc.Collection """ def __contains__(self, value: object, /) -> bool: raise NotImplementedError def __getitem__(self, index, /): raise NotImplementedError def __len__(self) -> int: raise NotImplementedError def __iter__(self) -> typing.Iterator[_T_co]: raise NotImplementedError def __reversed__(self, /) -> typing.Iterator[_T_co]: raise NotImplementedError def validate(input_data: Sequence) -> None: # implementation that validates that input data meets expected constraints #one can ignore the implementation pass def validate_typed_sequence(input_data: Sequence[int]) -> None: # implementation that validates that input data meets expected constraints #one can ignore the implementation pass
2
3
76,962,240
2023-8-23
https://stackoverflow.com/questions/76962240/numpy-turn-hierarchy-of-matrices-into-concatenation
I have the following 4 matrices: >>> a array([[0., 0.], [0., 0.]]) >>> b array([[1., 1.], [1., 1.]]) >>> c array([[2., 2.], [2., 2.]]) >>> d array([[3., 3.], [3., 3.]]) I'm creating another matrix that will contain them: >>> e = np.array([[a,b], [c,d]]) >>> e.shape (2, 2, 2, 2) I want to "cancel the hierarchy" and reshape e into a 4x4 matrix that will look like this: 0 0 1 1 0 0 1 1 2 2 3 3 2 2 3 3 However, when I run e.reshape((4,4)), I get the following matrix: >>> e.reshape((4,4)) array([[0., 0., 0., 0.], [1., 1., 1., 1.], [2., 2., 2., 2.], [3., 3., 3., 3.]]) Is there a way to reshape my (2,2,2,2) matrix into a (4,4) matrix by cancelling the hierarchy, rather than by the indexing I'm currently getting?
Try np.block([[a,b],[c,d]]) This concatenates the inner lists horizontally, and does a vertical stack. Alternatively you could swap 2 axes of e and then reshape. In [41]: np.block([[a,b],[c,d]]) Out[41]: array([[0., 0., 1., 1.], [0., 0., 1., 1.], [2., 2., 3., 3.], [2., 2., 3., 3.]]) In [45]: e.transpose(0,2,1,3).reshape(4,4) Out[45]: array([[0., 0., 1., 1.], [0., 0., 1., 1.], [2., 2., 3., 3.], [2., 2., 3., 3.]]) block is doing the equivalent of: In [47]: np.vstack([np.hstack([a,b]),np.hstack([c,d])]) Out[47]: array([[0., 0., 1., 1.], [0., 0., 1., 1.], [2., 2., 3., 3.], [2., 2., 3., 3.]])
3
2
76,964,144
2023-8-23
https://stackoverflow.com/questions/76964144/python-csv-writer-adds-unwanted-characters
I am letting my script read a csv file and I want to save it first into an array, then slice the second element (which is the second line of my csv file) and write the array back to the csv file. csv file looks like this before: first name,username,age,status test1,test1_username,28,2023-08-18 13:41:10+00:00 test2,test2_username,28,2023-08-18 13:41:10+00:00 test3,test3_username,28,2023-08-18 13:41:10+00:00 test4,test4_username,28,2023-08-18 13:41:10+00:00 test5,test5_username,28,2023-08-18 13:41:10+00:00 test6,test6_username,28,2023-08-18 13:41:10+00:00 test7,test7_username,28,2023-08-18 13:41:10+00:00 test8,test8_username,28,2023-08-18 13:41:10+00:00 After running my script, the csv is this: first name,username,age,status test2', 'test2_username', '28', '2023-08-18 13:41:10+00:00']] test3', 'test3_username', '28', '2023-08-18 13:41:10+00:00']] test4', 'test4_username', '28', '2023-08-18 13:41:10+00:00']] test5', 'test5_username', '28', '2023-08-18 13:41:10+00:00']] test6', 'test6_username', '28', '2023-08-18 13:41:10+00:00']] test7', 'test7_username', '28', '2023-08-18 13:41:10+00:00']] test8', 'test8_username', '28', '2023-08-18 13:41:10+00:00']] But this is the output which I want (The same as before but without the second line): first name,username,age,status test2,test2_username,28,2023-08-18 13:41:10+00:00 test3,test3_username,28,2023-08-18 13:41:10+00:00 test4,test4_username,28,2023-08-18 13:41:10+00:00 test5,test5_username,28,2023-08-18 13:41:10+00:00 test6,test6_username,28,2023-08-18 13:41:10+00:00 test7,test7_username,28,2023-08-18 13:41:10+00:00 test8,test8_username,28,2023-08-18 13:41:10+00:00 If I run it multiple times then more characters like ' and " are appearing out of nowhere? Why does this happen? How exactly do I solve this problem? This is my code: data = [] rows = [] with open('list.csv', 'r', encoding='UTF-8') as f: data = csv.reader(f,delimiter=",",lineterminator="\n") for row in data: rows.append([row,]) rows = rows[2:] with open('list.csv', 'w', encoding='UTF-8') as g: writer = csv.writer(g, delimiter="\n") writer2 = csv.writer(g, lineterminator="\n") writer2.writerow(['first name', 'username', 'age', 'status']) Thanks in advance!
You're not writing anything to the output file (or it is missing from the example). Also, use writer.writerows to write the data: import csv rows = [] with open("in.csv", "r", encoding="UTF-8") as f: data = csv.reader(f, delimiter=",", lineterminator="\n") for row in data: rows.append(row) # <-- append only `row` to the list, not `[row]` rows = rows[2:] with open("out.csv", "w", encoding="UTF-8") as g: writer = csv.writer(g, lineterminator="\n") writer.writerow(["first name", "username", "age", "status"]) writer.writerows(rows) # <-- use `writer.writerows` to write the data in one step out.csv will contain: first name,username,age,status test2,test2_username,28,2023-08-18 13:41:10+00:00 test3,test3_username,28,2023-08-18 13:41:10+00:00 test4,test4_username,28,2023-08-18 13:41:10+00:00 test5,test5_username,28,2023-08-18 13:41:10+00:00 test6,test6_username,28,2023-08-18 13:41:10+00:00 test7,test7_username,28,2023-08-18 13:41:10+00:00 test8,test8_username,28,2023-08-18 13:41:10+00:00
2
3
76,963,206
2023-8-23
https://stackoverflow.com/questions/76963206/transform-one-dimensional-numpy-array-into-mask-suitable-for-array-index-update
Given a 2-dimensional array a, I want to update select indices specified by b to a fixed value of 1. test data: import numpy as np a = np.array( [[0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 1, 0], [1, 0, 0, 0], [0, 1, 0, 1], [0, 0, 0, 0]] ) b = np.array([1, 2, 2, 0, 3, 3]) One solution is to transform b into a masked array like this: array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 1, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 1]]) which would allow me to do a[b.astype(bool)] = 1 and solve the problem. How can I transform b into the "mask" version below?
No need to build the mask, use indexing directly: a[np.arange(len(b)), b] = 1 Output: array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 1, 0], [1, 0, 0, 0], [0, 1, 0, 1], [0, 0, 0, 1]]) That said, the mask could be built using: mask = b[:,None] == np.arange(a.shape[1]) Output: array([[False, True, False, False], [False, False, True, False], [False, False, True, False], [ True, False, False, False], [False, False, False, True], [False, False, False, True]])
3
1
76,957,766
2023-8-23
https://stackoverflow.com/questions/76957766/save-a-txt-file-with-double-tab-as-separator
How can I save a table in txt file with double separator? I can save a txt file with one tab as below: df.to_csv("file1.txt", sep='\t', index=False) Name 05:58 06:11 06:18 1234DCE -3.43 -3.427 -4.622 18DDD12 11.564 10.883 10.915 453FGH2A 0.351 0.176 0.0 8FF3EFA -0.72 0.888 -0.638 98ACD12 12.574 11.883 13.356 A2DA21 0.0 0.162 -0.162 ABC1234 -2.385 -2.348 -2.248 But need two tabs as sep. Like below: Name 05:58 06:11 06:18 1234DCE -3.43 -3.427 -4.622 18DDD12 11.564 10.883 10.915 453FGH2A 0.351 0.176 0.0 8FF3EFA -0.72 0.888 -0.638 98ACD12 12.574 11.883 13.356 A2DA21 0.0 0.162 -0.162 ABC1234 -2.385 -2.348 -2.248
If you don't pass a filename to to_csv, Pandas will return a string. So you can add a second tab: with open('file1.txt', 'w') as fp: content = df.to_csv(sep='\t', index=False) fp.write(content.replace('\t', '\t\t').strip()) However your data will not be well aligned (row 453FGH2A for example). Maybe you should use to_string instead: df.to_string('file1.txt', col_space=16, justify='left', formatters={'Name': '{:<16}'.format}, index=False) With to_csv: Name 05:58 06:11 06:18 1234DCE -3.43 -3.427 -4.622 18DDD12 11.564 10.883 10.915 453FGH2A 0.351 0.176 0.0 8FF3EFA -0.72 0.888 -0.638 98ACD12 12.574 11.883 13.356 A2DA21 0.0 0.162 -0.162 ABC1234 -2.385 -2.348 -2.248 With to_string: Name 05:58 06:11 06:18 1234DCE -3.430 -3.427 -4.622 18DDD12 11.564 10.883 10.915 453FGH2A 0.351 0.176 0.000 8FF3EFA -0.720 0.888 -0.638 98ACD12 12.574 11.883 13.356 A2DA21 0.000 0.162 -0.162 ABC1234 -2.385 -2.348 -2.248
3
1
76,938,875
2023-8-20
https://stackoverflow.com/questions/76938875/remove-non-ascii-characters-from-a-polars-dataframe
I have a Polars Dataframe with a mix of Series, which I want to write to a CSV / upload to a database. The problem is that if any of the UTF8 series have non-ASCII characters, it fails due to the DB type I'm using, so I would like to filter out the non-ASCII characters whilst leaving everything else. I created a function that uses a lambda function, which does work, but it is slow compared with standard Polars functions and I was hoping to replace this with a Polars alternative. def df_column_clean(df:pl.DataFrame, drop_non_ascii:bool=False): """ Takes a Polars Dataframe and performs data cleaning on all columns Currently it only converts string series to ascii but can be expanded in the future """ if drop_non_ascii: df_changes = [] df_columns = df.schema for col_name, col_type in df_columns.items(): if col_type != pl.Utf8: continue # Remove non-ascii characters df_changes.append(pl.col(col_name).apply(lambda x: None if x is None else x.encode('ascii', 'ignore').decode('ascii'), skip_nulls=False)) if len(df_changes) > 0: return df.with_columns(df_changes) return df Is the method I came up with the best option, or does Polars have an inbuilt function that can be used to filter out non-ASCII characters? Thanks in advance
.str.replace_all() with a regex to match non-ascii chars: pl.col(pl.String).str.replace_all(r"[^\p{Ascii}]", "")
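A minimal usage sketch with made-up data (the frame and column names below are assumptions, not from the question):

import polars as pl

df = pl.DataFrame({"name": ["café au lait", "plain ascii"], "n": [1, 2]})

cleaned = df.with_columns(
    pl.col(pl.String).str.replace_all(r"[^\p{Ascii}]", "")
)
print(cleaned)  # 'café au lait' becomes 'caf au lait'
# cleaned.write_csv("out.csv")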
3
7
76,941,782
2023-8-21
https://stackoverflow.com/questions/76941782/pycord-unknown-interaction
import discord from discord.ext import commands, tasks import imageio import os import numpy as np import asyncio @stocks.command(name="graph", description="Generate and display a stock price history graph") async def graph(self, interaction: discord.Interaction, symbol): video_path = "stock_graph.mp4" imageio.mimsave(video_path, ani_frames, fps=8) message = await interaction.response.defer() asyncio.sleep(10) await interaction.followup.send(file=discord.File(video_path)) plt.close() Error: discord.errors.NotFound: 404 Not Found (error code: 10062): Unknown interaction
First of all, understand that a Discord interaction has a short lifetime, which is exactly 3 seconds. That is, you must respond to the interaction that called your command within 3 seconds or else it will become unknown. If your command performs a time-consuming task, where you can't respond to the interaction in less than 3 seconds, you can defer the interaction response (which will cause your bot to go into the "thinking" status) and then send a followup message using await interaction.followup.send(). async def graph(self, interaction: discord.Interaction, symbol): await interaction.response.defer() # put this at the start video_path = "stock_graph.mp4" imageio.mimsave(video_path, ani_frames, fps=8) # maybe this task is taking long? await asyncio.sleep(10) # why are you sleeping 10 seconds? await interaction.followup.send(file=discord.File(video_path)) plt.close()
2
2
76,937,581
2023-8-20
https://stackoverflow.com/questions/76937581/defining-custom-types-in-pydantic-v2
The code below used to work in Pydantic V1: from pydantic import BaseModel class CustomInt(int): """Custom int.""" pass class CustomModel(BaseModel): """Custom model.""" custom_int: CustomInt Using it with Pydantic V2 will trigger an error. The error can be resolved by including arbitrary_types_allowed=True in model_config, but is there a better solution? Kindly note that the docs suggest using Annotated, but that doesn't allow defining custom docstring, which is desirable in my use case.
You can keep using a class which inherits from a type by defining core schema on the class: from typing_extensions import Any from pydantic import GetCoreSchemaHandler, TypeAdapter from pydantic_core import CoreSchema, core_schema class CustomInt(int): """Custom int.""" ... @classmethod def __get_pydantic_core_schema__( cls, source_type: Any, handler: GetCoreSchemaHandler ) -> CoreSchema: return core_schema.no_info_after_validator_function(cls, handler(int)) Note: These are advanced methods and may have undesirable side effects. Further documentation here: https://docs.pydantic.dev/latest/api/pydantic_core/
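A short usage sketch (my addition) tying this back to the model from the question, reusing the CustomInt class defined above; since the validator converts values through cls, the stored value should come back as a CustomInt:

from pydantic import BaseModel

class CustomModel(BaseModel):
    """Custom model."""
    custom_int: CustomInt

m = CustomModel(custom_int=3)
print(type(m.custom_int), m.custom_int)  # expected: CustomInt 3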
6
3
76,956,869
2023-8-22
https://stackoverflow.com/questions/76956869/add-dataframe-rows-based-on-external-condition
I have this dataframe: Env location lob grid row server model make slot Prod USA Market AB3 bc2 Server123 Hitachi dcs 1 Prod USA Market AB3 bc2 Server123 Hitachi dcs 2 Prod USA Market AB3 bc2 Server123 Hitachi dcs 3 Prod USA Market AB3 bc2 Server123 Hitachi dcs 4 Dev EMEA Ins AB6 bc4 Serverabc IBM abc 3 Dev EMEA Ins AB6 bc4 Serverabc IBM abc 3 Dev EMEA Ins AB6 bc4 Serverabc IBM abc 3 Dev EMEA Ins AB6 bc4 Serverabc IBM abc 4 Dev EMEA Ins AB6 bc4 Serverabc IBM abc 5 Dev EMEA Ins AB6 bc4 Serverabc IBM abc 5 Dev EMEA Ins AB6 bc4 Serverabc IBM abc 6 UAT PAC Retail AB6 bc4 Serverzzz Cisco ust 3 UAT PAC Retail BB6 bc4 Serverzzz Cisco ust 4 UAT PAC Retail BB6 bc4 Serverzzz Cisco ust 5 UAT PAC Retail BB6 bc4 Serverzzz Cisco ust 6 In this example: If model is IBM, there must be 8 slots; because the slot starts from slot=3, so it must go from 3 to 10. In this case, only slots 3 to 6 are present. Therefore, I need to add 4 more rows (slot 7, 8, 9, 10). If model is Cisco, row count for cisco needs to be 6. Only slots 3 to 6 are present. Therefore, I need to add 2 more rows New rows: must repeat the last row for the model, while incrementing the slot number Their "grid" cell must indicate "available". This needs to be done programmatically where given the model, I need to know the total number of slots and if the number of slots is short, I need to create new rows. The final dataframe needs to be like this: Env location lob grid row server model make slot Prod USA Market AB3 bc2 Server123 Hitachi dcs 1 Prod USA Market AB3 bc2 Server123 Hitachi dcs 2 Prod USA Market AB3 bc2 Server123 Hitachi dcs 3 Prod USA Market AB3 bc2 Server123 Hitachi. dcs 4 Dev EMEA Ins. AB6 bc4 Serverabc IBM abc 3 Dev EMEA Ins. AB6 bc4 Serverabc IBM abc 4 Dev EMEA Ins. AB6 bc4 Serverabc IBM abc 5 Dev EMEA Ins. AB6 bc4 Serverabc IBM abc 6 Dev EMEA Ins. available bc4 Serverabc IBM abc 7 Dev EMEA Ins. available bc4 Serverabc IBM abc 8 Dev EMEA Ins. available bc4 Serverabc IBM abc 9 Dev EMEA Ins. available bc4 Serverabc IBM abc 10 UAT PAC Retail AB6 bc4 Serverzzz Cisco ust 3 UAT PAC Retail BB6 bc4 Serverzzz Cisco ust 4 UAT PAC Retail BB6 bc4 Serverzzz Cisco ust 5 UAT PAC Retail BB6 bc4 Serverzzz Cisco ust 6 UAT PAC Retail available bc4 Serverzzz Cisco ust 7 UAT PAC Retail available bc4 Serverzzz Cisco ust 8 I tried something like this: def slots(row): if 'IBM' in row['model']: number_row=8 if 'Cisco' in row['model']: number_row=6 How do I do this?
I would use groupby.apply to add new rows to each model. The general workflow is as follows. Remove duplicate slots from each model. Group the dataframe by the 'model' column. For each model, do anything at all only if a. it is either IBM or Cisco (identified by whether it is a key in N_slots dictionary) b. the number of slots does not reach the required number of slots (the values in N_slots dictionary) If (3) is satisfied, then use reindex() to add new empty rows. Assign slot values to the 'slots' column (e.g. For IBM, it's 3-10) Fill the newly created empty rows in the grid column by 'available'. Fill all other newly created rows by values one row above (ffill()). Reset index to remove duplicate indices. def add_slots(s): # get model name model = s['model'].iat[0] # get how many slots there should be for this model slots = N_slots.get(model, 0) # where to start reindexing start = s.index[0] low = s['slot'].dropna().astype(int).min() # for pandas>=1.1, remove the previous line and uncomment the next line # low = s['slot'].min() if len(s) < slots: # add new indices s = s.reindex(range(start, start + slots)) # assign slots s['slot'] = range(low, low + slots) # assign grids at newly created slots to be 'available' s['grid'] = s['grid'].fillna('available') return s N_slots = {'IBM': 8, 'Cisco': 6} new_df = ( df.drop_duplicates(['server', 'model', 'slot'], ignore_index=True) # remove duplicate slots .groupby('model', sort=False, group_keys=False).apply(add_slots) # add new slots .ffill() # fill rest of the columns .sort_values(by=['server', 'slot']) .reset_index(drop=True) # reset index ) Env location lob grid row server model make slot 0 Prod USA Market AB3 bc2 Server123 Hitachi dcs 1 1 Prod USA Market AB3 bc2 Server123 Hitachi dcs 2 2 Prod USA Market AB3 bc2 Server123 Hitachi dcs 3 3 Prod USA Market AB3 bc2 Server123 Hitachi dcs 4 4 Dev EMEA Ins AB6 bc4 Serverabc IBM abc 3 5 Dev EMEA Ins AB6 bc4 Serverabc IBM abc 4 6 Dev EMEA Ins AB6 bc4 Serverabc IBM abc 5 7 Dev EMEA Ins AB6 bc4 Serverabc IBM abc 6 8 Dev EMEA Ins available bc4 Serverabc IBM abc 7 9 Dev EMEA Ins available bc4 Serverabc IBM abc 8 10 Dev EMEA Ins available bc4 Serverabc IBM abc 9 11 Dev EMEA Ins available bc4 Serverabc IBM abc 10 12 UAT PAC Retail AB6 bc4 Serverzzz Cisco ust 3 13 UAT PAC Retail BB6 bc4 Serverzzz Cisco ust 4 14 UAT PAC Retail BB6 bc4 Serverzzz Cisco ust 5 15 UAT PAC Retail BB6 bc4 Serverzzz Cisco ust 6 16 UAT PAC Retail available bc4 Serverzzz Cisco ust 7 17 UAT PAC Retail available bc4 Serverzzz Cisco ust 8
4
3
76,917,508
2023-8-16
https://stackoverflow.com/questions/76917508/calculating-partial-correlation-from-a-shrunken-covariance-matrix-help-port
There's a paper that I found interesting and would like to use some of the methods in Python. Erb et al. 2020 implements partial correlation on compositional data and Jin et al. 2022 implements it in an R package called Propr. I found the function bShrink that I'm simplifying below: library(corpcor) # Load iris dataset (not compositional but will work for this case) X = read.table("https://pastebin.com/raw/e3BSEZiK", sep = "\t", row.names = 1, header = TRUE, check.names = FALSE) bShrink <- function(M){ # transform counts to log proportions P <- M / rowSums(M) B <- log(P) # covariance shrinkage D <- ncol(M) Cb <- cov.shrink(B,verbose=FALSE) G <- diag(rep(1,D))-matrix(1/D,D,D) Cov <- G%*%Cb%*%G # partial correlation PC <- cor2pcor(Cov) return(PC) } > bShrink(X) [,1] [,2] [,3] [,4] [1,] 1.0000000 0.96409509 0.6647093 -0.23827651 [2,] 0.9640951 1.00000000 -0.4585507 0.02735205 [3,] 0.6647093 -0.45855072 1.0000000 0.85903005 [4,] -0.2382765 0.02735205 0.8590301 1.00000000 Now I'm trying to port this in Python. Getting some small differences between Cb which is expected but major differences in PC which is the partial correlation (cor2pcor function). I tried using the answers from here but couldn't get it to work: Partial Correlation in Python Here's my Python code: import numpy as np import pandas as pd from sklearn.covariance import LedoitWolf def bShrink(M:pd.DataFrame): components = M.columns M = M.values P = M/M.sum(axis=1).reshape(-1,1) B = np.log(P) D = M.shape[1] lw_model = LedoitWolf() lw_model.fit(B) Cb = lw_model.covariance_ G = np.eye(D) - np.ones((D, D)) / D Cov = G @ Cb @ G precision = np.linalg.inv(Cov) diag = np.diag(precision) Z = np.outer(diag, diag) partial = -precision / Z return pd.DataFrame(partial, index=components, columns=components) X = pd.read_csv("https://pastebin.com/raw/e3BSEZiK", sep="\t", index_col=0) bShrink(X) # sepal_length sepal_width petal_length petal_width # sepal_length -5.551115e-17 -5.551115e-17 -5.551115e-17 -5.551115e-17 # sepal_width -5.551115e-17 -5.551115e-17 -5.551115e-17 -5.551115e-17 # petal_length -5.551115e-17 -5.551115e-17 -5.551115e-17 -5.551115e-17 # petal_width -5.551115e-17 -5.551115e-17 -5.551115e-17 -5.551115e-17 I'm trying to avoid using Pingouin or any other packages than numpy, pandas, and sklearn. How can I create a partial correlation matrix from a shrunken covariance matrix?
The problems: There are two issues here: the estimation of the covariance matrices is different, and the translation of the cor2pcor step is wrong. Let's start with cor2pcor. I honestly did not understand very well how you are computing it, so I decided to check the implementation in R. If we go to the corpcor implementation we get the function: cor2pcor = function(m, tol) { # invert, then negate off-diagonal entries m = -pseudoinverse(m, tol=tol) diag(m) = -diag(m) # standardize and return return(cov2cor(m)) } We can also check the implementation of the R stats package to see the implementation of cov2cor: cov2cor <- function(V) { ## Purpose: Covariance matrix |--> Correlation matrix -- efficiently ## ---------------------------------------------------------------------- ## Arguments: V: a covariance matrix (i.e. symmetric and positive definite) ## ---------------------------------------------------------------------- ## Author: Martin Maechler, Date: 12 Jun 2003, 11L:50 p <- (d <- dim(V))[1L] if(!is.numeric(V) || length(d) != 2L || p != d[2L]) stop("'V' is not a square numeric matrix") Is <- sqrt(1/diag(V)) # diag( 1/sigma_i ) if(any(!is.finite(Is))) warning("diag(.) had 0 or NA entries; non-finite result is doubtful") r <- V # keep dimnames r[] <- Is * V * rep(Is, each = p) ## == D %*% V %*% D where D = diag(Is) r[cbind(1L:p,1L:p)] <- 1 # exact in diagonal r } With those two functions in mind, we can begin to translate them into Python; make sure to use pinv instead of inv because your matrix can be singular: def cov2cor(V): """ Convert Covariance matrix to Correlation matrix efficiently. Arguments: V: a covariance matrix (i.e. symmetric and positive definite) """ p, d = V.shape Is = np.sqrt(1 / np.diag(V)) # diag( 1/sigma_i ) r = V.copy() # keep dimnames r *= Is.reshape(-1, 1) * Is.reshape(1, -1) np.fill_diagonal(r, 1) # exact in diagonal return r def cor2pcor(m, tol=1e-15): """ Convert a correlation matrix to a partial correlation matrix efficiently. Arguments: m: a correlation matrix tol: tolerance for calculating the pseudo-inverse """ # Invert, then negate off-diagonal entries m = -np.linalg.pinv(m, rcond=tol) np.fill_diagonal(m, -np.diag(m)) # Standardize and return return cov2cor(m) Finally, we can use those to implement b_shrink: def b_shrink(M: pd.DataFrame): components = M.columns M = M.values # transform counts to log proportions P = M / M.sum(axis=1).reshape(-1, 1) B = np.log(P) # Covariance shrinkage D = M.shape[1] Cb, shrinkage = ledoit_wolf(B) G = np.eye(D) - np.ones((D, D)) / D cov = G @ Cb @ G # Partial correlation partial = cor2pcor(cov) return pd.DataFrame(partial, index=components, columns=components) Doing this I got the following matrix; I replaced LedoitWolf with ledoit_wolf, which is pretty much the same thing for this function: sepal_length sepal_width petal_length petal_width sepal_length 1 0.951383 0.606789 -0.188676 sepal_width 0.951383 1 -0.352785 -0.0561523 petal_length 0.606789 -0.352785 1 0.867358 petal_width -0.188676 -0.0561523 0.867358 1 The covariance matrix: According to the documentation of cov.shrink, the implementation in R is based on (Opgen-Rhein and Strimmer, 2007) and (Schäfer and Strimmer, 2005), which build on top of Ledoit-Wolf's work, while Sklearn's shrinkage implements only Ledoit-Wolf shrinkage.
This gives slightly different matrices Cb in both languages: In R: sepal_length sepal_width petal_length petal_width sepal_length 0.01384628 0.03049407 -0.03983544 -0.08249265 sepal_width 0.03049407 0.09092548 -0.10892279 -0.21334250 petal_length -0.03983544 -0.10892279 0.13876001 0.26559868 petal_width -0.08249265 -0.21334250 0.26559868 0.58653240 In Python: [[ 0.01425639 0.0286866 -0.03757796 -0.07812832] [ 0.0286866 0.09108648 -0.10768074 -0.21175137] [-0.03757796 -0.10768074 0.13876641 0.26434719] [-0.07812832 -0.21175137 0.26434719 0.58509169]] We would get the same results if we used the Cb matrix from R in the Python code. As far as I know, there are no implementations of Rhein, Schäfer, and Strimmer's work in Python.
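As a concrete cross-check of that statement, one could plug the R matrix shown above into the Python pipeline (a sketch, reusing the cor2pcor function defined earlier in this answer):

import numpy as np

Cb_r = np.array([
    [ 0.01384628,  0.03049407, -0.03983544, -0.08249265],
    [ 0.03049407,  0.09092548, -0.10892279, -0.21334250],
    [-0.03983544, -0.10892279,  0.13876001,  0.26559868],
    [-0.08249265, -0.21334250,  0.26559868,  0.58653240],
])
D = Cb_r.shape[0]
G = np.eye(D) - np.ones((D, D)) / D
cov = G @ Cb_r @ G
print(cor2pcor(cov))  # should be close to the bShrink(X) output shown in the question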
3
2
76,923,540
2023-8-17
https://stackoverflow.com/questions/76923540/why-does-aws-lambda-experience-a-consistent-10-second-delay-for-an-unresolvable
I'm experiencing a peculiar behavior when trying to make an HTTP request to unresolved domain names from an AWS Lambda function using the requests library in Python. When I attempt to make a request using: response = requests.get('https://benandjerry.com', timeout=(1,1)) In AWS Lambda, it consistently takes around 10 seconds before it throws an error. However, when I run the same code on my local environment, it's instant. I've verified this using logs and isolated tests. I've considered potential issues like Lambda's cold starts, Lambda runtime differences, and even VPC configurations, but none seem to be the root cause. I also tried using curl to access the domain, and it instantly returned with Could not resolve host: benandjerry.com. Last point, this is happening on specific unresolved domain names, not all of them. Here's a sample: http://seen.ma/ benandjerry.com clenyabeauty.com gong.com FYI, you can easily replicate the issue by creating a python3.9 Lambda on AWS & adding the following code: import json from botocore.vendored import requests import urllib.request import os def lambda_handler(event, context): # TODO implement url = 'http://benandjerry.com' try: response = requests.get(url, proxies=None,verify=False) except Exception as e: print(e) return { 'statusCode': 200, 'body': json.dumps('Hello from Lambda!') } Questions: What could be causing this consistent 10-second delay in AWS Lambda for an unresolvable domain using requests? How can I get AWS Lambda to instantly recognize that the domain is unresolvable, similar to the behavior on my local machine?
The issue you're seeing is due to the AWS DNS taking up to 10 seconds trying to resolve the domain. If you want more control over the DNS resolution, you can implement a custom requests Transport Adapter to do the DNS resolution yourself, which allows you to better customize the timeout. pip install dnspython import urllib import requests import dns.resolver class CustomDnsResolverHttpAdapter(requests.adapters.HTTPAdapter): def resolve(self, hostname): resolver = dns.resolver.Resolver(configure=False) resolver.timeout = 5 resolver.lifetime = 5 resolver.nameservers = [ "1.1.1.1" # cloudflare dns , "8.8.8.8" # google dns # , "169.254.78.1" # aws dns ] answer = resolver.resolve(hostname, "A", lifetime=5) if len(answer) == 0: return None return str(answer[0]) def send(self, request, **kwargs): connection_pool_kwargs = self.poolmanager.connection_pool_kw result = urllib.parse.urlparse(request.url) resolved_ip = self.resolve(result.hostname) if result.scheme == "https" and resolved_ip: request.url = request.url.replace( "https://" + result.hostname, "https://" + resolved_ip, ) connection_pool_kwargs["server_hostname"] = result.hostname # SNI connection_pool_kwargs["assert_hostname"] = result.hostname # overwrite the host header request.headers["Host"] = result.hostname else: # clear these headers if they were set in a previous TLS request connection_pool_kwargs.pop("server_hostname", None) connection_pool_kwargs.pop("assert_hostname", None) # overwrite the host header request.headers["Host"] = result.hostname return super(CustomDnsResolverHttpAdapter, self).send(request, **kwargs) http_agent = requests.Session() http_agent.mount("http://", CustomDnsResolverHttpAdapter()) http_agent.mount("https://", CustomDnsResolverHttpAdapter()) response = http_agent.get("https://benandjerry.com")
5
3
76,956,654
2023-8-22
https://stackoverflow.com/questions/76956654/can-this-recursive-function-be-turned-into-an-iterative-function-with-similar-pe
I am writing a function in python using numba to label objects in a 2D or 3D array, meaning all orthogonally connected cells with the same value in the input array will be given a unique label from 1 to N in the output array, where N is the number of orthogonally connected groups. It is very similar to functions such as scipy.ndimage.label and similar functions in libraries such as scikit-image, but those functions label all orthogonally connected non-zero groups of cells, so it would merge connected groups with different values, which I don't want. For example, given this input: [0 0 7 7 0 0 0 0 7 0 0 0 0 0 0 0 0 7 0 6 6 0 0 7 0 0 4 4 0 0] The scipy function would return [0 0 1 1 0 0 0 0 1 0 0 0 0 0 0 0 0 3 0 2 2 0 0 3 0 0 2 2 0 0] Notice that the 6s and 4s were merged into the label 2. I want them to be labeled as separate groups, e.g.: [0 0 1 1 0 0 0 0 1 0 0 0 0 0 0 0 0 4 0 2 2 0 0 4 0 0 3 3 0 0] I asked this about a year ago and have been using the solution in the accepted answer, however I am working on optimizing the runtime of my code and am revisiting this problem. For the data size I generally work with, the linked solution takes about 1m30s to run. I wrote the following recursive algorithm which takes about 30s running as regular python and with numba's JIT runs in 1-2s (side note, I hate that adjacent function, any tips to make it less messy while still numba-compatible would be appreciated): @numba.njit def adjacent(idx, shape): coords = [] if len(shape) > 2: if idx[0] < shape[0] - 1: coords.append((idx[0] + 1, idx[1], idx[2])) if idx[0] > 0: coords.append((idx[0] - 1, idx[1], idx[2])) if idx[1] < shape[1] - 1: coords.append((idx[0], idx[1] + 1, idx[2])) if idx[1] > 0: coords.append((idx[0], idx[1] - 1, idx[2])) if idx[2] < shape[2] - 1: coords.append((idx[0], idx[1], idx[2] + 1)) if idx[2] > 0: coords.append((idx[0], idx[1], idx[2] - 1)) else: if idx[0] < shape[0] - 1: coords.append((idx[0] + 1, idx[1])) if idx[0] > 0: coords.append((idx[0] - 1, idx[1])) if idx[1] < shape[1] - 1: coords.append((idx[0], idx[1] + 1)) if idx[1] > 0: coords.append((idx[0], idx[1] - 1)) return coords @numba.njit def apply_label(labels, decoded_image, current_label, idx): labels[idx] = current_label for aidx in adjacent(idx, labels.shape): if decoded_image[aidx] == decoded_image[idx] and labels[aidx] == 0: apply_label(labels, decoded_image, current_label, aidx) @numba.njit def label_image(decoded_image): labels = np.zeros_like(decoded_image, dtype=np.uint32) current_label = 0 for idx in zip(*np.where(decoded_image >= 0)): if labels[idx] == 0: current_label += 1 apply_label(labels, decoded_image, current_label, idx) return labels, current_label This worked for some data, but crashed on other data and I found the issue is that when there are very large objects to label, the recursion limit is reached. I tried to rewrite label_image to not use recursion, but it now takes ~10s with numba. Still a huge improvement from where I started, but it seems like it should be possible to get the same performance as the recursive version. 
Here is my iterative version: @numba.njit def label_image(decoded_image): labels = np.zeros_like(decoded_image, dtype=np.uint32) current_label = 0 for idx in zip(*np.where(decoded_image >= 0)): if labels[idx] == 0: current_label += 1 idxs = [idx] while idxs: cidx = idxs.pop() if labels[cidx] == 0: labels[cidx] = current_label for aidx in adjacent(cidx, labels.shape): if labels[aidx] == 0 and decoded_image[aidx] == decoded_image[idx]: idxs.append(aidx) return labels, current_label Is there a way I can improve this?
Can this recursive function be turned into an iterative function with similar performance? Turning this into an iterative function is straightforward, considering it's just a simple depth-first search (you could also use a breadth-first search using a queue instead of a stack here, both work). Simply use a stack to keep track of the nodes to visit. Here's a general solution that works with any number of dimensions: def label_image(decoded_image): shape = decoded_image.shape labels = np.zeros_like(decoded_image, dtype=np.uint32) current_label = 0 for idx in zip(*np.where(decoded_image > 0)): if labels[idx] == 0: current_label += 1 stack = [idx] while stack: top = stack.pop() labels[top] = current_label for i in range(0, len(shape)): if top[i] > 0: neighbor = list(top) neighbor[i] -= 1 neighbor = tuple(neighbor) if decoded_image[neighbor] == decoded_image[idx] and labels[neighbor] == 0: stack.append(neighbor) if top[i] < shape[i] - 1: neighbor = list(top) neighbor[i] += 1 neighbor = tuple(neighbor) if decoded_image[neighbor] == decoded_image[idx] and labels[neighbor] == 0: stack.append(neighbor) return labels Adding or subtracting one from the i-th component of the tuple is awkward though (I'm going over a temporary list here) and numba doesn't accept it (type error). One simple solution would be to explicitly write versions for 2d and 3d, which will likely greatly help performance: @numba.njit def label_image_2d(decoded_image): w, h = decoded_image.shape labels = np.zeros_like(decoded_image, dtype=np.uint32) current_label = 0 for idx in zip(*np.where(decoded_image > 0)): if labels[idx] == 0: current_label += 1 stack = [idx] while stack: x, y = stack.pop() if decoded_image[x, y] != decoded_image[idx] or labels[x, y] != 0: continue # already visited or not part of this group labels[x, y] = current_label if x > 0: stack.append((x-1, y)) if x+1 < w: stack.append((x+1, y)) if y > 0: stack.append((x, y-1)) if y+1 < h: stack.append((x, y+1)) return labels @numba.njit def label_image_3d(decoded_image): w, h, l = decoded_image.shape labels = np.zeros_like(decoded_image, dtype=np.uint32) current_label = 0 for idx in zip(*np.where(decoded_image > 0)): if labels[idx] == 0: current_label += 1 stack = [idx] while stack: x, y, z = stack.pop() if decoded_image[x, y, z] != decoded_image[idx] or labels[x, y, z] != 0: continue # already visited or not part of this group labels[x, y, z] = current_label if x > 0: stack.append((x-1, y, z)) if x+1 < w: stack.append((x+1, y, z)) if y > 0: stack.append((x, y-1, z)) if y+1 < h: stack.append((x, y+1, z)) if z > 0: stack.append((x, y, z-1)) if z+1 < l: stack.append((x, y, z+1)) return labels def label_image(decoded_image): dim = len(decoded_image.shape) if dim == 2: return label_image_2d(decoded_image) assert dim == 3 return label_image_3d(decoded_image) Note also that the iterative solution doesn't suffer from stack limits: np.full((100,100,100), 1) works just fine in the iterative solution, but fails in the recursive solution (segfaults if using numba). Doing a very rudimentary benchmark of for i in range(1, 10000): label_image(np.full((20,20,20), i)) (many iterations to minimize the impact of JIT, could also do a few warmup runs, then start measuring time or similar) The iterative solution seems to be several times faster (about 5x on my machine see below). You could probably optimize the recursive solution and get it to a comparable speed, f.e. by avoiding the temporary coords list or by changing the np.where to > 0. 
I don't know how well numba can optimize the zipped np.where. For further optimization, you could consider (and benchmark) using explicit nested for x in range(0, w): for y in range(0, h): loops there. To remain competitive with the merge strategy proposed by Nick, I've optimized this further, picking some low hanging fruit: Convert the zip to explicit loops with continue rather than np.where. Store decoded_image[idx] in a local variable (ideally shouldn't matter, but doesn't hurt). Reuse the stack. This prevents unnecessary (re)allocations and GC strain. It could further be considered to provide an initial capacity for the stack (of w*h or w*h*l respectively). @numba.njit def label_image_2d(decoded_image): w, h = decoded_image.shape labels = np.zeros_like(decoded_image, dtype=np.uint32) current_label = 0 stack = [] for sx in range(0, w): for sy in range(0, h): start = (sx, sy) image_label = decoded_image[start] if image_label <= 0 or labels[start] != 0: continue current_label += 1 stack.append(start) while stack: x, y = stack.pop() if decoded_image[x, y] != image_label or labels[x, y] != 0: continue # already visited or not part of this group labels[x, y] = current_label if x > 0: stack.append((x-1, y)) if x+1 < w: stack.append((x+1, y)) if y > 0: stack.append((x, y-1)) if y+1 < h: stack.append((x, y+1)) return labels @numba.njit def label_image_3d(decoded_image): w, h, l = decoded_image.shape labels = np.zeros_like(decoded_image, dtype=np.uint32) current_label = 0 stack = [] for sx in range(0, w): for sy in range(0, h): for sz in range(0, l): start = (sx, sy, sz) image_label = decoded_image[start] if image_label <= 0 or labels[start] != 0: continue current_label += 1 stack.append(start) while stack: x, y, z = stack.pop() if decoded_image[x, y, z] != image_label or labels[x, y, z] != 0: continue # already visited or not part of this group labels[x, y, z] = current_label if x > 0: stack.append((x-1, y, z)) if x+1 < w: stack.append((x+1, y, z)) if y > 0: stack.append((x, y-1, z)) if y+1 < h: stack.append((x, y+1, z)) if z > 0: stack.append((x, y, z-1)) if z+1 < l: stack.append((x, y, z+1)) return labels I then cobbled together a benchmark to compare the four approaches (original recursive, old iterative, new iterative, merge-based), putting them in four different modules: import numpy as np import timeit import rec import iter_old import iter_new import merge shape = (100, 100, 100) n = 20 for module in [rec, iter_old, iter_new, merge]: print(module) label_image = module.label_image # Trigger compilation of 2d & 3d functions label_image(np.zeros((1, 1))) label_image(np.zeros((1, 1, 1))) i = 0 def test_full(): global i i += 1 label_image(np.full(shape, i)) print("single group:", timeit.timeit(test_full, number=n)) print("random (few groups):", timeit.timeit( lambda: label_image(np.random.randint(low = 1, high = 10, size = shape)), number=n)) print("random (many groups):", timeit.timeit( lambda: label_image(np.random.randint(low = 1, high = 400, size = shape)), number=n)) print("only groups:", timeit.timeit( lambda: label_image(np.arange(np.prod(shape)).reshape(shape)), number=n)) This outputs something like <module 'rec' from '...'> single group: 32.39212468900041 random (few groups): 14.648884047001047 random (many groups): 13.304533919001187 only groups: 13.513677138000276 <module 'iter_old' from '...'> single group: 10.287227957000141 random (few groups): 17.37535468200076 random (many groups): 14.506630064999626 only groups: 13.132202609998785 <module 'iter_new' from '...'> single group: 
7.388022166000155 random (few groups): 11.585243002000425 random (many groups): 9.560101995000878 only groups: 8.693653742000606 <module 'merge' from '...'> single group: 14.657021331999204 random (few groups): 14.146574055999736 random (many groups): 13.412314713001251 only groups: 12.642367746000673 It seems to me that the improved iterative approach may be better. Note that the original rudimentary benchmark seems to be the worst case for the recursive variant. In general the difference isn't as large. The tested array is pretty small (20Β³). If I test with a larger array (100Β³), and a smaller n (20), I get roughly the following results (rec is omitted because due to stack limits, it would segfault): <module 'iter_old' from '...'> single group: 3.5357716739999887 random (few groups): 4.931695729999774 random (many groups): 3.4671142009992764 only groups: 3.3023930709987326 <module 'iter_new' from '...'> single group: 2.45903080700009 random (few groups): 2.907660342001691 random (many groups): 2.309699692999857 only groups: 2.052835552000033 <module 'merge' from '...'> single group: 3.7620838259990705 random (few groups): 3.3524249689999124 random (many groups): 3.126650959999097 only groups: 2.9456547739991947 The iterative approach still seems to be more efficient.
2
4
76,934,579
2023-8-19
https://stackoverflow.com/questions/76934579/pydanticusererror-if-you-use-root-validator-with-pre-false-the-default-you
I want to execute this code in google colab but I get following error: from llama_index.prompts.prompts import SimpleInputPrompt # Create a system prompt system_prompt = """[INST] <> more string here.<> """ query_wrapper_prompt = SimpleInputPrompt("{query_str} [/INST]") Error: /usr/local/lib/python3.10/dist-packages/pydantic/_internal/_config.py:269: UserWarning: Valid config keys have changed in V2: * 'allow_population_by_field_name' has been renamed to 'populate_by_name' warnings.warn(message, UserWarning) --------------------------------------------------------------------------- PydanticUserError Traceback (most recent call last) <ipython-input-36-c45796b371fe> in <cell line: 3>() 1 # Import the prompt wrapper... 2 # but for llama index ----> 3 from llama_index.prompts.prompts import SimpleInputPrompt 4 # Create a system prompt 5 system_prompt = """[INST] <> 6 frames /usr/local/lib/python3.10/dist-packages/pydantic/deprecated/class_validators.py in root_validator(pre, skip_on_failure, allow_reuse, *__args) 226 mode: Literal['before', 'after'] = 'before' if pre is True else 'after' 227 if pre is False and skip_on_failure is not True: --> 228 raise PydanticUserError( 229 'If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`.' 230 ' Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.', PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`. For further information visit https://errors.pydantic.dev/2.1.1/u/root-validator-pre-skip If I follow the link, there is no solution for my case. How can I solve that problem? Thanks in forward.
In my env, I have pip list | grep pydantic pydantic 2.2.1 I fix the problem, by downgrading pydantic version pip install pydantic==1.10.9
22
36
76,910,725
2023-8-16
https://stackoverflow.com/questions/76910725/pyspark-getting-jaccard-similarity-from-co-ocurrence-matrix
I have a co-occurrence matrix in pyspark for co-occurrences of certain keywords A,B,C A B C A 5 1 0 B 1 3 2 C 0 2 3 How can I calculate the Jaccard similarity from this matrix in Python for all keywords? Is there any library available to do that, or should I simply compute the similarity by using the Jaccard similarity formula?
Let's assume the co-occurrence matrix is given as a list of lists: com = [[5, 1, 0], [1, 3, 2], [0, 2, 3]] n_elem = len(com) The Jaccard similarity of two sets A and B is given by |A ∩ B| / |A ∪ B|. The co-occurrence matrix gives the value of |A|, |B|, and |A ∩ B|. The value of |A ∪ B| is simply |A| + |B| - |A ∩ B|, from which we can find the Jaccard index. First, let's create a list of lists containing ones that is the same size as com. The default value is 1 because the similarity index of a set with itself is 1, and we will not calculate these elements: similarity = [[1 for _ in row] for row in com] Now, we can loop over each pair of values in com and calculate the similarities. The inner loop starts at i+1 because similarity[i][j] is identical to similarity[j][i], so we only need to calculate the upper triangle of the matrix: for i in range(n_elem): a = com[i][i] # |A| for j in range(i+1, n_elem): b = com[j][j] # |B| aib = com[i][j] # |A ∩ B| aub = a + b - aib # |A ∪ B| # Set both off-diagonal elements simultaneously similarity[i][j] = similarity[j][i] = aib / aub This leaves us with the following similarity matrix: [[1 , 0.14285714285714285, 0.0], [0.14285714285714285, 1 , 0.5], [0.0 , 0.5 , 1]] Now, if your co-occurrence matrix is a numpy array (or you're open to using numpy), you can speed up this computation by outsourcing the loops to numpy's C backend. import numpy as np com_arr = np.array([[5, 1, 0], [1, 3, 2], [0, 2, 3]]) n_elem = len(com_arr) First, we can get the occurrence of each element using the diagonal of the matrix: occ = np.diag(com_arr) # array([5, 3, 3]) Next, create the matrix of |A ∪ B|. Remember that |A ∩ B| is already specified by com_arr: aub = occ[:, None] + occ[None, :] - com_arr Since occ is a 1-d array, adding a None index will create a 2-d array of one column (a column vector of shape (3, 1)) and one row (a row vector of shape (1, 3)) respectively. When adding a row vector to a column vector, numpy automatically broadcasts the dimensions so that you end up with a (in this case) square matrix of shape (3, 3). Now, aub looks like this: array([[5, 7, 8], [7, 3, 4], [8, 4, 3]]) Finally, divide the intersection by the union: similarity = com_arr / aub et voila, we have the same values as before: array([[1. , 0.14285714, 0. ], [0.14285714, 1. , 0.5 ], [0. , 0.5 , 1. ]])
2
5
76,957,392
2023-8-22
https://stackoverflow.com/questions/76957392/how-to-sort-the-contents-of-a-text-file
I am trying to sort the content of a text file. Currently, I'm sorting it using excel. I wanted to use python script to automatically sort the contents. The code below does not produce the needed output. Any idea will be very much appreciated. def sorting(filename): infile = open(filename) words = [] for line in infile: temp = line.split() for i in temp: words.append(i) infile.close() words.sort() outfile = open("result.txt", "w") for i in words: outfile.writelines(i) outfile.writelines(" ") outfile.close() sorting("myfile.txt") Current code output: -0.162 -0.638 -2.248 -2.348 -3.427 -4.622 0.00 0.162 0.176 -> more data... myfile.txt # file needed to be sorted ABC1234 -2.385 05:58 1234DCE -3.430 05:58 98ACD12 12.574 05:58 18DDD12 11.564 05:58 453FGH2A 0.351 05:58 A2DA21 0.00 05:58 8FF3EFA -0.720 05:58 ABC1234 -2.348 06:11 1234DCE -3.427 06:11 98ACD12 11.883 06:11 18DDD12 10.883 06:11 453FGH2A 0.176 06:11 A2DA21 0.162 06:11 8FF3EFA 0.888 06:11 ABC1234 -2.248 06:18 1234DCE -4.622 06:18 98ACD12 13.356 06:18 18DDD12 10.915 06:18 453FGH2A 0.00 06:18 A2DA21 -0.162 06:18 8FF3EFA -0.638 06:18 Intended Output: # to be sorted based on the latest time. Name 06:18 06:11 05:58 ABC1234 -2.248 -2.348 -2.385 1234DCE -4.622 -3.427 -3.430 98ACD12 13.356 11.883 12.574 18DDD12 10.915 10.883 11.564 453FGH2A 0.00 0.176 0.351 A2DA21 -0.162 0.162 0.00 8FF3EFA -0.638 0.888 -0.720
I don't usually argue for pandas as the first solution, but in this case I think it is the right answer: import pandas as pd data = {} for row in open('x.csv'): cols = row.rstrip().split() if cols[2] not in data: data[cols[2]] = {} data[cols[2]][cols[0]] = cols[1] print(data) df = pd.DataFrame(data) print(df) Output: {'05:58': {'ABC1234': '-2.385', '1234DCE': '-3.430', '98ACD12': '12.574', '18DDD12': '11.564', '453FGH2A': '0.351', 'A2DA21': '0.00', '8FF3EFA': '-0.720'}, '06:11': {'ABC1234': '-2.348', '1234DCE': '-3.427', '98ACD12': '11.883', '18DDD12': '10.883', '453FGH2A': '0.176', 'A2DA21': '0.162', '8FF3EFA': '0.888'}, '06:18': {'ABC1234': '-2.248', '1234DCE': '-4.622', '98ACD12': '13.356', '18DDD12': '10.915', '453FGH2A': '0.00', 'A2DA21': '-0.162', '8FF3EFA': '-0.638'}} 05:58 06:11 06:18 ABC1234 -2.385 -2.348 -2.248 1234DCE -3.430 -3.427 -4.622 98ACD12 12.574 11.883 13.356 18DDD12 11.564 10.883 10.915 453FGH2A 0.351 0.176 0.00 A2DA21 0.00 0.162 -0.162 8FF3EFA -0.720 0.888 -0.638 (Side note -- you could eliminate two lines of code by using defaultdict for data. In this case, I don't think it's worth the trouble.)
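As a small illustration of the defaultdict side note above, here is a minimal sketch (assuming the same x.csv layout of name, value, time per row); only the membership check changes, the rest of the approach is identical:

import pandas as pd
from collections import defaultdict

# defaultdict(dict) creates the inner dict on first access,
# so the "if cols[2] not in data" check is no longer needed
data = defaultdict(dict)
for row in open('x.csv'):
    cols = row.rstrip().split()
    data[cols[2]][cols[0]] = cols[1]

df = pd.DataFrame(data)
print(df)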
2
3
76,956,697
2023-8-22
https://stackoverflow.com/questions/76956697/efficiently-append-csv-files-python
I am trying to append > 1000 csv files (all with the same format) in Python. The files are anything between 1KB and 30GB, and all the files together total about 200GB. I want to combine these files into a single dataframe. Below is what I am doing, which is very, very slow: folder_path = 'Some path here' csv_files = [file for file in os.listdir(folder_path) if file.endswith('.csv')] combined_data = pd.DataFrame() for e, csv_file in enumerate(csv_files): print(f'Processing {e+1} out of {len(csv_files)}: {csv_file}') combined_data = pd.concat([combined_data, pd.read_csv(os.path.join(folder_path, csv_file), dtype={'stringbvariablename': str})]) One of the variables is a string; everything else is numbers. RAM is not an issue since I am using a cluster.
If all of your files are in the same format, and you are just trying to create a new CSV, then ditch pandas. import csv import pathlib def write_to_writer(writer: csv.writer, path: pathlib.Path, skipheader=False) -> None: with path.open(newline="") as f: reader = csv.reader(f) if skipheader: next(reader) writer.writerows(reader) folder_path = pathlib.Path('Some path here') csv_files = [p for p in folder_path.iterdir() if p.suffix =='.csv'] with open('combined_data.csv', 'w', newline="") as fout: writer = csv.writer(fout) # handle first csv, with header first_path, *rest = csv_paths write_to_writer(writer, first_path, skipheader=False) for path in rest: write_to_writer(writer, path, skipheader=True) This avoids the pd.concat in a loop problem, which is a classic antipattern. However, if you need a large dataframe (because you are going to actually use the dataframe), then your problem is fixed simply by appending to a list and then concating at the end: import pathlib import pandas as pd folder_path = pathlib.Path('Some path here') csv_paths = [p for p in folder_path.iterdir() if p.suffix =='.csv'] df = pd.concat([ pd.read_csv(path, dtype={'stringbvariablename': str}) for path in csv_paths ]) Again, this avoids the anti-pattern (because .appending to a list in a loop is efficient)
2
4
76,956,328
2023-8-22
https://stackoverflow.com/questions/76956328/pandas-column-containing-a-list-of-matched-strings-found-using-str-contains
I have string data that I am matching to a list of terms and I want to create a new column that shows all words found in each row of the dataframe df = pd.DataFrame({'String': ['Cat Dog Fish', 'Cat Dog', 'Pig Horse', 'DogFish']}) print(df) String 0 Cat Dog Fish 1 Cat Dog 2 Pig Horse 3 DogFish 4 CatHorse words = ['Cat', 'Dog', 'Fish'] I know using df.loc[df['String'].str.contains('|'.join(words))].copy() will return all rows that contain at least one of the terms I'm searching for but I would like output that keeps a record of which terms are found in each string like this: String Matched 0 Cat Dog Fish [Cat, Dog, Fish] 1 Cat Dog [Cat, Dog] 2 DogFish [Dog, Fish] 3 CatHorse [Cat] or even just String Matched 0 Cat Dog Fish CatDogFish 1 Cat Dog CatDog 2 DogFish DogFish 3 CatHorse Cat Not sure where to begin with making this column, any help is appreciated
Try: words = ["Cat", "Dog", "Fish"] df["Matched"] = df["String"].apply(lambda s: [w for w in words if w in s]) print(df) Prints: String Matched 0 Cat Dog Fish [Cat, Dog, Fish] 1 Cat Dog [Cat, Dog] 2 Pig Horse [] 3 DogFish [Dog, Fish]
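If the concatenated-string form shown in the question ("CatDogFish") is preferred over a list, the same idea works with str.join, a minimal sketch:

import pandas as pd

df = pd.DataFrame({'String': ['Cat Dog Fish', 'Cat Dog', 'Pig Horse', 'DogFish']})
words = ["Cat", "Dog", "Fish"]

# join the matched words into a single string, e.g. "CatDogFish"
df["Matched"] = df["String"].apply(lambda s: "".join(w for w in words if w in s))
print(df)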
2
3
76,947,564
2023-8-21
https://stackoverflow.com/questions/76947564/how-can-i-quickly-sum-over-a-pandas-groupby-object-while-handling-nans
I have a DataFrame with key and value columns. value is sometimes NA: df = pd.DataFrame({ 'key': np.random.randint(0, 1_000_000, 100_000_000), 'value': np.random.randint(0, 1_000, 100_000_000).astype(float), }) df.loc[df.value == 0, 'value'] = np.nan I want to group by key and sum over the value column. If any value is NA for a key, I want the sum to be NA. The code in this answer takes 35.7 seconds on my machine: df.groupby('key')['value'].apply(np.array).apply(np.sum) This is a lot slower than what is theoretically possible. The built-in Pandas SeriesGroupBy.sum takes 6.31 seconds on my machine: df.groupby('key')['value'].sum() but it doesn't support NA handling (see this GitHub issue). What code can I write to get comparable performance to the built-in operator while still handling NaNs?
One workaround could be to replace the NaNs by Inf: df.fillna({'value': np.inf}).groupby('key')['value'].sum().replace(np.inf, np.nan) Faster alternative: df['value'].fillna(np.inf).groupby(df['key']).sum().replace(np.inf, np.nan) Example output: key 0 45208.0 1 NaN 2 62754.0 3 50001.0 4 51073.0 ... 99995 55102.0 99996 43048.0 99997 49497.0 99998 43301.0 99999 NaN Name: value, Length: 100000, dtype: float64 Timing (on 10m rows): # original sum 743 ms ± 81 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # Inf workaround 918 ms ± 70.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # Inf workaround (alternative) 773 ms ± 60.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # custom apply with numpy 5.99 s ± 263 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
2
3
76,949,979
2023-8-22
https://stackoverflow.com/questions/76949979/conda-when-running-give-runtimeerror-openssl-3-0s-legacy-provider-failed-to-lo
I was accidentally go to my miniconda directory and rm -r ca* (just because I using Arch and when update miniconda3 package, it said there are some certificates were there so it can't perform update, so I removed them) Now every time I run conda command, it give me RuntimeError: OpenSSL 3.0's legacy provider failed to load. here is the full log: $conda env list Traceback (most recent call last): File "/opt/miniconda3/lib/python3.11/site-packages/conda/exception_handler.py", line 16, in __call__ return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/opt/miniconda3/lib/python3.11/site-packages/conda/cli/main.py", line 70, in main_subshell p = generate_parser() ^^^^^^^^^^^^^^^^^ File "/opt/miniconda3/lib/python3.11/site-packages/conda/cli/conda_argparse.py", line 65, in generate_parser p = ArgumentParser( ^^^^^^^^^^^^^^^ File "/opt/miniconda3/lib/python3.11/site-packages/conda/cli/conda_argparse.py", line 152, in __init__ self._subcommands = context.plugin_manager.get_hook_results("subcommands") ^^^^^^^^^^^^^^^^^^^^^^ File "/opt/miniconda3/lib/python3.11/site-packages/conda/base/context.py", line 502, in plugin_manager from ..plugins.manager import get_plugin_manager File "/opt/miniconda3/lib/python3.11/site-packages/conda/plugins/__init__.py", line 3, in <module> from .hookspec import hookimpl # noqa: F401 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/miniconda3/lib/python3.11/site-packages/conda/plugins/hookspec.py", line 9, in <module> from .types import CondaSolver, CondaSubcommand, CondaVirtualPackage File "/opt/miniconda3/lib/python3.11/site-packages/conda/plugins/types.py", line 7, in <module> from ..core.solve import Solver File "/opt/miniconda3/lib/python3.11/site-packages/conda/core/solve.py", line 41, in <module> from .index import _supplement_index_with_system, get_reduced_index File "/opt/miniconda3/lib/python3.11/site-packages/conda/core/index.py", line 24, in <module> from .subdir_data import SubdirData, make_feature_record File "/opt/miniconda3/lib/python3.11/site-packages/conda/core/subdir_data.py", line 53, in <module> from ..trust.signature_verification import signature_verification File "/opt/miniconda3/lib/python3.11/site-packages/conda/trust/signature_verification.py", line 12, in <module> from conda_content_trust.authentication import verify_delegation, verify_root File "/opt/miniconda3/lib/python3.11/site-packages/conda_content_trust/authentication.py", line 34, in <module> from .common import ( File "/opt/miniconda3/lib/python3.11/site-packages/conda_content_trust/common.py", line 66, in <module> import cryptography.hazmat.backends.openssl.ed25519 File "/opt/miniconda3/lib/python3.11/site-packages/cryptography/hazmat/backends/openssl/__init__.py", line 6, in <module> from cryptography.hazmat.backends.openssl.backend import backend File "/opt/miniconda3/lib/python3.11/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 61, in <module> from cryptography.hazmat.bindings.openssl import binding File "/opt/miniconda3/lib/python3.11/site-packages/cryptography/hazmat/bindings/openssl/binding.py", line 232, in <module> Binding.init_static_locks() File "/opt/miniconda3/lib/python3.11/site-packages/cryptography/hazmat/bindings/openssl/binding.py", line 206, in init_static_locks cls._ensure_ffi_initialized() File "/opt/miniconda3/lib/python3.11/site-packages/cryptography/hazmat/bindings/openssl/binding.py", line 195, in _ensure_ffi_initialized _legacy_provider_error(cls._legacy_provider_loaded) File 
"/opt/miniconda3/lib/python3.11/site-packages/cryptography/hazmat/bindings/openssl/binding.py", line 104, in _legacy_provider_error raise RuntimeError( RuntimeError: OpenSSL 3.0's legacy provider failed to load. This is a fatal error by default, but cryptography supports running without legacy algorithms by setting the environment variable CRYPTOGRAPHY_OPENSSL_NO_LEGACY. If you did not expect this error, you have likely made a mistake with your OpenSSL configuration. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/miniconda3/bin/conda", line 13, in <module> sys.exit(main()) ^^^^^^ File "/opt/miniconda3/lib/python3.11/site-packages/conda/cli/main.py", line 129, in main return conda_exception_handler(main, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/miniconda3/lib/python3.11/site-packages/conda/exception_handler.py", line 376, in conda_exception_handler return_value = exception_handler(func, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/miniconda3/lib/python3.11/site-packages/conda/exception_handler.py", line 19, in __call__ return self.handle_exception(exc_val, exc_tb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/miniconda3/lib/python3.11/site-packages/conda/exception_handler.py", line 75, in handle_exception return self.handle_unexpected_exception(exc_val, exc_tb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/miniconda3/lib/python3.11/site-packages/conda/exception_handler.py", line 88, in handle_unexpected_exception self.print_unexpected_error_report(error_report) File "/opt/miniconda3/lib/python3.11/site-packages/conda/exception_handler.py", line 159, in print_unexpected_error_report from .cli.main_info import get_env_vars_str, get_main_info_str File "/opt/miniconda3/lib/python3.11/site-packages/conda/cli/main_info.py", line 15, in <module> from ..core.index import _supplement_index_with_system File "/opt/miniconda3/lib/python3.11/site-packages/conda/core/index.py", line 24, in <module> from .subdir_data import SubdirData, make_feature_record File "/opt/miniconda3/lib/python3.11/site-packages/conda/core/subdir_data.py", line 53, in <module> from ..trust.signature_verification import signature_verification File "/opt/miniconda3/lib/python3.11/site-packages/conda/trust/signature_verification.py", line 12, in <module> from conda_content_trust.authentication import verify_delegation, verify_root File "/opt/miniconda3/lib/python3.11/site-packages/conda_content_trust/authentication.py", line 34, in <module> from .common import ( File "/opt/miniconda3/lib/python3.11/site-packages/conda_content_trust/common.py", line 334, in <module> cryptography.hazmat.backends.openssl.ed25519._Ed25519PrivateKey, # DANGER ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: module 'cryptography.hazmat.backends' has no attribute 'openssl' How can I fix this?
My problem was fixed as follows: export CRYPTOGRAPHY_OPENSSL_NO_LEGACY=1 conda install cryptography That's all.
4
9
76,950,969
2023-8-22
https://stackoverflow.com/questions/76950969/error-response-from-daemon-failed-to-create-task-for-container
I have a django app. That also has redis, celery and flower. Everything is working on my local machine. But When I am trying to dockerize it The redis and django app is starting. But celery and flower is failing to start. It is giving me this error while starting celery: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "celery -A core worker -P eventlet --autoscale=10,1 -l INFO": executable file not found in $PATH: unknown Dockerfile # For more information, please refer to https://aka.ms/vscode-docker-python FROM python:bullseye EXPOSE 8000 RUN apt update && apt upgrade -y # && apk install cron iputils-ping sudo nano -y # Install pip requirements COPY requirements.txt . RUN python -m pip install -r requirements.txt RUN rm requirements.txt WORKDIR /app COPY ./src /app RUN mkdir "log" # Set the environment variables ENV PYTHONUNBUFFERED=1 ENV DJANGO_SETTINGS_MODULE=core.settings # Creates a non-root user with an explicit UID and adds permission to access the /app folder # For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app # RUN echo 'appuser ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/appuser USER appuser # During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug ENTRYPOINT ["sh", "entrypoint.sh"] CMD ['gunicorn', 'core.wsgi:application', '--bind', '0.0.0.0:8000'] entrypoint.sh #!/bin/sh echo "Apply database migrations" python manage.py migrate # Start server echo "Starting server" # run the container CMD exec "$@" docker-compose.yml version: "3.8" services: mailgrass_backend: container_name: mailgrass_backend restart: unless-stopped ports: - "8000:8000" volumes: - npm:/app - ./.env:/app/.env:ro networks: - npm env_file: .env depends_on: - redis build: context: . dockerfile: ./Dockerfile redis: container_name: redis image: redis:7.0-alpine restart: unless-stopped env_file: .env ports: - "6379:6379" command: - 'redis-server' networks: - npm celery: container_name: celery restart: unless-stopped env_file: .env volumes: - npm:/app - ./.env:/app/.env:ro build: context: . dockerfile: ./Dockerfile networks: - npm depends_on: - redis entrypoint: - "celery -A core worker -P eventlet --autoscale=10,1 -l INFO" flower: container_name: flower restart: unless-stopped ports: - "5555:5555" env_file: .env volumes: - npm:/app - ./.env:/app/.env:ro build: context: . 
dockerfile: ./Dockerfile networks: - npm depends_on: - redis - celery entrypoint: - "celery -b redis://redis:6379 flower" volumes: npm: postgres: networks: npm: requirements.txt amqp==5.1.1 asgiref==3.7.2 attrs==23.1.0 billiard==4.1.0 black==23.7.0 celery==5.3.1 certifi==2023.7.22 cffi==1.15.1 charset-normalizer==3.2.0 click==8.1.6 click-didyoumean==0.3.0 click-plugins==1.1.1 click-repl==0.3.0 cron-descriptor==1.4.0 cryptography==41.0.3 defusedxml==0.7.1 dj-crontab==0.8.0 dj-rest-auth==4.0.1 Django==4.2.4 django-allauth==0.54.0 django-annoying==0.10.6 django-celery-beat==2.5.0 django-cleanup==8.0.0 django-cors-headers==4.2.0 django-debug-toolbar==4.2.0 django-filter==23.2 django-phonenumber-field==7.1.0 django-timezone-field==5.1 djangorestframework==3.14.0 djangorestframework-simplejwt==5.2.2 dnspython==2.4.1 drf-spectacular==0.26.4 email-validator==2.0.0.post2 eventlet==0.33.3 flower==2.0.1 greenlet==2.0.2 humanize==4.7.0 idna==3.4 inflection==0.5.1 isort==5.12.0 jsonschema==4.18.6 jsonschema-specifications==2023.7.1 kombu==5.3.1 mailchecker==5.0.9 Markdown==3.4.4 mypy-extensions==1.0.0 oauthlib==3.2.2 packaging==23.1 pathspec==0.11.2 phonenumberslite==8.13.18 Pillow==10.0.0 platformdirs==3.10.0 prometheus-client==0.17.1 prompt-toolkit==3.0.39 pycparser==2.21 PyJWT==2.8.0 pyotp==2.9.0 python-crontab==3.0.0 python-dateutil==2.8.2 python-dotenv==1.0.0 python3-openid==3.2.0 pytz==2023.3 PyYAML==6.0.1 redis==4.6.0 referencing==0.30.2 requests==2.31.0 requests-oauthlib==1.3.1 rpds-py==0.9.2 six==1.16.0 sqlparse==0.4.4 tornado==6.3.2 tzdata==2023.3 uritemplate==4.1.1 urllib3==2.0.4 vine==5.0.0 wcwidth==0.2.6 whitenoise==6.5.0 gunicorn What am I doing wrong here?
I have found the problem in my docker-compose.yml. I had defined entrypoint like this: entrypoint: - "celery -A core worker -P eventlet --autoscale=10,1 -l INFO" But the correct way to define this in one line is this: entrypoint: "celery -A core worker -P eventlet --autoscale=10,1 -l INFO"
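For reference, the exec (list) form of entrypoint also works in docker-compose, but then every argument must be its own list element, e.g. entrypoint: ["celery", "-A", "core", "worker", "-P", "eventlet", "--autoscale=10,1", "-l", "INFO"]. The original error appeared because the single-element list made Docker look for one executable literally named "celery -A core worker -P eventlet --autoscale=10,1 -l INFO".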
3
0
76,948,077
2023-8-21
https://stackoverflow.com/questions/76948077/python-automatically-matching-named-parameters
Is there any way to pass parameters to a Python function such that if the variable name matches the parameter name, the function will ensure the input is correct? Let's say I have a function as such: def func(foo, bar): return do_something_with(foo, bar) and I wanted to call it as such: foo = get_foo() bar = get_bar() ret_val = func(foo=foo, bar=bar) I can do this, but I feel like when the parameter name and the variable name are the same, there should be some mechanism to automatically match up the parameters (like NATURAL JOIN in SQL). Is there any such mechanism?
There is no special function call syntax to pass variables as keyword arguments of the same name. If that's how you want to call your function, you must write func(foo=foo, bar=bar). While there are a few ways you could work around the issue, they're all so much worse than just writing the keywords out that I wouldn't recommend them. As an example of a bad workaround, you could define your function to take arbitrary extra keyword arguments (with def func(foo, bar, **kwargs_to_be_ignored)) and then call the function with func(**locals()). That will pass in every local variable in your namespace as a keyword argument. In the body of some functions with few local variables, this might be reasonably clean, but in many other contexts (especially at the global level) it will very often pass a whole mess of unnecessary stuff. And if your function has optional arguments (e.g. def func(foo, bar, baz='default')) you can wind up accidentally passing in an overriding value to an argument you didn't want to pass if you just happen to have a variable with that argument's name in your local namespace.
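A minimal sketch of that workaround, purely for illustration (get_foo and get_bar stand in for the question's hypothetical functions), which also shows why it is not recommended:

def get_foo():
    return 1

def get_bar():
    return 2

def func(foo, bar, **_ignored):  # swallow any extra keyword arguments
    return (foo, bar)

foo = get_foo()
bar = get_bar()
# At module level, locals() is globals(), so this also passes func, get_foo,
# get_bar, __name__, ... -- all silently swallowed by **_ignored.
ret_val = func(**locals())
print(ret_val)  # (1, 2)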
3
1
76,948,587
2023-8-21
https://stackoverflow.com/questions/76948587/multi-indexed-dataframe-join-keep-updated-data-if-not-nan-and-append-on-new-ind
I have two DataFrames with multiple indices named df_base and df_updates. I want to combine these DataFrames into a single DataFrame and keep the multi indices. >>> import numpy as np >>> import pandas as pd >>> df_base = pd.DataFrame( ... { ... "price": { ... ("2019-01-01", "1001"): 100, ... ("2019-01-01", "1002"): 100, ... ("2019-01-01", "1003"): 100, ... ("2019-01-02", "1001"): 100, ... ("2019-01-02", "1002"): 100, ... ("2019-01-02", "1003"): 100, ... ("2019-01-03", "1001"): 100, ... ("2019-01-03", "1002"): 100, ... ("2019-01-03", "1003"): 100, ... } ... }, ... ) >>> df_base.index.names = ["date", "id"] >>> df_base.convert_dtypes() price date id 2019-01-01 1001 100 1002 100 1003 100 2019-01-02 1001 100 1002 100 1003 100 2019-01-03 1001 100 1002 100 1003 100 >>> >>> df_updates = pd.DataFrame( ... { ... "price": { ... ("2019-01-01", "1001"): np.nan, ... ("2019-01-01", "1002"): 100, ... ("2019-01-01", "1003"): 100, ... ("2019-01-02", "1001"): 100, ... ("2019-01-02", "1002"): 100, ... ("2019-01-02", "1003"): 100, ... ("2019-01-03", "1001"): 100, ... ("2019-01-03", "1002"): 100, ... ("2019-01-03", "1003"): 100, ... } ... } ... ) >>> df_updates.index.names = ["date", "id"] >>> df_updates.convert_dtypes() price date id 2019-01-01 1001 <NA> 1002 99 1003 99 1004 100 I want to combine them with the following rules: Keep the old data if the new data is not specified (NaN) Append the new data if the indices doesn't exist in the base DataFrame I already tried using .join but it raise an error >>> df_base.join(df_updates) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[48], line 21 ... ValueError: columns overlap but no suffix specified: Index(['price'], dtype='object') even if I added suffix, it just make the data more complex (need another solution) I also already tried using .update, but the new data with different indices from base were not included in the results >>> df_base.update(df_updates) >>> df_base price date id 2019-01-01 1001 100.0 1002 99.0 1003 99.0 2019-01-02 1001 100.0 1002 100.0 1003 100.0 2019-01-03 1001 100.0 1002 100.0 1003 100.0 And the last, I also try a "tricky" operation >>> df_base.update(df_updates) >>> df_base = df_updates.combine_first(df_base) >>> df_base price date id 2019-01-01 1001 100.0 1002 99.0 1003 99.0 1004 100.0 2019-01-02 1001 100.0 1002 100.0 1003 100.0 2019-01-03 1001 100.0 1002 100.0 1003 100.0 It is the result that I expected, but I'm not sure if it's the best solution for this case, I try using %timeit, and the results are >>> %timeit df_base.update(df_updates) 345 Β΅s Β± 17.1 Β΅s per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) >>> %timeit df_updates.combine_first(df_base) 1.36 ms Β± 10.3 Β΅s per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) when using large data the results are >>> %timeit df_base.update(df_updates) 2.38 ms Β± 180 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) >>> %timeit df_updates.combine_first(df_base) 9.65 ms Β± 400 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) Is that the best solution for my case or is there any more efficient/optimized function (I expected a single liner pandas function)? Thanks! 
Edit 1: Full Code import numpy as np import pandas as pd df_base = pd.DataFrame( { "price": { ("2019-01-01", "1001"): 100, ("2019-01-01", "1002"): 100, ("2019-01-01", "1003"): 100, ("2019-01-02", "1001"): 100, ("2019-01-02", "1002"): 100, ("2019-01-02", "1003"): 100, ("2019-01-03", "1001"): 100, ("2019-01-03", "1002"): 100, ("2019-01-03", "1003"): 100, } }, ) df_base.index.names = ["date", "id"] df_base.convert_dtypes() df_updates = pd.DataFrame( { "price": { ("2019-01-01", "1001"): np.nan, ("2019-01-01", "1002"): 100, ("2019-01-01", "1003"): 100, ("2019-01-02", "1001"): 100, ("2019-01-02", "1002"): 100, ("2019-01-02", "1003"): 100, ("2019-01-03", "1001"): 100, ("2019-01-03", "1002"): 100, ("2019-01-03", "1003"): 100, } } ) df_updates.index.names = ["date", "id"] df_updates.convert_dtypes() df_base.update(df_updates) df_base = df_updates.combine_first(df_base) df_base
You shouldn't need to update then combine_first, just combine_first: df_base = df_updates.combine_first(df_base) Output: price date id 2019-01-01 1001 100.0 1002 99.0 1003 99.0 1004 100.0 2019-01-02 1001 100.0 1002 100.0 1003 100.0 2019-01-03 1001 100.0 1002 100.0 1003 100.0
2
2
76,948,670
2023-8-21
https://stackoverflow.com/questions/76948670/regex-match-a-word-but-do-not-match-a-phrase-that-can-appear-anywhere-in-the-p
I have been spending time on this regex but I can't get it to work. I need to match a bunch of words in a phrase, but if the same word occurs within a certain set of words, I do not want that occurrence to be captured. For example: phrase: Hi, I am talking about a recall on the product I bought last month. If I recall correctly, I purchased this at your store on August 15th. Can you tell me if I can get a refund on this recall? The result should match the first recall and the last recall, but it should not match 'If I recall', since those three words together don't refer to the product recall. I tried different variations of this but couldn't get it to work. This matches all 'recall' terms: (?<!If\sI\srecall).*?(recalls?|recalled).*?(?!If\sI\srecall) I am using Python 3.10 to test this. Any help would be appreciated.
If you want to match the 2 words: (?<!\bIf\sI\s)\brecall(?:s|ed)?\b The pattern matches: (?<!\bIf\sI\s) Negative lookbehind, assert not If I to the left \brecall(?:s|ed)? Match one of the words recall recalls recalled \b A word boundary Regex demo | Python demo
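For completeness, a small usage sketch with the question's sample phrase; re.findall with the pattern above should return only the first and last occurrence:

import re

phrase = ("Hi, I am talking about a recall on the product I bought last month. "
          "If I recall correctly, I purchased this at your store on august 15th. "
          "Can you tell me if I can get a refund on this recall?")

pattern = r"(?<!\bIf\sI\s)\brecall(?:s|ed)?\b"
print(re.findall(pattern, phrase))  # ['recall', 'recall'] -- the 'If I recall' one is skipped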
2
3
76,946,298
2023-8-21
https://stackoverflow.com/questions/76946298/loading-of-gdal-library-impossible-after-an-upgrade
I have a Python environment with Django 3 that has to use GDAL (because the DB engine I use is django.contrib.gis.db.backends.postgis). After an upgrade of GDAL (from 3.6.4_6 to 3.7.1_1), I have this exception on any command to run the django project : File "~/.pyenv/versions/3.8.9/lib/python3.8/ctypes/__init__.py", line 373, in __init__ self._handle = _dlopen(self._name, mode) OSError: dlopen(/usr/local/lib/libgdal.dylib, 0x0006): Symbol not found: __ZN3Aws6Client19ClientConfigurationC1Ev Referenced from: <FA8C3295-2793-3C69-A419-16C41753696B> /opt/homebrew/Cellar/apache-arrow/12.0.1_4/lib/libarrow.1200.1.0.dylib Expected in: <BDB1F1E3-0BE9-3D7D-A57E-9D9F8CAD197A> /opt/homebrew/Cellar/aws-sdk-cpp/1.11.145/lib/libaws-cpp-sdk-core.dylib I've managed to isolate the problem in a fresh python environment with only loading the GDAL library : from ctypes import CDLL; CDLL("/usr/local/lib/libgdal.dylib"). Note, I'm on a Mac (CPU M2 Pro), I've installed GDAL and all its dependancies via brew. I've tried reinstalling it from fresh but it didn't change a thing and I've also tried with different python versions. Aftermatch: An issue was opened soon after my post (#140082) and the pull request (!140127) fixing it was merged on the 22nd of August 2023. So you just need to do brew update && brew upgrade
I faced the same OSError: dlopen(/usr/local/lib/libgdal.dylib, 0x0006): Symbol not found: __ZN3Aws6Client19ClientConfigurationC1Ev . Applied the approach suggested by @bennylope (https://nelson.cloud/how-to-install-older-versions-of-homebrew-packages/) and got it worked: wget https://raw.githubusercontent.com/Homebrew/homebrew-core/824c551b4f514e7cd86358133924906b982c87e2/Formula/postgis.rb wget https://raw.githubusercontent.com/Homebrew/homebrew-core/0faa586d8415cd2ced33345aa3e13c664c92aeda/Formula/a/aws-sdk-cpp.rb brew install postgis.rb brew uninstall --ignore-dependencies aws-sdk-cpp brew install aws-sdk-cpp.rb brew link aws-sdk-cpp
2
2
76,945,282
2023-8-21
https://stackoverflow.com/questions/76945282/python-asyncio-run-in-executor-never-done
I running the following code in python 3.10.12. There is a sync function fetch_data, which should return 1 immediately. And in main function, a CPU-bound task like sum(i for i in range(int(1e6))) which cost 4~5s. import asyncio def fetch_data(): print("done") return 1 async def main(): loop = asyncio.get_event_loop() task = loop.run_in_executor(None, fetch_data) sum(i for i in range(int(1e6))) print(task.done()) asyncio.run(main()) The output I expected is done True But what I got is done False The fetch_data seems never done, but when I insert await asyncio.sleep(0) before print(task.done()), the output is what I expected. import asyncio def fetch_data(): print("done") return 1 async def main(): loop = asyncio.get_event_loop() task = loop.run_in_executor(None, fetch_data) sum(i for i in range(int(1e6))) await asyncio.sleep(0) # After sleep(0), the output is as expected. print(task.done()) asyncio.run(main()) Why is this happening? What should I do to achieve what I needed without asyncio.sleep(0)?
It's happening because you don't give the executor a chance to process its tasks. With await asyncio.sleep(0) you interrupt your main() function, the executor processes the task, and the result is as expected. If asyncio.sleep is a problem, you can use another executor: import asyncio import concurrent.futures def fetch_data(): print("done", flush=True) print() return 1 async def main(): with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor: task = executor.submit(fetch_data) sum(i for i in range(int(1e8))) print(task.done()) asyncio.run(main()) Prints: done True
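As a side note, if switching executors is not desirable, the future returned by run_in_executor in the question can simply be awaited; awaiting both yields control to the event loop and retrieves the result. A minimal sketch:

import asyncio

def fetch_data():
    print("done")
    return 1

async def main():
    loop = asyncio.get_running_loop()
    task = loop.run_in_executor(None, fetch_data)  # runs in the default thread pool
    sum(i for i in range(int(1e6)))                # CPU-bound work on the main thread
    result = await task                            # yields to the loop until the future is done
    print(task.done(), result)                     # True 1

asyncio.run(main())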
2
3
76,945,193
2023-8-21
https://stackoverflow.com/questions/76945193/populate-relationship-in-sqlalchemy-with-query-join
Consider the following models class A(Base): __tablename__ = "as" id = mapped_column(Integer, primary_key=True) b_id = mapped_column(ForeignKey("bs.id")) b: Mapped[B] = relationship() class B(Base): __tablename__ = "bs" id = mapped_column(Integer, primary_key=True) c_id = mapped_column(ForeignKey("cs.id")) c: Mapped[C] = relationship() x = mapped_column(Integer) class C(Base): __tablename__ = "cs" id = mapped_column(Integer, primary_key=True) y = mapped_column(Integer) I want to query A objects with constraints on the associated .b.x and .b.c.y value, and I want the resulting A objects to have populated fields (not lazy). If I use joinedload(A.b).joinedload(B.c), I cannot apply constraints directly: select(A) .options(joinedload(A.b).joinedload(B.c)) .where(B.x == 0, C.y == 0) SELECT `as`.id, `as`.b_id, cs_1.id AS id_1, cs_1.y, bs_1.id AS id_2, bs_1.c_id, bs_1.x FROM `as` LEFT OUTER JOIN bs AS bs_1 ON bs_1.id = `as`.b_id LEFT OUTER JOIN cs AS cs_1 ON cs_1.id = bs_1.c_id, bs, cs WHERE bs.x = ? AND cs.y = ? and as you can see, the constraints are not on the joined table, so the query is incorrect. If I use .has(), I get the proper results but this creates am inefficient query (in practice I have many more constraints and large tables): select(A) .options(joinedload(A.b).joinedload(B.c)) .where( A.b.has( and_(B.x == 0, b.c.has(C.y == 0)) ) ) SELECT `as`.id, `as`.b_id, cs_1.id AS id_1, cs_1.y, bs_1.id AS id_2, bs_1.c_id, bs_1.x FROM `as` LEFT OUTER JOIN bs AS bs_1 ON bs_1.id = `as`.b_id LEFT OUTER JOIN cs AS cs_1 ON cs_1.id = bs_1.c_id WHERE EXISTS (SELECT 1 FROM bs WHERE bs.id = `as`.b_id AND bs.x = ? AND (EXISTS (SELECT 1 FROM cs WHERE cs.id = bs.c_id AND cs.y = ?))) with (EXISTS (SELECT 1 ...)) which are inefficient and not needed here. If I use .join(), I can get clean request, but then the relationship fields do not get populated automatically: select(A) .join(B, A.b_id == B.id) .join(C, B.c_id == C.id) .where(B.x == 0, C.y == 0) SELECT `as`.id, `as`.b_id FROM `as` INNER JOIN bs ON `as`.b_id = bs.id INNER JOIN cs ON bs.c_id = cs.id WHERE bs.x = ? AND cs.y = ? and as you can see, the fields for A.b and A.b.c are not included, so accessing A.b will trigger a new request (same for A.b.c of course). If I combine .join and joinedload, the request is not valid: select(A) .join(B, A.b_id == B.id) .join(C, B.c_id == C.id) .where(B.x == 0, C.y == 0) .options(joinedload(A.b).joinedload(B.c)) SELECT `as`.id, `as`.b_id, cs_1.id AS id_1, cs_1.y, bs_1.id AS id_2, bs_1.c_id, bs_1.x FROM `as` INNER JOIN bs ON `as`.b_id = bs.id INNER JOIN cs ON bs.c_id = cs.id LEFT OUTER JOIN bs AS bs_1 ON bs_1.id = `as`.b_id LEFT OUTER JOIN cs AS cs_1 ON cs_1.id = bs_1.c_id WHERE bs.x = ? AND cs.y = ? contains 4 joins instead of two. Is there a way to get A.b and A.b.c populated as-if I used joinedload, but using a SQL requests similar to .join()?
How's this: from sqlalchemy.orm import contains_eager query = ( select(A) .join(A.b) .join(B.c) .where(and_(B.x == 0, C.y == 0)) .options(contains_eager(A.b), contains_eager(A.b, B.c)) ) Original answer: select(A) .join(A.b) .join(B.c) .where(and_(B.x == 0, C.y == 0)) .options(joinedload(A.b).joinedload(B.c))
2
3
76,946,010
2023-8-21
https://stackoverflow.com/questions/76946010/getting-list-of-unique-special-characters
I want to obtain a list of all the unique characters in a text. A particularity of the text is that it includes composed characters like s̈, b̃. So when I split the text, the special characters are separated. For example, this character s̈ is separated into two characters, s and ¨. This is an example of the text I want to process. sentence = "nejon ámas̈hó T̃iqu c̈ab̃op" print(sentence) print(list(set(sentence))) I want to obtain a list with the unique characters. For this sentence, this list should be expected_list = ['a', 'á', 'b̃', 'c̈', 'e', 'h', 'i', 'j', 'm', 'n', 'o', 'ó', 'p', 'q', 's̈', 'T̃', 'u' ] but it is actual_list = ['j', 'p', 'c', 'n', 'a', ' ', 'i', 'á', 'o', 'T', 'u', '̃', 'h', '̈', 'q', 's', 'e', 'm', 'b', 'ó'] I was reading that I can normalize the special characters as follows: import unicodedata # Only for the character s̈ print(ascii(unicodedata.normalize('NFC', '\u0073\u00a8'))) #prints 's\xa8' But I don't know how to continue. Any help would be greatly appreciated.
Handling composed characters in Python can be a bit tricky due to the nature of how they are encoded. Try the grapheme library, which specifically deals with grapheme clusters (textual units that are displayed as a single character). Install the grapheme library using pip: pip install grapheme or, the way I prefer (to make sure it installs against the current python binary): python3 -m pip install grapheme Then, you can use it to extract the unique grapheme clusters from the sentence: import grapheme sentence = "nejon ámas̈hó T̃iqu c̈ab̃op" unique_characters = sorted(set(grapheme.graphemes(sentence))) print(unique_characters)
2
4
76,916,457
2023-8-16
https://stackoverflow.com/questions/76916457/how-can-i-implement-real-time-sentiment-analysis-on-live-audio-streams-using-pyt
I'm currently working on a project where I need to perform real-time sentiment analysis on live audio streams using Python. The goal is to analyze the sentiment expressed in the spoken words and provide insights in real-time. I've done some research and found resources on text-based sentiment analysis, but I'm unsure about how to adapt these techniques to audio streams. Context and Efforts: Research: I've researched various libraries and tools for sentiment analysis, such as Natural Language Processing (NLP) libraries like NLTK and spaCy. However, most resources I found focus on text data rather than audio. Audio Processing: I'm familiar with libraries like pyaudio and soundfile in Python for audio recording and processing. I've successfully captured live audio streams using these libraries. Text-to-Speech Conversion: I've experimented with converting the spoken words from the audio streams into text using libraries like SpeechRecognition to prepare the data for sentiment analysis. Challenges: Sentiment Analysis: My main challenge is adapting the sentiment analysis techniques to audio data. I'm not sure if traditional text-based sentiment analysis models can be directly applied to audio, or if there are specific approaches for this scenario. Real-Time Processing: I'm also concerned about the real-time aspect of the analysis. How can I ensure that the sentiment analysis is performed quickly enough to provide insights in real-time without introducing significant delays? Question: I'm seeking guidance on the best approach to implement real-time sentiment analysis on live audio streams using Python. Are there any specialized libraries or techniques for audio-based sentiment analysis that I should be aware of? How can I effectively process the audio data and perform sentiment analysis in real-time? Any insights, code examples, or recommended resources would be greatly appreciated.
The solution I found was to follow these steps: Adding middleware to make text from the audio/video stream Running the sentiment analysis for each complete sentence (using Whisper for the transcription, as @doneforaiur suggested) So, the latency here depends on the length of the sentence: it's the trade-off for getting the analysis over the full context, since splitting into smaller chunks doesn't reflect the whole sentence and thus loses the complete context. There is a possible way to extend this by feeding multiple sentences to the analysis system to get deeper context, since we sometimes describe a scenario with multiple sentences; I'm still thinking about that. Thanks, @doneforaiur, for setting me in the right direction.
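A rough sketch of that sentence-level pipeline, assuming the openai-whisper package for transcription and NLTK's VADER for the sentiment step (both are stand-ins, not necessarily what the author used), and assuming the live stream is being saved into short audio clips such as live_chunk.wav (a hypothetical path):

# pip install openai-whisper nltk
import whisper
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
nltk.download("punkt", quiet=True)  # sentence tokenizer data

model = whisper.load_model("base")  # a small model keeps latency lower
sia = SentimentIntensityAnalyzer()

def analyze_clip(path):
    # transcribe the clip to text, then score each complete sentence
    text = model.transcribe(path)["text"]
    for sentence in nltk.sent_tokenize(text):
        scores = sia.polarity_scores(sentence)  # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}
        print(sentence, scores["compound"])

analyze_clip("live_chunk.wav")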
4
0
76,943,502
2023-8-21
https://stackoverflow.com/questions/76943502/reading-a-komoot-xml-file-gpx-with-pandas
I want to read a xml file generated by komoot into a DataFrame. Here is the structure of the xml file: <?xml version='1.0' encoding='UTF-8'?> <gpx version="1.1" creator="https://www.komoot.de" xmlns="http://www.topografix.com/GPX/1/1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.topografix.com/GPX/1/1 http://www.topografix.com/GPX/1/1/gpx.xsd"> <metadata> <name>Title</name> <author> <link href="https://www.komoot.de"> <text>komoot</text> <type>text/html</type> </link> </author> </metadata> <trk> <name>Title</name> <trkseg> <trkpt lat="60.126749" lon="4.250254"> <ele>455.735013</ele> <time>2023-08-20T17:42:34.674Z</time> </trkpt> <trkpt lat="60.126580" lon="4.250247"> <ele>455.735013</ele> <time>2023-08-20T17:42:36.695Z</time> </trkpt> <trkpt lat="60.126484" lon="4.250240"> <ele>455.735013</ele> <time>2023-08-20T17:44:15.112Z</time> </trkpt> </trkseg> </trk> </gpx> I tried this code: pd.read_xml('testfile.gpx',xpath='./gpx/trk/trkseg') But somehow it seems there are problems with my xpath. Namely, I get this ValueError: ValueError: xpath does not return any nodes. Be sure row level nodes are in xpath. If document uses namespaces denoted with xmlns, be sure to define namespaces and use them in xpath. I tried a lot but no xpath I chose worked out.
Following the ValueError guidelines, you need to pass a namespace to read_xml : df = ( pd.read_xml( "testfile.gpx", xpath=".//doc:trkseg/doc:trkpt", namespaces={"doc": "http://www.topografix.com/GPX/1/1"} ) ) Output : print(df) lat lon ele time 0 60.126749 4.250254 455.735013 2023-08-20T17:42:34.674Z 1 60.126580 4.250247 455.735013 2023-08-20T17:42:36.695Z 2 60.126484 4.250240 455.735013 2023-08-20T17:44:15.112Z
3
3
76,932,969
2023-8-18
https://stackoverflow.com/questions/76932969/customizing-p-value-thresholds-for-star-text-format-in-statannotations
The statannotations package provides visualization annotation on the level of statistical significance for pairs of data in plots (in seaborn boxplot or strip plot, for example). These annotation can be in "star" text format, where one or more stars appears on top of the bar between pairs of data: . Is there any way to customize the thresholds for stars? I want 0.0001 to be the threshold for the first significance threshold instead of 0.05, and 0.00001 for two stars **, and 0.000001 for three stars ***. The example figure was generated from example codes from statsannotations' github page: from statannotations.Annotator import Annotator import matplotlib.pyplot as plt import seaborn as sns import pandas as pd df = sns.load_dataset("tips") x = "day" y = "total_bill" order = ['Sun', 'Thur', 'Fri', 'Sat'] ax = sns.boxplot(data=df, x=x, y=y, order=order) annot = Annotator(ax, [("Thur", "Fri"), ("Thur", "Sat"), ("Fri", "Sun")], data=df, x=x, y=y, order=order) annot.configure(test='Mann-Whitney', text_format='star', loc='outside', verbose=2) annot.apply_test() ax, test_results = annot.annotate() plt.savefig('example_non-hue_outside.png', dpi=300, bbox_inches='tight') With verbose set to 2, this would also tell us the thresholds used for determining how many stars appear above the bars: p-value annotation legend: ns: p <= 1.00e+00 *: 1.00e-02 < p <= 5.00e-02 **: 1.00e-03 < p <= 1.00e-02 ***: 1.00e-04 < p <= 1.00e-03 ****: p <= 1.00e-04 I want to feed something like a dictionary of p-value threshold: number of stars to Annotator, but I don't know to what parameter should I feed to.
In their repository, specifically inside file [Annotator.py][1]:,we have self._pvalue_format = PValueFormat(). That implies we can change the same. The PValueFormat() class, which can be found here, has the following configurable parameters: CONFIGURABLE_PARAMETERS = [ 'correction_format', 'fontsize', 'pvalue_format_string', 'simple_format_string', 'text_format', 'pvalue_thresholds', 'show_test_name' ] For completeness, here is the modified version of your code and the new result with two lines showing the before and after values for the pvalues. Also, the image changes accordingly. # ! pip install statannotations from smartprint import smartprint as sprint from statannotations.Annotator import Annotator import matplotlib.pyplot as plt import seaborn as sns import pandas as pd df = sns.load_dataset("tips") x = "day" y = "total_bill" order = ['Sun', 'Thur', 'Fri', 'Sat'] ax = sns.boxplot(data=df, x=x, y=y, order=order) annot = Annotator(ax, [("Thur", "Fri"), ("Thur", "Sat"), ("Fri", "Sun")], data=df, x=x, y=y, order=order) print ("Before hardcoding pvalue thresholds ") sprint (annot.get_configuration()["pvalue_format"]) annot.configure(test='Mann-Whitney', text_format='star', loc='outside', verbose=2) annot._pvalue_format.pvalue_thresholds = [[0.01, '****'], [0.03, '***'], [0.2, '**'], [0.6, '*'], [1, 'ns']] annot.apply_test() ax, test_results = annot.annotate() plt.savefig('example_non-hue_outside.png', dpi=300, bbox_inches='tight') print ("After hardcoding pvalue thresholds ") sprint (annot.get_configuration()["pvalue_format"]) Output: Before hardcoding pvalue thresholds Dict: annot.get_configuration()["pvalue_format"] Key: Value {'correction_format': '{star} ({suffix})', 'fontsize': 'medium', 'pvalue_format_string': '{:.3e}', 'pvalue_thresholds': [[0.0001, '****'], [0.001, '***'], [0.01, '**'], [0.05, '*'], [1, 'ns']], 'show_test_name': True, 'simple_format_string': '{:.2f}', 'text_format': 'star'} p-value annotation legend: ns: p <= 1.00e+00 *: 2.00e-01 < p <= 6.00e-01 **: 3.00e-02 < p <= 2.00e-01 ***: 1.00e-02 < p <= 3.00e-02 ****: p <= 1.00e-02 Thur vs. Fri: Mann-Whitney-Wilcoxon test two-sided, P_val:6.477e-01 U_stat=6.305e+02 Thur vs. Sat: Mann-Whitney-Wilcoxon test two-sided, P_val:4.690e-02 U_stat=2.180e+03 Sun vs. Fri: Mann-Whitney-Wilcoxon test two-sided, P_val:2.680e-02 U_stat=9.605e+02 After hardcoding pvalue thresholds Dict: annot.get_configuration()["pvalue_format"] Key: Value {'correction_format': '{star} ({suffix})', 'fontsize': 'medium', 'pvalue_format_string': '{:.3e}', 'pvalue_thresholds': [[0.01, '****'], [0.03, '***'], [0.2, '**'], [0.6, '*'], [1, 'ns']], 'show_test_name': True, 'simple_format_string': '{:.2f}', 'text_format': 'star'} Image: Edit: Based on user: Bonlenfum's comment, changing the thresholds can also be achieved by simply appending the key-value when calling .configure, as shown below: annot.configure(test='Mann-Whitney', text_format='star', loc='outside',\ verbose=2, pvalue_thresholds=[[0.01, '****'], \ [0.03, '***'], [0.2, '**'], [0.6, '*'], [1, 'ns']])
3
3
76,941,993
2023-8-21
https://stackoverflow.com/questions/76941993/why-is-list-pop0-not-an-o1-operation-in-python
l = [1,2,3,4] popping the last element would be an O(1) operation since: It returns the last element Changes a few fixed attributes like len of the list Why can't we do the same with pop(0)? Return the first element Change the pointer of the first element (index 0) to that of the second element (index 1) and change a few fixed attributes?
Lists could have been implemented to do what you suggest, but it would add complexity and overhead, for an operation better handled by collections.deque. All lists everywhere would need extra metadata to track how much empty space is at the front (important for handling the resize policy, and calling free on the right pointer when the list dies or gets resized), and the logic for when and how to resize would also become more complicated. The tradeoff was not deemed worthwhile.
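A minimal sketch of the deque alternative mentioned above, for the case where O(1) pops from the front are needed:
from collections import deque

d = deque([1, 2, 3, 4])
first = d.popleft()    # O(1): removes and returns the leftmost element (1)
last = d.pop()         # O(1): removes and returns the rightmost element (4)
print(first, last, d)  # 1 4 deque([2, 3])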
4
5
76,941,870
2023-8-21
https://stackoverflow.com/questions/76941870/valueerror-one-input-key-expected-got-text-one-text-two-in-langchain-wit
I'm trying to run a chain in LangChain with memory and multiple inputs. The closest error I could find was was posted here, but in that one, they are passing only one input. Here is the setup: from langchain.llms import OpenAI from langchain.chains import LLMChain from langchain.prompts import PromptTemplate from langchain.memory import ConversationBufferMemory llm = OpenAI( model="text-davinci-003", openai_api_key=environment_values["OPEN_AI_KEY"], # Used dotenv to store API key temperature=0.9, client="", ) memory = ConversationBufferMemory(memory_key="chat_history") prompt = PromptTemplate( input_variables=[ "text_one", "text_two", "chat_history" ], template=( """You are an AI talking to a huamn. Here is the chat history so far: {chat_history} Here is some more text: {text_one} and here is a even more text: {text_two} """ ) ) chain = LLMChain( llm=llm, prompt=prompt, memory=memory, verbose=False ) When I run output = chain.predict( text_one="Hello", text_two="World" ) I get ValueError: One input key expected got ['text_one', 'text_two'] I've looked at this stackoverflow post, which suggests to try: output = chain( inputs={ "text_one" : "Hello", "text_two" : "World" } ) which gives the exact same error. In the spirit of trying different things, I've also tried: output = chain.predict( # Also tried .run() here inputs={ "text_one" : "Hello", "text_two" : "World" } ) which gives Missing some input keys: {'text_one', 'text_two'}. I've also looked at this issue on the langchain GitHub, which suggests to do pass the llm into memory, i.e. # Everything the same except... memory = ConversationBufferMemory(llm=llm, memory_key="chat_history") # Note the llm here and I still get the same error. If someone knows a way around this error, please let me know. Thank-you.
While drafting this question, I came across the answer. When defining the memory variable, pass an input_key="human_input" and make sure each prompt has a human_input defined. memory=ConversationBufferMemory( memory_key="chat_history", input_key="human_input" ) Then, in each prompt, make sure there is a human_input input. prompt = PromptTemplate( input_variables=[ "text_one", "text_two", "chat_history", "human_input", # Even if it's blank ], template=( """You are an AI talking to a huamn. Here is the chat history so far: {chat_history} Here is some more text: {text_one} and here is a even more text: {text_two} {human_input} """ ) ) Then, build your chain: chain = LLMChain( llm=llm, prompt=prompt, memory=memory, # Contains the input_key verbose=False ) And then run it as: output = chain.predict( human_input="", # or whatever you want text_one="Hello", text_two="World" ) print(output) # On my machine, it outputs: '\nAI: Hi there! How can I help you?'
5
13
76,941,553
2023-8-20
https://stackoverflow.com/questions/76941553/pep-622-can-match-statement-be-used-as-an-expression
PEP 622 introduced match statement as an alternative to if-elif-else. However, one thing I can't find in the proposal or in any of the material online is whether the match statement can be used as an expression and not just as a statement. A couple of examples to make it clear: Example 1: def make_point_2d(pt): match pt: case (x, y): return Point2d(x, y) case _: raise TypeError("not a point we support") Example 2: match response.status: case 200: do_something(response.data) case 301 | 302: retry(response.location) In the first example, the function returns from inside a case clause, and in the second example, nothing is returned. But I want to be able to do something like the following hypothetical example: spouse = match name: case "John": "Jane" case "David": "Alice" print(spouse) But it doesn't compile.
Not in Python. In Rust and Haskell, matches are expressions composed of expressions: let spouse = match name { // expr => expr, "John" => "Jane", // expr => {stmt; stmt; expr}, "David" => {let s = "Alice"; println!("Matched David"); s}, _ => panic!("Unknown name"), }; do spouse <- case name of "John" -> return "Jane" "David" -> do let s = "Alice" putStrLn "Matched David" return s ...so the Python match statement could, in principle, have been designed to work as an expression. Presumably, the reasons it was avoided for Python are syntactical: Python has no syntax for a block of statements that acts as an expression (e.g. {stmt; stmt; expr}). spouse = match name: case "John": "Jane" case "David": # stmt; stmt; expr s = "Alice" print("Matched David") s Indentation to indicate a block being treated as an expression may look unnatural to some people. (See also: match grammar.) That said, it is possible your proposed syntax could be accepted in the future.
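Until such syntax exists, a common workaround (a sketch, not part of the proposals quoted above) is to wrap the match statement in a small function so its result can be used like an expression:
def spouse_of(name):
    match name:                      # requires Python 3.10+
        case "John":
            return "Jane"
        case "David":
            return "Alice"
        case _:
            raise ValueError(f"unknown name: {name}")

spouse = spouse_of("John")
print(spouse)  # Jane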
12
6
76,936,532
2023-8-19
https://stackoverflow.com/questions/76936532/how-to-manage-legend-tracegroupgap-for-different-row-heights-in-subplots-in-plot
i found this example code for subplot with legend at each subplot. i changed it by adding row_heights and now the legend do not fit to the subplots. import pandas as pd import plotly.express as px df = px.data.gapminder().query("continent=='Americas'") from plotly.subplots import make_subplots import plotly.graph_objects as go fig = make_subplots(rows=3, cols=1, row_heights=[2,1,0.75]) fig.append_trace(go.Scatter( x=df.query("country == 'Canada'")['year'], y=df.query("country == 'Canada'")['lifeExp'], name = 'Canada', legendgroup = '1' ), row=1, col=1) fig.append_trace(go.Scatter( x=df.query("country == 'United States'")['year'], y=df.query("country == 'United States'")['lifeExp'], name = 'United States', legendgroup = '1' ), row=1, col=1) fig.append_trace(go.Scatter( x=df.query("country == 'Mexico'")['year'], y=df.query("country == 'Mexico'")['lifeExp'], name = 'Mexico', legendgroup = '2' ), row=2, col=1) fig.append_trace(go.Scatter( x=df.query("country == 'Colombia'")['year'], y=df.query("country == 'Colombia'")['lifeExp'], name = 'Colombia', legendgroup = '2' ), row=2, col=1) fig.append_trace(go.Scatter( x=df.query("country == 'Brazil'")['year'], y=df.query("country == 'Brazil'")['lifeExp'], name = 'Brazil', legendgroup = '2' ), row=2, col=1) fig.append_trace(go.Scatter( x=df.query("country == 'Argentina'")['year'], y=df.query("country == 'Argentina'")['lifeExp'], name = 'Argentina', legendgroup = '3' ), row=3, col=1) fig.append_trace(go.Scatter( x=df.query("country == 'Chile'")['year'], y=df.query("country == 'Chile'")['lifeExp'], name = 'Chile', legendgroup = '3' ), row=3, col=1) fig.update_layout( height=800, width=800, title_text="Life Expectancy in the Americas", xaxis3_title = 'Year', yaxis1_title = 'Age', yaxis2_title = 'Age', yaxis3_title = 'Age', legend_tracegroupgap = 100, yaxis1_range=[50, 90], yaxis2_range=[50, 90], yaxis3_range=[50, 90] ) fig.show() now i am looking for a solution to manage the legend_tracegroupgap for different row_heights. i expect the legends at the top beside the subplots.
As of Plotly v5.15, you can add multiple legends, and position them relative to the height of the plot. In your example, you can add the argument legend='legend', legend='legend2', or legend='legend3' to each go.Scatter to match them with the legendgroup, then add the arguments legend = {"y": 1.0}, legend2 = {"y": 0.42},legend3 = {"y": 0.08} to fig.update_layout. Below is the full code and resulting figure: import pandas as pd import plotly.express as px df = px.data.gapminder().query("continent=='Americas'") from plotly.subplots import make_subplots import plotly.graph_objects as go fig = make_subplots(rows=3, cols=1, row_heights=[2,1,0.75]) fig.append_trace(go.Scatter( x=df.query("country == 'Canada'")['year'], y=df.query("country == 'Canada'")['lifeExp'], name = 'Canada', legend='legend', legendgroup = '1' ), row=1, col=1) fig.append_trace(go.Scatter( x=df.query("country == 'United States'")['year'], y=df.query("country == 'United States'")['lifeExp'], name = 'United States', legend='legend', legendgroup = '1' ), row=1, col=1) fig.append_trace(go.Scatter( x=df.query("country == 'Mexico'")['year'], y=df.query("country == 'Mexico'")['lifeExp'], name = 'Mexico', legend='legend2', legendgroup = '2' ), row=2, col=1) fig.append_trace(go.Scatter( x=df.query("country == 'Colombia'")['year'], y=df.query("country == 'Colombia'")['lifeExp'], name = 'Colombia', legend='legend2', legendgroup = '2' ), row=2, col=1) fig.append_trace(go.Scatter( x=df.query("country == 'Brazil'")['year'], y=df.query("country == 'Brazil'")['lifeExp'], name = 'Brazil', legend='legend2', legendgroup = '2' ), row=2, col=1) fig.append_trace(go.Scatter( x=df.query("country == 'Argentina'")['year'], y=df.query("country == 'Argentina'")['lifeExp'], name = 'Argentina', legend='legend3', legendgroup = '3' ), row=3, col=1) fig.append_trace(go.Scatter( x=df.query("country == 'Chile'")['year'], y=df.query("country == 'Chile'")['lifeExp'], name = 'Chile', legend='legend3', legendgroup = '3', ), row=3, col=1) fig.update_layout( height=800, width=800, title_text="Life Expectancy in the Americas", xaxis3_title = 'Year', yaxis1_title = 'Age', yaxis2_title = 'Age', yaxis3_title = 'Age', legend = {"y": 1.0}, legend2 = {"y": 0.42}, legend3 = {"y": 0.08}, yaxis1_range=[50, 90], yaxis2_range=[50, 90], yaxis3_range=[50, 90] ) fig.show()
2
3
76,936,567
2023-8-19
https://stackoverflow.com/questions/76936567/why-are-these-hashes-of-the-same-values-different-between-different-pandas-dataf
When hashing the same email address in two DataFrames, I am returned different hashes. These two dataframes, df1 and df2, each contain a column of email addresses which need to be hashed, so the hashes can be compared when they are inner joined, like this: import pandas as pd ### Boring part to import the data ### # define table 1 as df1 df1 = pd.DataFrame([[2, '[email protected]'], [6, '[email protected]'], [7, '[email protected]'], [8, '[email protected]'], [200, '[email protected]'], [18, '[email protected]'], [19, '[email protected]']]) df1 = df1.set_axis(['ID1', 'email 1'], axis=1) # define table 2 as df2 df2 = pd.DataFrame([[100, '[email protected]'], [6, '[email protected]'], [99, '[email protected]'], [10, '[email protected]'], [115, '[email protected]'], [116, '[email protected]'], [8, '[email protected]'], [200, '[email protected]']]) df2 = df2.set_axis(['ID2', 'email 2'], axis=1) ### End part to import the data ### ### Fun part now... ### # hash the emails in each row of df1? df1['hash 1'] = pd.util.hash_pandas_object(df1['email 1'].astype(str)) # hash the emails in each row of df2? df2['hash 2'] = pd.util.hash_pandas_object(df2['email 2'].astype(str)) # perform an inner join of df1 and df2 about their IDs, ID1 and ID2 respectively df3 = pd.merge(df1, df2, how='inner', left_on='ID1', right_on='ID2') # add an email comparison column df3['same email'] = df3['email 1'] == df3['email 2'] # add a hash comparison column df3['same hash'] = df3['hash 1'] == df3['hash 2'] # print the table... print(df3) The result shows that while the email addresses in row 1 are identical (as far as I can tell) the hashes are not the same: ID1 email 1 hash 1 ID2 email 2 hash 2 same email same hash 0 6 [email protected] 18381560226251184406 6 [email protected] 16113553761483526335 False False 1 8 [email protected] 5780217243550696535 8 [email protected] 6939369575697951555 True False 2 200 [email protected] 13252009090739560311 200 [email protected] 1942861278265138167 False False Why are these hashes of the same email address from different DataFrames different to one another?
According to the documentation, the default mode of operation is including index in hash computation. So when two same emails have different indices, the hash is different. You can try: df1["hash 1"] = pd.util.hash_pandas_object(df1["email 1"].astype(str), index=False) df2["hash 2"] = pd.util.hash_pandas_object(df2["email 2"].astype(str), index=False) Then the result will be: ID1 email 1 hash 1 ID2 email 2 hash 2 0 6 [email protected] 5185970979410096600 6 [email protected] 18338061231746973003 1 8 [email protected] 9881121729072933860 8 [email protected] 9881121729072933860 2 200 [email protected] 742268446511091656 200 [email protected] 775994242592712264 Other method of hash computation is using the built-in hash function: df1["hash 1"] = df1["email 1"].apply(hash) df2["hash 2"] = df2["email 2"].apply(hash)
2
3
76,933,933
2023-8-19
https://stackoverflow.com/questions/76933933/langchain-specific-default-response
using LangChain and OpenAI, how can I have the model return a specific default response? for instance, let's say I have these statement/responses Statement: Hi, I need to update my email address. Answer: Thank you for updating us. Please text it here. Statement: Hi, I have a few questions regarding my case. Can you call me back? Answer: Hi. Yes, one of our case managers will give you a call shortly. if the input is similar to one of the above statements, I would like to have OpenAI respond with the specific answer.
You can handle this with precise set of examples and role of AI assistant. I am setting verbose = 1 in LLMChain so that you can see the observation/execution.. from langchain.prompts import PromptTemplate from langchain.prompts import FewShotPromptTemplate from langchain.chat_models import ChatOpenAI from langchain.chains import LLMChain examples = [ { "query": "What's the weather like?", "answer": "It's raining cats and dogs, better bring an umbrella!" }, { "query": "How old are you?", "answer": "Age is just a number, but I'm timeless." }, { "query":"Could you update my email address", "answer":"Thank you for updating us. Please text it here" }, { "query":"I have a few questions regarding my case. Can you call me back?", "answer":"Yes, one of our case managers will give you a call shortly" } ] example_template = """ User:{query}, AI:{answer} """ example_prompt = PromptTemplate( input_variables=["query", "answer"], template=example_template ) # prefix= """ The following are excerpts from conversations with an AI # assistant. The assistant is known for its humor and wit, providing # entertaining and amusing responses to users' questions. Here are some # examples:""" prefix= """ The following are excerpts from conversations with an AI assistant. The assistant is known for its accurate responses to users' questions. Here are some examples:""" suffix=""" User:{query}, AI: """ few_shot_template = FewShotPromptTemplate( examples=examples, example_prompt=example_prompt, prefix=prefix, suffix=suffix, input_variables=["query"], example_separator="\n\n" ) chat = ChatOpenAI(model_name="gpt-3.5-turbo-0301", temperature=0.0) chain = LLMChain(llm=chat, prompt=few_shot_template, verbose=1) print(chain.run("what's meaning of life ?")) print(chain.run("Could you update my email address ?")) print(chain.run("I have a few questions regarding my case. Can you call me back?"))
4
2
76,932,528
2023-8-18
https://stackoverflow.com/questions/76932528/how-can-i-sort-a-list-of-args-in-sympy
I have an expression, i.e. 6*x**3 + 2*x**(1/2), and want to select the item with the lowest exponent, in this case 2*x**(1/2). Is there a simple way to do that? Working with args gives an unsorted list of items, and working with Poly is not possible here.
Try using as_ordered_terms and as_coeff_exponent with min: >>> from sympy import symbols >>> x = symbols('x') >>> expr = 6*x**3 + 2*x**(1/2) >>> terms = expr.as_ordered_terms() >>> lowest_term = min(terms, key=lambda term: term.as_coeff_exponent(x)[1]) >>> lowest_term 2*x**0.5
2
3
76,917,629
2023-8-16
https://stackoverflow.com/questions/76917629/copy-file-from-ephemeral-docker-container-to-host
I want to copy a file generated by a docker container, and store inside it to my local host. What is a good way of doing that? The docker container is ephemeral (i.e. it runs for a very short time and then stops.) I am working with the below mentioned scripts: Python (script.py) which generates and saves a file titled read.txt. with open('read.txt', 'w') as file: lst = ['Some\n', 'random\n', 'sentence\n'] file.writelines("% s\n" % words for words in lst) I use the below Dockerfile: FROM python:3.9 WORKDIR /app RUN pip install --upgrade pip COPY . /app/ RUN pip install --requirement /app/requirements.txt CMD ["python", "/app/script.py.py"] Below is my folder structure: - local - folder1 - script.py - requirements.txt - Dockerfile - folder2 Till now, I have managed to successfully build a docker container using: docker build --no-cache -t test:v1 . When I run this docker container inside /local/folder1/ using the below command, I get the desired file, i.e. read.txt inside /local/folder1/ docker run -v /local/folder1/:/app/ test:v1 But, when I run docker run -v /local/folder2/:/app/ test:v1 inside /local/folder2/, I do not see read.txt inside /local/folder2/ and I get the below message. python: can't open file '/app/script.py': [Errno 2] No such file or directory I want to be able to get read.txt inside /local/folder2/ when I run the docker container test:v1 inside /local/folder2/. How can I do it? I want to be able to do it without copying the contents of /local/folder1/ inside /local/folder2/. The docker container is ephemeral (i.e. it runs for a very short time and then stops.) Hence answers given in this and this Stackoverflow posts, which focus on docker cp have not worked for me. Time of running of the abovementioned container is not very essential. Even if a workable solution increases the time of running of a container, that is okay.
Since the container is ephemeral, you may want to run the container and keep it running. And then copy the file. Try the below steps: 1. Keep the container running and run the python command. docker run -it --name temp test:v1 bash -c 'python script.py' Here temp is the name of a temporary container. 2. Copy file from temp container to local host. docker cp temp:/app/read.txt /local/folder2/. 3. Remove the temporary container. docker rm temp
4
2
76,931,578
2023-8-18
https://stackoverflow.com/questions/76931578/how-can-i-log-into-file-and-on-console-at-the-same-time-with-tee-command
I'm executing a Python file with PowerShell and I would like to get the log in a file and on the console as well, so I tried to use the tee command. cd folder .\program.py | tee log.txt But an error was thrown back: Cannot run a document in the middle of a pipeline The Python file just contains 1 line of code: print "test" How can I use this command in the right way?
You have to call python to execute the script. Try executing this in the folder where program.py and log.txt are: python program.py | tee log.txt The explanation is that ./program.py without the python command tries to run or open the file, but you are not executing it. Then, you try to pass the output to log.txt with the pipe operator and tee. So it is the expected behaviour. Additionally, you are using Python 2 syntax, which has been deprecated for ages. Try using print("test") inside the script.
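Note that in PowerShell tee is an alias for Tee-Object, so the explicit form of the same command would be:
python program.py | Tee-Object -FilePath log.txt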
2
3
76,930,775
2023-8-18
https://stackoverflow.com/questions/76930775/how-to-concat-two-rows-by-index-in-a-dataframe-except-for-nan-values
I have a dataset that looks like (with more columns and rows): id type value 0 104 0 7999 1 105 1 196193579 2 108 0 245744 3 NaN 1 NaN Some rows have NaN values, and I already have the indexes for these rows. Now I would like to concat these rows with their previous row, except for NaN values. If I say indexes=[3], then the new dataframe should be: id type value 0 104 0 7999 1 105 1 196193579 2 108 01 245744 How can I do this? NOTE: First row never will be in the list of indexes I have. The solution must be given my list of indexes. I also know the names of the columns where NaN values are, if necessary.
If you have an external list of indices to merge with the rows above you can use: indexes=[3] out = (df .astype({'type': str}) .groupby((~df.index.to_series().isin(indexes)).cumsum()) .agg({'id': 'first', 'type': ''.join, 'value': 'first'}) ) If you have many columns build the aggregation dictionary programmatically: indexes=[3] d = {k: ''.join if t == object else 'first' for k, t in df.dtypes.items()} out = (df .astype({'type': str}) .groupby((~df.index.to_series().isin(indexes)).cumsum()) .agg(d) ) Output: id type value 1 104.0 0 7999.0 2 105.0 1 196193579.0 3 108.0 01 245744.0
2
2
76,924,873
2023-8-17
https://stackoverflow.com/questions/76924873/can-i-add-custom-data-to-a-pyproject-toml-file
I am using the toml package to read my pyproject.toml file. I want to add custom data which, in this case, I will read in my docs/conf.py file. When I try to add a custom section I get errors and warnings from the Even Better TOML extension in VS Code stating that my custom data is not allowed. Example TOML section in pyproject.toml: [custom.metadata] docs_name = "GUI Automation for windows" docs_author = "myname" docs_copyright = "2023" docs_url= "https://someurl.readthedocs.io/en/latest/" So, my question is: Is there a valid way of adding custom data to a pyproject.toml file?
Use a [tool.*] table. Quoting PEP 518: The [tool] table is where any tool related to your Python project, not just build tools, can have users specify configuration data as long as they use a sub-table within [tool], e.g. the flit tool would store its configuration in [tool.flit]. Any tools that interpret pyproject.toml files will expect the [tool] sub-tables to contain arbitrary, tool-specific metadata. You should not have any issues populating a custom subtable with whatever project metadata you need. In your case, something like this should work warning-free: [tool.my_distribution_name] docs_name = "GUI Automation for windows" docs_author = "myname" docs_copyright = "2023" docs_url= "https://someurl.readthedocs.io/en/latest/" Creating your own custom table is not permitted, since PEP 518 reserves all top-level tables in pyproject.toml: Tables not specified in this PEP are reserved for future use by other PEPs.
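As a sketch of how such a table could then be consumed from docs/conf.py (the table name my_distribution_name, the file layout, and the use of the standard-library tomllib from Python 3.11+ are assumptions; the toml package mentioned in the question works similarly via toml.load):
import tomllib                      # Python 3.11+; older versions can use the third-party toml package
from pathlib import Path

# assumes docs/conf.py sits one directory below the project root that holds pyproject.toml
pyproject_path = Path(__file__).resolve().parents[1] / "pyproject.toml"
with open(pyproject_path, "rb") as f:   # tomllib requires a binary file handle
    pyproject = tomllib.load(f)

meta = pyproject["tool"]["my_distribution_name"]
project = meta["docs_name"]
author = meta["docs_author"]
copyright = meta["docs_copyright"]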
7
9
76,928,604
2023-8-18
https://stackoverflow.com/questions/76928604/how-to-make-this-clock-app-always-on-top
I have this clock app written in python. It runs successfully on Windows. Here is the source code. # Source: https://www.geeksforgeeks.org/python-create-a-digital-clock-using-tkinter/ # importing whole module from tkinter import * from tkinter.ttk import * # importing strftime function to # retrieve system's time from time import strftime # creating tkinter window root = Tk() root.title('Clock') # This function is used to # display time on the label def time(): string = strftime('%H:%M:%S %p') lbl.config(text=string) lbl.after(1000, time) # Styling the label widget so that clock # will look more attractive lbl = Label(root, font=('calibri', 40, 'bold'), background='purple', foreground='white') # Placing clock at the centre # of the tkinter window lbl.pack(anchor='center') time() mainloop() This is how the clock app looks. It works fine. However, I want to make the app always appear on top in Windows. How can I modify the code to make the app always appear on top? I am using Windows 11.
By adding the line root.attributes('-topmost', True), you set the window to be always on top of other windows. Update your code as follow: # importing whole module from tkinter import * from tkinter.ttk import * # importing strftime function to # retrieve system's time from time import strftime # creating tkinter window root = Tk() root.title('Clock') root.attributes('-topmost', True) # Set the window to always appear on top # This function is used to # display time on the label def time(): string = strftime('%H:%M:%S %p') lbl.config(text=string) lbl.after(1000, time) # Styling the label widget so that clock # will look more attractive lbl = Label(root, font=('calibri', 40, 'bold'), background='purple', foreground='white') # Placing clock at the centre # of the tkinter window lbl.pack(anchor='center') time() mainloop()
2
2
76,927,292
2023-8-18
https://stackoverflow.com/questions/76927292/different-numpy-overflow-behavior-on-mac-vs-linux
I'm encountering different overflow behavior on Mac vs Linux with the same version of numpy. MWE: import numpy as np arr = np.arange(0, 2 * 4e9, 1e9, dtype=float) print(arr.astype(np.uint32)) print(np.__version__) Mac (Python 3.9.13): array([ 0, 1000000000, 2000000000, 3000000000, 4000000000, 705032704, 1705032704, 2705032704], dtype=uint32) '1.22.4' Linux (Python 3.9.7): array([ 0, 1000000000, 2000000000, 3000000000, 4000000000, 0, 0, 0], dtype=uint32) '1.22.4' I would prefer the "Mac" behavior of the expected rollover (rather than forcing overflowed values to 0), so I would like to know how to fix this for the Linux version.
I believe that this is due to the underlying C implementation of numpy, which probably triggers undefined behaviour, which is handled differently by the compiler used for the linux and mac distributions of numpy. Looking at cast float to unsigned int in C with gcc which deals with a similar topic, we can also see link to the C standard which states on page 51 6.3.1.4 Real floating and integer 1 When a finite value of real floating type is converted to an integer type other than _Bool, the fractional part is discarded (i.e., the value is truncated toward zero). If the value of the integral part cannot be represented by the integer type, the behavior is undefined.61) One possibility would be I guess that you try out different compilers in compiling your own numpy version and check the behaviour. Alternatively, since you want the roll-over behaviour, you could try to convert first to an unsigned integer type of larger width and then to the smaller width, since unsigned integer conversions always do a roll-over 6.3.1.3 Signed and unsigned integers 1 When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged. 2 Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.60) import numpy as np arr = np.arange(0, 2 * 4e9, 1e9, dtype=float) print(arr.astype(np.uint64).astype(np.uint32)) print(np.__version__)
3
3
76,926,839
2023-8-18
https://stackoverflow.com/questions/76926839/polars-casting-a-column-to-decimal
I am trying to read a flat file, assign column names to the dataframe and cast few columns as per my requirements. However while casting a column to Decimal gives me error in Polars. I implemented the same successfully in Spark, but need help if anyone can guide me how to do the same in Polars. sample data data = b""" D|120277|Patricia|167.2|26/12/1982 D|757773|Charles|167.66|08/04/2019 D|248498|Katrina|168.68|20/11/2016 D|325561|Christina|170.86|05/10/1998 D|697464|Joshua|171.41|07/09/1970 D|244750|Zachary|169.43|08/12/2014 """.strip() Polars Script import polars as pl cols_dict = {'column_1': 'rtype', 'column_2': 'EMP_ID', 'column_3': 'EMP_NAME', 'column_4': 'SALARY', 'column_5': 'DOB'} df = pl.read_csv(data, separator='|', has_header=False) df = df.select(pl.all().name.map(lambda col_name: cols_dict.get(col_name))) df = df.with_columns( pl.col('EMP_ID').cast(pl.Decimal(scale=6, precision=0))) It throws me exception: InvalidOperationError: conversion from i64 to decimal[1,6] failed in column 'EMP_ID' for 5 out of 5 values: [120277, 757773, … 697464] Spark Script: rdd = sparkContext.textFile(core_data_file).filter(lambda x: x\[0\] == "D").map( lambda x: x.split('|')) sparkDf= rdd.toDF(schema=\["rtype"\] + list(cols_dict .values())) sparkDf= sparkDf.withColumn(col_name, coalesce( col(col_name).cast(DecimalType(data_length, data_scale)), lit(0))) sarkDf.show() +-----+------+---------+------+-------------------+ |rtype|EMP_ID|EMP_NAME |SALARY|DOB | +-----+------+---------+------+-------------------+ |D |120277|Patricia |167.20|1982-12-26 00:00:00| |D |757773|Charles |167.66|2019-04-08 00:00:00| |D |248498|Katrina |168.68|2016-11-20 00:00:00| |D |325561|Christina|170.86|1998-10-05 00:00:00| |D |697464|Joshua |171.41|1970-09-07 00:00:00| |D |244750|Zachary |169.43|2014-12-08 00:00:00| +-----+------+---------+------+-------------------+
I believe the scale and precision parameters are invalid. scale specifies the number of digits to the right of the decimal point, while precision specifies the total number of digits in the number. Setting precision = 0 is invalid. If you want 6 decimals, then rewrite precision as: pl.col('EMP_ID').cast(pl.Decimal(scale=6, precision=None))) I noticed that pl.Decimal is an "experimental work-in-progress feature and may not work as expected" Polars docs. An alternative is to use pl.Float64 instead: df = df.with_columns(pl.col('EMP_ID').cast(pl.Float64))
3
0
76,924,742
2023-8-17
https://stackoverflow.com/questions/76924742/how-to-sort-tuples-with-floats-using-isclose-comparisons-rather-than-absolute
I want to sort a list of pairs primarily by a float, and secondarily by an integer. The problem is with comparing floats for equality--in many cases we get an arbitrary comparison instead of resorting to the secondary order. Example my_list = [ (0, 1.2), (2, 0.07076863960397783), (1, 0.07076863960397785), (4, 0.02) ] I want to sort primarily by the float values, and secondarily by the integers. Running sorted(my_list, key=lambda x: (x[1], x[0])) returns this: [(4, 0.02), (2, 0.07076863960397783), (1, 0.07076863960397785), (0, 1.2)] But since the two floats are so close that they should really be considered equal, the desired order is this: [(4, 0.02), (1, 0.07076863960397785), (2, 0.07076863960397783), (0, 1.2)] How do I incorporate math.isclose or np.isclose into the sorting? My attempt My current approach is to wrap the floats into a class that performs the comparisons using the numpy's isclose. class FloatComparisonWrapper: def __init__(self, num): self.value = num def __lt__(self, other): return self.__ne__(other) and self.value < other.value def __gt__(self, other): return self.__ne__(other) and self.value > other.value def __eq__(self, other): return np.isclose(self.value, other.value) def __ne__(self, other): return not self.__eq__(other) def __le__(self, other): return self.__eq__(other) or self.value <= other.value def __ge__(self, other): return self.__eq__(other) or self.value >= other.value And then sorted(my_list, key=lambda x: (FloatComparisonWrapper(x[1]), x[0])) gives the correct ordering. Is this a good solution? Is it pythonic? Is it the best solution? Is there a quicker way to do it?
Is this a good solution? No. OP's approach is flawed for sorting purposes. since the two floats are so close that they should really be considered equal Consider if all values are nearly in the vicinity of the same floating point value. Some pairs are "equal", others not. OP's goal fails as the compare does not form a Total order. From a C point-of-view that may apply to Python, as I see the issue as language agnostic. This will not work, as compare(a, b), compare(b,c), compare(a,c) must form a consistent order. When the same objects (consisting of size bytes, irrespective of their current positions in the array) are passed more than once to the comparison function, the results shall be consistent with one another. That is, for qsort they shall define a total ordering on the array, ... C1723 § 7.24.5 4. Consider (2, float_before(x)), // Lowest value that "is close" to x. (1, x), (0, float_after(x)), // Greatest value that "is close" to x. Even though float_before(x) and x are consecutive "equal" values, as proposed by OP (as well as x, float_after(x)), the sorting will arrive at different (and possibly inconclusive: e.g. infinite loop) results, depending on the order the compares are applied, since compare(float_before(x), float_after(x)) does not compare as equal. A secondary compare should not be applied unless the first-order values are equal. Nearly equal is insufficient, as a ≈ b and b ≈ c may be true, but a ≈ c is not. With OP's nearly equal, a > b and b > c because their primary compares are equal but the secondary makes them greater-than ordered, which implies a > c. Yet since a is not close to c, the primary ordering is a < c, which contradicts a > c. Is there a quicker way to do it? I recommend dropping the isclose() "two floats are so close that they should really be considered equal" approach for the primary compare. Simply do a value compare. Get the functionality correct before attempting to make it faster. // Pseudo code if (primary_compare_as_equal(a,b)) return secondary_compare(a,b) return primary_compare(a,b)
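A Python rendering of that pseudo code (a sketch using functools.cmp_to_key; with an exact value compare as the primary key it is equivalent to sorted(my_list, key=lambda x: (x[1], x[0]))):
from functools import cmp_to_key

def compare(a, b):
    # exact value compare on the float plays the role of "primary_compare_as_equal"
    if a[1] == b[1]:
        return (a[0] > b[0]) - (a[0] < b[0])   # secondary compare on the int
    return (a[1] > b[1]) - (a[1] < b[1])       # primary compare on the float

my_list = [(0, 1.2), (2, 0.07076863960397783), (1, 0.07076863960397785), (4, 0.02)]
print(sorted(my_list, key=cmp_to_key(compare)))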
3
2
76,926,251
2023-8-18
https://stackoverflow.com/questions/76926251/can-a-pytest-test-change-the-value-of-a-session-fixture-and-affect-subsequent-te
Let's say that a pytest session fixture returns a dictionary and this dictionary is then manipulated within a test. Would this affect subsequent tests which use the same fixture?
Yes. And that is trivial to verify. Exactly one of these tests will fail: import pytest @pytest.fixture(scope="session") def d(): return {} def test_one(d): assert not d d["k"] = "v" def test_two(d): assert not d d["k"] = "v" If the fixture scope is changed to "function" (the default scope) then both tests will pass, since the dict will be created anew each time.
3
3
76,925,094
2023-8-17
https://stackoverflow.com/questions/76925094/selenium-common-exceptions-sessionnotcreatedexception-message-session-not-crea
I am getting the following error: selenium.common.exceptions.SessionNotCreatedException: Message: session not created: This version of ChromeDriver only supports Chrome version 114 Current browser version is 116.0.5845.97 with binary path C:\Program Files\Google\Chrome\Application\chrome.exe Can anyone help me? I've seen some suggestions in other posts but none worked here. I am aware of the Selenium v4.6 update My code: from selenium import webdriver import time from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By class ChromeAuto: def __init__(self): self.options = webdriver.ChromeOptions() self.options.add_experimental_option('excludeSwitches', ['enable-logging']) self.service = Service() self.chrome = webdriver.Chrome(service=self.service, options=self.options) self.chrome.implicitly_wait(20)
For Chrome 116+ you'll need selenium 4.11.2 at a minimum for the Selenium Manager to download chromedriver 116+ from https://googlechromelabs.github.io/chrome-for-testing/ Then you'll be able to run basic Selenium scripts like this: from selenium import webdriver from selenium.webdriver.chrome.service import Service service = Service() options = webdriver.ChromeOptions() options.add_experimental_option('excludeSwitches', ['enable-logging']) driver = webdriver.Chrome(service=service, options=options) # Add your code here driver.quit()
2
5
76,924,449
2023-8-17
https://stackoverflow.com/questions/76924449/how-can-inheritance-change-the-class-signature
I'm finding that inheriting from a base class can change the derived class signature according to inspect.signature, and I would like to understand how that happens. Specifically, the base class in question is tensorflow.keras.layers.Layer: import sys import inspect import tensorflow as tf class Class1(tf.keras.layers.Layer): def __init__(self, my_arg: int): pass class Class2: def __init__(self, my_arg: int): pass print("Python version: ", sys.version) print("Tensorflow version: ", tf.__version__) print("Class1 signature: ", inspect.signature(Class1)) print("Class2 signature: ", inspect.signature(Class2)) Outputs Python version: 3.8.10 (default, Mar 23 2023, 13:10:07) [GCC 9.3.0] Tensorflow version: 2.12.0 Class1 signature: (*args, **kwargs) Class2 signature: (my_arg: int) I tried running the code above and I expected it to print the same signature for both classes.
This is a bug behavior that is unique to earlier versions of Python, including 3.8 and early patch releases of Python 3.9/3.10 In 3.8 and 3.9.4: Class1 signature: (*args, **kwargs) Class2 signature: (my_arg: int) In newer versions of Python, like 3.9.17 and latest versions of 3.10 and 3.11, the result is how you would expect: Class1 signature: (my_arg: int) Class2 signature: (my_arg: int) I'm not 100% sure what exactly changed the behavior between these python versions, but my best guess is that this is related to either this issue or this issue, which were fixed in Python3.11 and backported to some earlier Python versions. However, Python3.8 did not receive any of these backport fixes.
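On the affected versions, one possible workaround (an illustrative sketch, assuming nothing has rewrapped the method) is to inspect the __init__ function directly rather than the class, since Class1.__init__ is a plain function whose signature does not depend on the base class:
import inspect

print(inspect.signature(Class1.__init__))  # expected: (self, my_arg: int)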
3
1
76,918,044
2023-8-17
https://stackoverflow.com/questions/76918044/cannot-import-mediapipe-typeerror-numpy-dtypemeta-object-is-not-subscripta
The install is successful. I receive this error when trying to import. TypeError: 'numpy._DTypeMeta' object is not subscriptable I have tried higher and lower versions of numpy (1.22.0,1.23.0,1.24.0,1.25.0,1.25.2). I installed mediapipe via pypi as well as a downloaded whl ( mediapipe-0.10.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl) from pypi Versions numpy 1.21.5 mediapipe 0.10.3 Python 3.10.6 These questions are similar but not the answer. Importing xarray raises not subscriptable issue Cannot import mediapipe in Jupyter notebook How to prevent error message when importing import cv2?
Updating numpy to 1.23 works. I was not restarting the kernel after installing versions of numpy. With notebook-scoped libraries, use dbutils.library.restartPython(). If you are using cluster-scoped libraries, then be sure to restart your cluster. There's an issue about this that's documented on the OpenCV repo here: https://github.com/opencv/opencv/issues/23822 In short, it imported successfully with most numpy>=1.22.0 versions. Just be careful, if you use some of the most recent library versions, like numpy>=1.25.0, you start running into other compatibility issues.
4
5
76,923,353
2023-8-17
https://stackoverflow.com/questions/76923353/how-to-add-prefix-suffix-on-a-repeatable-dictionary-key-in-python
Could you please suggest whether there is any way to keep all the repeatable (duplicate) keys by adding a prefix or suffix. In the below example, the address key is duplicated 3 times. It may vary (1 to 3 times). I want to get the output as in the expected output, adding a suffix to make the key unique. Currently the update function is overwriting the key value. list = ['name:John','age:25','Address:Chicago','Address:Phoenix','Address:Washington','email:[email protected]'] dic = {} for i in list: j=i.split(':') dic.update({j[0]:j[1]}) print(dic) Current output: {'name': 'John', 'age': '25', 'Address': 'Washington', 'email': '[email protected]'} Expected output: {'name': 'John', 'age': '25', 'Address1': 'Chicago', 'Address2': 'Phoenix', 'Address3': 'Washington', 'email': '[email protected]'}
You can use something like this: list_ = ['name:John','age:25','Address:Chicago','Address:Phoenix','Address:Washington','email:[email protected]'] dic = {} for i in list_: j = i.split(':') key_ = j[0] count = 0 # counts the number of duplicates while key_ in dic: count += 1 key_ = j[0] + str(count) dic[key_] = j[1] Output: {'name': 'John', 'age': '25', 'Address': 'Chicago', 'Address1': 'Phoenix', 'Address2': 'Washington', 'email': '[email protected]'} PS: don't use the Python built-in name list for your variables, as it shadows the built-in list type.
5
1
76,917,614
2023-8-16
https://stackoverflow.com/questions/76917614/match-element-then-remove-both-elements
My list consists of several items: ["book","rule","eraser","clipboard","pencil",etc] Let's say I have an element ["book"] and I want to match it with another element ["rule"] of the same len (4). Then I want to remove both elements from the list: ["eraser","clipboard","pencil",etc] I tried using a for loop and zip(lists, lists[1:]) but I can only remove the elements next to each other, while other elements (like papers and pencil, len = 6) are never removed. I want to remove same-length words and have a final list with only the words whose length has no match or found no pair. For example: ["book","rule","eraser","clipboard","pencil","book"] then the final list will be: ["clipboard","book"] as book and clipboard found no pair.
Much simpler, think of it as turning on or off a flag. If the flag is on, you turn it off by deleting. If it's off, you turn it on by adding the word. Flags are based on same length: lst = ["book", "rule", "eraser", "clipboard", "pencil", "book"] output = {} for item in lst: length = len(item) popped = output.pop(length, None) if popped is None: output[length] = item print(list(output.values()))
2
5
76,913,084
2023-8-16
https://stackoverflow.com/questions/76913084/calling-another-member-decorator-from-another-member-decorator-in-python-3
I am trying to re-use a member function decorator for other member function decorator but I am getting the following error: 'function' object has no attribute '_MyClass__check_for_valid_token' Basically I have a working decorator that checks if a user is logged in (@LOGIN_REQUIRED) and I would like to call this first in the @ADMIN_REQUIRED decorator (so the idea is to check that the user is logged in with the existing @LOGIN_REQUIRED decorator and then add some specific validation to check if the logged user is an Administrator in the @ADMIN_REQUIRED decorator. My current code is like this: class MyClass: def LOGIN_REQUIRED(func): @wraps(func) def decorated_function(self, *args, **kwargs): # username and token should be the first parameters # throws if not logged in self.__check_for_valid_token(args[0], args[1]) return func(self, *args, **kwargs) return decorated_function @LOGIN_REQUIRED def ADMIN_REQUIRED(func): @wraps(func) def decorated_function(self, *args, **kwargs): is_admin = self.check_if_admin() if not is_admin: raise Exception() return func(self, *args, **kwargs) return decorated_function @ADMIN_REQUIRED def get_administration_data(self, username, token): # return important_data # currently throws 'function' object has no attribute '_MyClass__check_for_valid_token' Do you have any idea how could I get this to work? Some notes based on the comments and answers for clarification: The method __check_for_valid_token name can be changed to not run into name mangling issues. I was just using double underscore because it was a method supposedly only accessible by the class itself (private). There is no inheritance in "MyClass". The @LOGIN_REQUIRED code must run before the @ADMIN_REQUIRED code (as that is what someone expects, at least in my case).
I think this is possible, with two caveats; first, the decorator will have to move outside the class, and second, some adaptation will be required in regards to the name mangling. Let's tackle the first - first. Moving some functions Decorating a decorator directly may seem intuitive, but it probably won't result with what you want. You can, however, decorate an inner function - just like how @wraps is used. Due to how python parses code, the outer decorator will have to be defined outside (and before) the class, otherwise you'll get a NameError. The code should look something like this: def _OUTER_LOGIN_REQUIRED(func): @wraps(func) def decorated_function(self, *args, **kwargs): self.__check_for_valid_token(args[0], args[1]) return func(self, *args, **kwargs) return decorated_function [Notice no code-changes to this function (yet)] class MyClass: # The following line will make the transition seamless for most methods LOGIN_REQUIRED = _OUTER_LOGIN_REQUIRED def ADMIN_REQUIRED(func): @wraps(func) @_OUTER_LOGIN_REQUIRED # <-- Note that this is where it should be decorated def decorated_function(self, *args, **kwargs): ... [the rest of ADMIN_REQUIRED remains unchanged] @ADMIN_REQUIRED def get_administration_data(self, username, token): ... [this should now invoke LOGIN_REQUIRED -> ADMIN_REQUIRED -> the function] @LOGIN_REQUIRED def get_some_user_data(self, ...): ... [Such definitions should still work, as we added LOGIN_REQUIRED attribute to the class] If such a change is acceptable so-far, let's move on to Name Mangling As the name of __check_for_calid_token function is mangled (as its name starts with a dunder), you'll have to decide how to tackle it. There are two options: If there is no restriction, simply shorten the dunder to a single underscore (or rename as you like - as long as it doesn't start with more than one underscore). If the name mangling is important, you'll have to change the call in _OUTER_LOGIN_REQUIRED like so: def decorated_function(self, *args, **kwargs): self._MyClass__check_for_valid_token(args[0], args[1]) This might affect inheritance, and should be tested thoroughly. Summary I've tested around this a bit on python 3.9, and it seems to work quite well. I noticed that login errors are raised before admin errors, as I assume is desired. Still, I only poked around in a shallow manner, and while I can't think of a reason for this to misbehave, I strongly recommend testing this thoroughly before committing to this method (especially if the code includes inheritance, which I didn't even touch). I hope this works for you, and if it doesn't - let us know where it breaks, and how.
3
3
76,913,677
2023-8-16
https://stackoverflow.com/questions/76913677/python-rounding-error-when-sampling-variable-y-as-a-function-of-x-with-histogram
I'm trying to sample a variable (SST) as a function of another variable (TCWV) using the function histogram, with weights set to the sample variable like this: # average sst over bins num, _ = np.histogram(tcwv, bins=bins) sstsum, _ = np.histogram(tcwv, bins=bins,weights=sst) out=np.zeros_like(sstsum) out[:]=np.nan sstav = np.divide(sstsum,num,out=out, where=num>100) The whole code for reproducability is given below. My problem is that when I plot a scatter plot of the raw data and then I plot my calculated averages, the averages lie way outside the data "cloud" like this (see points on right): I can't think why this is happening, unless it is a rounding error perhaps? This is my whole code: import numpy as np import matplotlib.pyplot as plt from netCDF4 import Dataset # if you have a recent netcdf libraries you can access it directly here url = ('http://clima-dods.ictp.it/Users/tompkins/CRM/data/WRF_1min_mem3_grid4.nc#mode=bytes') ds=Dataset(url) ### otherwise need to download, and use this: ###ifile="WRF_1min_mem3_grid4.nc" ###ds=Dataset(idir+ifile) # axis bins bins=np.linspace(40,80,21) iran1,iran2=40,60 # can put in dict and loop sst=ds.variables["sst"][iran1:iran2+1,:,:] tcwv=ds.variables["tcwv"][iran1:iran2+1,:,:] # don't need to flatten, just tried it to see if helps (it doesn't) sst=sst.flatten() tcwv=tcwv.flatten() # average sst over bins num, _ = np.histogram(tcwv, bins=bins) sstsum, _ = np.histogram(tcwv, bins=bins,weights=sst) out=np.zeros_like(sstsum) out[:]=np.nan sstav = np.divide(sstsum,num,out=out,where=num>100) # bins centroids avbins=(np.array(bins[1:])+np.array(bins[:-1]))/2 #plot subsam=2 fig,(ax)=plt.subplots() plt.scatter(tcwv.flatten()[::subsam],sst.flatten()[::subsam],s=0.05,marker=".") plt.scatter(avbins,sstav,s=3,color="red") plt.ylim(299,303) plt.savefig("scatter.png")
I can't think why this is happening, unless it is a rounding error perhaps? It is in fact a rounding error. Specifically, when you're calculating the sum of sst within each bin here: sstsum, _ = np.histogram(tcwv, bins=bins,weights=sst) The results come out wrong by 0.1% versus two alternate methods I tried for calculating the sum. I have two ideas for ways to fix this. Approach #1 The simplest fix is to do the calculation in more precision. sstsum, _ = np.histogram(tcwv, bins=bins,weights=sst.astype('float64')) Without this change, sst has a dtype of float32. Approach #2 You might want to keep the calculation in 32-bit floats for performance reasons. They are somewhat faster than 64-bit floats. An alternate solution would be to subtract the mean before summing to improve numerical stability. sst_mean = sst.mean() num, _ = np.histogram(tcwv, bins=bins) sstsum, _ = np.histogram(tcwv, bins=bins,weights=sst - sst_mean) out=np.zeros_like(sstsum) out[:]=np.nan sstav = np.divide(sstsum,num,out=out,where=num>100) sstav += sst_mean This subtracts the overall mean of sst from each data point, then adds it back at the end. Since floats have more precision around 0, this makes the calculation more precise. Comparison Here is a plot of what approach #1 looks like: The plot of approach #2 looks the same. The two methods are equal within 1.32 * 10-5 of each other.
3
4
76,912,588
2023-8-16
https://stackoverflow.com/questions/76912588/type-hints-warnings-for-python-in-vs-code
I enjoy using type hinting (annotation) and have used it for some time to help write clean code. I appreciate that they are just hints and as such do not affect the code. But today I saw a video where the linter picked up the hint with a warning (a squiggly yellow underline), which looks really helpful. My VS Code does not pick this up in the linter. Here is an image of what I expect (with annotations): So my question is, how can I achieve this? For example, is there a specific linter or setting that would do this?
If you are using Pylance, you can add a new line to your settings.json (you need to restart VS Code after updating the file): "python.analysis.typeCheckingMode": "basic" The default value is off, the other possible values are basic and strict. The following screenshot shows warnings for different situations: wrong variable type, wrong function return type and wrong parameter type.
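For illustration, a small hypothetical snippet that basic type checking will flag with warnings like those in the screenshot (the exact message text may differ):
def double(x: int) -> int:
    return x * 2

double("hello")        # warning: argument of type "str" is not assignable to "int"
result: int = "text"   # warning: "str" is not assignable to declared type "int"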
8
14
76,913,406
2023-8-16
https://stackoverflow.com/questions/76913406/how-allow-fastapi-to-handle-multiple-requests-at-the-same-time
For some reason FastAPI doesn't respond to any requests while a request is handled. I would expect FastAPI to be able to handle multiple requests at the same time. I would like to allow multiple calls at the same time, as multiple users might be accessing the REST API. Minimal Example: Asynchronous Processes After starting the server with uvicorn minimal:app --reload, running request_test and executing test(), I get {'message': 'Done'} as expected. However, when I execute it again within the 20-second window of the first request, the request is not processed until the sleep_async from the first call is finished. Without asynchronous Processes The same problem (that I describe below) exists even if I don't use asynchronous calls and wait directly within async def info. That doesn't make sense to me. FastAPI: minimal #!/usr/bin/env python3 from fastapi import FastAPI from fastapi.responses import JSONResponse import time import asyncio app = FastAPI() @app.get("/test/info/") async def info(): async def sleep_async(): time.sleep(20) print("Task completed!") asyncio.create_task(sleep_async()) return JSONResponse(content={"message": "Done"}) Test: request_test #!/usr/bin/env python3 import requests def test(): print("Before") response = requests.get(f"http://localhost:8000/test/info") print("After") response_data = response.json() print(response_data)
You need to set the number of workers, as shown in the uvicorn settings: --workers <int>
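For example (note that --workers is not compatible with --reload, so the reload flag from the question would need to be dropped):
uvicorn minimal:app --workers 4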
5
3
76,910,226
2023-8-16
https://stackoverflow.com/questions/76910226/why-isnt-my-class-attribute-preserved-when-using-multiprocessing
I have the following class in a FastAPI application: import asyncio import logging from multiprocessing import Lock, Process from .production_status import Job as ProductionStatusJob class JobScheduler: loop = None logger = logging.getLogger("job_scheduler") process_lock = Lock() JOBS = [ProductionStatusJob] @classmethod def start(cls) -> None: cls.logger.info("Starting Up (1/2)") Process(target=cls._loop).start() @classmethod def _loop(cls) -> None: cls.loop = asyncio.get_event_loop() cls.loop.create_task(cls._run()) cls.logger.info("Startup Complete (2/2)") cls.loop.run_forever() cls.loop.close() @classmethod async def _run(cls) -> None: while True: ... @classmethod async def stop(cls) -> None: cls.logger.info("Shutting Down (1/2)") with cls.process_lock: cls.loop.stop() # <= This Line cls.loop.close() cls.logger.info("Shutdown Complete (2/2)") cls.loop = None On the startup and shutdown events of the FastAPI application, the JobScheduler.start() and JobScheduler.stop() methods will be called. The start method works smoothly, but in stop I get an error: File "/backend/app/main.py", line 146, in stop_job_scheduler 2023-08-16 11:46:27 await job_scheduler.stop() 2023-08-16 11:46:27 File "/backend/app/jobs/__init__.py", line 59, in stop 2023-08-16 11:46:27 cls.loop.stop() 2023-08-16 11:46:27 AttributeError: 'NoneType' object has no attribute 'stop' But cls.loop is set during the _loop method (which is executed at the end of start) - so why does cls.loop still have its initial None value when the stop method is called? Are there any better approaches to clean up the background processes when the FastAPI application calls shutdown?
multiprocessing in Python is funny. It's more powerful than multithreading but also comes with some caveats. The first of those is that you're actually running a different Python interpreter entirely. That means that global variables and the like are going to get a new copy for each process you run. Depending on your operating system and choice of start method, your processes may be forked or spawned. A spawned process will start anew, as though a new Python program was just spun up. A forked process will get all of the current values of variables from the source process, but it'll still copy all of those variables. Future changes to either process will not affect the other, without explicit synchronization using one of the multiprocessing helpers. You can use a Manager to synchronize data between processes explicitly. This acts sort of like a local server that both processes connect to. For more explicitly pub-sub data, you can also use a Queue to pass information from one process to another.
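A minimal, FastAPI-free sketch of the Manager approach, showing that a Manager-backed dict (unlike a plain class attribute) is visible across processes:
from multiprocessing import Manager, Process

def worker(shared):
    shared["loop_ready"] = True      # update made in the child process

if __name__ == "__main__":
    with Manager() as manager:
        shared = manager.dict()
        p = Process(target=worker, args=(shared,))
        p.start()
        p.join()
        print(shared["loop_ready"])  # True: the parent sees the child's change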
5
3
76,884,896
2023-8-11
https://stackoverflow.com/questions/76884896/add-row-count-per-group-in-polars
Is there a way to rewrite this: import numpy import polars df = (polars .DataFrame(dict( j=numpy.random.randint(10, 99, 20), )) .with_row_index() .select( g=polars.col('index') // 3, j='j' ) .with_columns(rn=1) .with_columns( rn=polars.col('rn').shift().fill_null(0).cum_sum().over('g') ) ) print(df) g (u32) j (i64) rn (i32) 0 47 0 0 22 1 0 82 2 1 19 0 1 85 1 1 15 2 2 89 0 2 74 1 2 26 2 3 11 0 3 86 1 3 81 2 4 16 0 4 35 1 4 60 2 5 30 0 5 28 1 5 94 2 6 21 0 6 38 1 shape: (20, 3) so it adds rn column without requiring it to add a column full of 1s first? I.e. somehow rewrite this part: .with_columns(rn=1) .with_columns( rn=polars.col('rn').shift().fill_null(0).cum_sum().over('g') ) so that: .with_columns(rn=1) is not required? Basically reduce two expressions to one. Or any other / better way to add a row count per group?
It can be done by generating an .int_range() using the length of each group. df.with_columns(rn = pl.int_range(pl.len()).over("g")) shape: (20, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ g ┆ j ┆ rn β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ u32 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════║ β”‚ 0 ┆ 14 ┆ 0 β”‚ # group_len = 3, range = [0, 1, 2] β”‚ 0 ┆ 81 ┆ 1 β”‚ β”‚ 0 ┆ 72 ┆ 2 β”‚ β”‚ 1 ┆ 34 ┆ 0 β”‚ β”‚ 1 ┆ 90 ┆ 1 β”‚ β”‚ … ┆ … ┆ … β”‚ β”‚ 5 ┆ 26 ┆ 0 β”‚ β”‚ 5 ┆ 44 ┆ 1 β”‚ β”‚ 5 ┆ 27 ┆ 2 β”‚ β”‚ 6 ┆ 70 ┆ 0 β”‚ β”‚ 6 ┆ 86 ┆ 1 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
6
6
76,866,139
2023-8-9
https://stackoverflow.com/questions/76866139/rdkit-image-moltoimage-scales-bond-widths-and-element-labels-inconsistently-for
I've noticed that when I create an image from a molecule in RDKit, the size argument leads to inconsistent scaling of the bond width and element labels. The bigger the size, the thinner the lines and the smaller the element labels. I've run a test by generating an image for the same molecule using MolToImage at progressively bigger sizes. I rescaled those images to size=(600,600) and then concatenated them into a GIF. This is the result. Here's my code from glob import glob from rdkit import Chem from rdkit.Chem import Draw from PIL import Image,ImageDraw,ImageFont def make_frames_from_smi(smi): for i in range(10): s = (i+3)*100 mol = Chem.MolFromSmiles(smi) img = Draw.MolToImage(mol,size=(s,s)) img = img.resize((600,600)) draw = ImageDraw.Draw(img) text = '%d: Initial Size: (%d,%d)'%(i+1,s,s) font_size = 40 font = ImageFont.truetype("arial.ttf", font_size) # Use your desired font # Calculate text position image_width, image_height = img.size text_x = (image_width - (bbox[2] - bbox[0])) // 2 text_y = 20 # Adjust the vertical position as needed draw.text((text_x, text_y), text, font=font, fill='black') img.save('%03dtest.png'%i) def make_gif_from_frames(paths): frames_paths = glob(paths) frames = [Image.open(imgp) for imgp in frames_paths] frames[0].save("mols.gif", format="GIF", append_images=frames, save_all=True, duration=500, loop=False) # make RDKit mol obj. smi = 'CN(C)CC1CCCCC1(C2=CC(=CC=C2)OC)O' make_frames_from_smi(smi) make_gif_from_frames('*.png') Is this expected behaviour? Is the bond width held constant for a certain absolute value of pixels? How can I generate these images with consistent proportions regardless of width/height of pixels?
OK found , I hope , two solutions ??? First one, using Draw.rdMolDraw2D.MolDraw2DCairo: from glob import glob from rdkit import Chem from rdkit.Chem import Draw from PIL import Image from io import BytesIO def make_frames_from_smi(smi): mol = Chem.MolFromSmiles(smi) for i in range(10): s = (i+3)*100 mol = Chem.MolFromSmiles(smi) d = Draw.rdMolDraw2D.MolDraw2DCairo(s,s) dopts = d.drawOptions() # dopts.maxFontSize=40 # dopts.minFontSize=40 dopts.maxFontSize=int(0.13*s) dopts.minFontSize=int(0.13*s) print('dopts.bondLineWidth : ', dopts.bondLineWidth) #default is 2.0 dopts.bondLineWidth=(0.007*s) print('dopts.bondLineWidth : ', dopts.bondLineWidth) #default is 2.0 d.DrawMolecule(mol, legend= '%d: Initial Size: (%d,%d)'%(i+1,s,s)) d.FinishDrawing() img = d.GetDrawingText() img_b = BytesIO() img_b.write(img) pil_img = Image.open(img_b).resize((600,600)) pil_img.save('%03dtest.png'%i) def make_gif_from_frames(paths): frames_paths = glob(paths) frames_paths.sort() print(frames_paths) frames = [Image.open(imgp) for imgp in frames_paths] frames[0].save("mols.gif", format="GIF", append_images=frames, save_all=True, duration=500, loop=False) smi = 'CN(C)CC1CCCCC1(C2=CC(=CC=C2)OC)O' make_frames_from_smi(smi) make_gif_from_frames('*.png') result: or using SVG, Draw.rdMolDraw2D.MolDraw2DSVG, that I like more as a pic format : import rdkit print('\n-------------------------------') print('\n rdkit Version : ', rdkit.__version__) print('\n-------------------------------') from glob import glob from rdkit import Chem from rdkit.Chem import Draw from PIL import Image,ImageDraw,ImageFont from io import BytesIO from cairosvg import svg2png from moviepy.editor import ImageClip, concatenate_videoclips def make_frames_from_smi(smi): mol = Chem.MolFromSmiles(smi) drawer= Draw.rdMolDraw2D.MolDraw2DSVG(600,600) dopts = drawer.drawOptions() for i in dir(dopts) : print(i) dopts.minFontSize = -1 dopts.maxFontSize = -1 # dopts.minFontSize = 80 # dopts.maxFontSize = 80 print('drawer.FontSize : ', drawer.FontSize()) # dopts.annotationFontScale = 0.5 dopts.addAtomIndices = True drawer.DrawMolecule(mol) drawer.FinishDrawing() svg_data = drawer.GetDrawingText() with open('dtest.svg' , 'w') as handler: handler.write(svg_data) for i in range(10): s = (i+3)*100 png = svg2png(bytestring=svg_data) img = Image.open(BytesIO(png)).convert('RGBA').resize((s,s)) img.save('%03dtest.png'%i) def make_gif_from_frames(paths): input_png_list = glob(paths) input_png_list.sort() clips = [ImageClip(i).set_duration(1) for i in input_png_list] concat_clip = concatenate_videoclips(clips, method="compose") concat_clip.write_gif("test.gif", fps=2) smi = 'CN(C)CC1CCCCC1(C2=CC(=CC=C2)OC)O' make_frames_from_smi(smi) make_gif_from_frames('*.png') result:
4
1
76,883,571
2023-8-11
https://stackoverflow.com/questions/76883571/polars-expression-when-then-otherwise-is-slow
I noticed a thing in Python Polars. I'm not sure, but it seems that pl.when().then().otherwise() is slow somewhere. For instance, for this dataframe:
df = pl.DataFrame({
    'A': [randint(1, 10**15) for _ in range(30_000_000)],
    'B': [randint(1, 10**15) for _ in range(30_000_000)],
}, schema={
    'A': pl.UInt64,
    'B': pl.UInt64,
})
Horizontal min with pl.min_horizontal:
df.with_columns(
    pl.min_horizontal(['A', 'B']).alias('min_column')
)
92.4 ms ± 16.3 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
And the same with when().then().otherwise():
df.with_columns(
    pl.when(
        pl.col('A') < pl.col('B')
    ).then(pl.col('A')).otherwise(pl.col('B')).alias('min_column'),
)
458 ms ± 75.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
I measured the when part explicitly and it does not seem to be the bottleneck.
df.with_columns((pl.col('A') < pl.col('B')).alias('column_comparison'))
49.2 ms ± 6.23 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
If I remove otherwise() it is even slower.
df.with_columns(
    pl.when(
        pl.col('A') < pl.col('B')
    ).then(pl.col('A')).alias('min_column')
)
664 ms ± 19.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
I have also tried some other methods for horizontal reduction, such as pl.reduce or pl.fold, and it seems they are all much faster than when().then(). So the questions here: Is this expected behavior? Why is pl.when().then() much slower than other expressions? In which cases should we avoid when().then().otherwise()?
I've got some comments from the Polars developers on Discord. I don't see anything out of the ordinary here. Removing otherwise just means .otherwise(pl.lit(None)) is called in the background. It will have to create that column rather than using the existing one, so it will be slower. If you can write your expression as a fold it might be faster, as you have noticed with min_horizontal. So my conclusion here: when you have to reduce several columns into one column, it is a better choice to use the fold or reduce methods when possible, instead of when().then().
EDIT: Since Polars 0.20.17 there has been a huge speed-up in when/then operations, caused by refactoring the if-then-else kernels. Benchmarks: https://github.com/pola-rs/polars/pull/15131
So now it is not an issue to use if-then-else if it is necessary.
3
4
76,888,242
2023-8-12
https://stackoverflow.com/questions/76888242/how-to-set-environment-variables-in-pipelines-in-azure-ml-sdk-v2-with-jobs-creat
I am changing some of our code from Azure ML's SDK v1 to v2. However, when I invoke pipelines with components via ml_client.jobs.create_or_update, I just can't get them to use my environment variables. Here is what I am doing: preprocessing_component = load_component( source=Path(__file__).parent / "preprocessing_component.yaml" ) @pipeline() def example_train_pipeline(input_data_path): preprocess_step = preprocessing_component( input_data_path=input_data_path pipeline_job = example_train_pipeline( input_data_path=Input( type=AssetTypes.URI_FILE, path="xxx", ) ) pipeline_job.settings.default_compute = e.cluster_name pipeline_job = ml_client.jobs.create_or_update( pipeline_job, experiment_name=experiment_name ) I tried to set .env_variables when creating my AZ ML environment (which is loaded for this pipeline's component in the yaml). This stated this parameters was deprecated and I should use RunConfig.environment_variables instead. Thing is, I can't find docs on how to use a RunConfig with ml_client.jobs.create_or_update. I tried just passing a RunConfig with variables set via run_config.environment_variables to create_or_update, but this had no apparrent effect.
With the introduction of Azure ML SDK v2, the concept of components has been emphasized. These components allow you to define specific environments individually, and you can set environment variables for each component as in the following sample Python code (environment, distribution, resources, inputs and outputs are placeholders defined elsewhere):
environment_variables = {"environ": "val"}
command_function = command(
    display_name="command-function-job",
    environment=environment,
    command='echo "hello world"',
    distribution=distribution,
    resources=resources,
    environment_variables=environment_variables,
    inputs=inputs,
    outputs=outputs,
)
3
1
76,901,874
2023-8-14
https://stackoverflow.com/questions/76901874/userwarning-the-figure-layout-has-changed-to-tight-self-figure-tight-layouta
Why do I keep getting this warning whenever I try to use FacetGrid from seaborn? UserWarning: The figure layout has changed to tight. self._figure.tight_layout(*args, **kwargs) I understand it's a warning and not an error and I also understand it is changing the layout to tight. My question is why does it appear in the first place? Am I missing something? Example code: import seaborn as sns penguins = sns.load_dataset("penguins") g = sns.FacetGrid(penguins, col="island") g.map_dataframe(sns.histplot, x="bill_length_mm") This code throws that warning. What am I doing wrong? I know I can hide them with warnings module but I don't wanna do that.
As mentioned in the comments, this is a matplotlib bug. It was fixed in version 3.7.3, so you can avoid it by upgrading Matplotlib. One of the comments suggests calling plt.figure(..., layout='constrained'), instead of tight_layout(), and that matches a few comments I found in the docs, like the Constrained Layout Guide: Constrained layout is similar to Tight layout, but is substantially more flexible. I saw this warning, because I was calling subplots(), then repeatedly plotting, calling tight_layout(), saving the figure, and calling cla(). I fixed it by removing the call to tight_layout() from the loop, and calling subplots(..., layout='tight'). Mind that you may need Python 3.11 for matplotlib 3.7.3, as a test with Python 3.10 shows: conda install matplotlib[version='>=3.7.3'] Output: Retrieving notices: ...working... done Channels: - defaults Platform: linux-64 Collecting package metadata (repodata.json): done Solving environment: \ warning libmamba Added empty dependency for problem type SOLVER_RULE_UPDATE failed LibMambaUnsatisfiableError: Encountered problems while solving: - package matplotlib-3.8.0-py311h06a4308_0 requires matplotlib-base >=3.8.0,<3.8.1.0a0, but none of the providers can be installed Could not solve for environment specs The following packages are incompatible β”œβ”€ matplotlib-base 3.7.2.* is installable with the potential options β”‚ β”œβ”€ matplotlib-base 3.7.2, which can be installed; β”‚ β”œβ”€ matplotlib-base [3.7.2|3.8.0] would require β”‚ β”‚ └─ python >=3.10,<3.11.0a0 , which can be installed; β”‚ β”œβ”€ matplotlib-base 3.7.2 would require β”‚ β”‚ └─ python >=3.8,<3.9.0a0 , which can be installed; β”‚ └─ matplotlib-base [3.7.2|3.8.0] would require β”‚ └─ python >=3.9,<3.10.0a0 , which can be installed; β”œβ”€ matplotlib >=3.7.3 is installable with the potential options β”‚ β”œβ”€ matplotlib 3.8.0 would require β”‚ β”‚ └─ matplotlib-base >=3.8.0,<3.8.1.0a0 with the potential options β”‚ β”‚ β”œβ”€ matplotlib-base [3.7.2|3.8.0], which can be installed (as previously explained); β”‚ β”‚ β”œβ”€ matplotlib-base [3.7.2|3.8.0], which can be installed (as previously explained); β”‚ β”‚ β”œβ”€ matplotlib-base 3.8.0 conflicts with any installable versions previously reported; β”‚ β”‚ └─ matplotlib-base 3.8.0 would require β”‚ β”‚ └─ python >=3.12,<3.13.0a0 , which can be installed; β”‚ β”œβ”€ matplotlib 3.8.0 would require β”‚ β”‚ └─ python >=3.10,<3.11.0a0 , which can be installed; β”‚ β”œβ”€ matplotlib 3.8.0 would require β”‚ β”‚ └─ python >=3.12,<3.13.0a0 , which can be installed; β”‚ └─ matplotlib 3.8.0 would require β”‚ └─ python >=3.9,<3.10.0a0 , which can be installed; └─ pin-1 is not installable because it requires └─ python 3.11.* , which conflicts with any installable versions previously reported. Pins seem to be involved in the conflict. Currently pinned specs: - python 3.11.* (labeled as 'pin-1')
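As a small illustration of the layout approach mentioned above (the figure contents here are arbitrary), requesting the layout when the figure is created avoids the later tight_layout() call:
import matplotlib.pyplot as plt

# 'constrained' (or 'tight') is requested up front instead of calling tight_layout() later
fig, axs = plt.subplots(2, 2, layout="constrained")
for ax in axs.flat:
    ax.plot([0, 1], [0, 1])
fig.savefig("demo.png")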
10
5
76,898,131
2023-8-14
https://stackoverflow.com/questions/76898131/failedpreconditionerror-runs-is-not-a-directory
I made my own GAN with custom dataset that I collected, I wanted to use Tensorboard, It gives me an error.I asked ChatGPT,Bard and BingAI but they couldn't fix the error. from torch.utils.tensorboard import SummaryWriter writer_fake = SummaryWriter() writer_real = SummaryWriter() I get this error, it creates a new folder named "run", but then raises the error --------------------------------------------------------------------------- FailedPreconditionError Traceback (most recent call last) Cell In[28], line 14 9 # os.makedirs("runs/Images/fake") 10 # os.makedirs("runs/Images/real") 11 12 # Initialize SummaryWriter 13 writer_fake = SummaryWriter() ---> 14 writer_real = SummaryWriter() File ~\pytorch\MLvenv\Lib\site-packages\torch\utils\tensorboard\writer.py:247, in SummaryWriter.__init__(self, log_dir, comment, purge_step, max_queue, flush_secs, filename_suffix) 244 # Initialize the file writers, but they can be cleared out on close 245 # and recreated later as needed. 246 self.file_writer = self.all_writers = None --> 247 self._get_file_writer() 249 # Create default bins for histograms, see generate_testdata.py in tensorflow/tensorboard 250 v = 1e-12 File ~\pytorch\MLvenv\Lib\site-packages\torch\utils\tensorboard\writer.py:277, in SummaryWriter._get_file_writer(self) 275 """Returns the default FileWriter instance. Recreates it if closed.""" 276 if self.all_writers is None or self.file_writer is None: --> 277 self.file_writer = FileWriter( 278 self.log_dir, self.max_queue, self.flush_secs, self.filename_suffix 279 ) 280 self.all_writers = {self.file_writer.get_logdir(): self.file_writer} 281 if self.purge_step is not None: File ~\pytorch\MLvenv\Lib\site-packages\torch\utils\tensorboard\writer.py:76, in FileWriter.__init__(self, log_dir, max_queue, flush_secs, filename_suffix) 71 # Sometimes PosixPath is passed in and we need to coerce it to 72 # a string in all cases 73 # TODO: See if we can remove this in the future if we are 74 # actually the ones passing in a PosixPath 75 log_dir = str(log_dir) ---> 76 self.event_writer = EventFileWriter( 77 log_dir, max_queue, flush_secs, filename_suffix 78 ) File ~\pytorch\MLvenv\Lib\site-packages\tensorboard\summary\writer\event_file_writer.py:72, in EventFileWriter.__init__(self, logdir, max_queue_size, flush_secs, filename_suffix) 57 """Creates a `EventFileWriter` and an event file to write to. 58 59 On construction the summary writer creates a new event file in `logdir`. (...) 69 pending events and summaries to disk. 70 """ 71 self._logdir = logdir ---> 72 tf.io.gfile.makedirs(logdir) 73 self._file_name = ( 74 os.path.join( 75 logdir, (...) 84 + filename_suffix 85 ) # noqa E128 86 self._general_file_writer = tf.io.gfile.GFile(self._file_name, "wb") File ~\pytorch\MLvenv\Lib\site-packages\tensorflow\python\lib\io\file_io.py:513, in recursive_create_dir_v2(path) 501 @tf_export("io.gfile.makedirs") 502 def recursive_create_dir_v2(path): 503 """Creates a directory and all parent/intermediate directories. 504 505 It succeeds if path already exists and is writable. (...) 511 errors.OpError: If the operation fails. 512 """ --> 513 _pywrap_file_io.RecursivelyCreateDir(compat.path_to_bytes(path)) FailedPreconditionError: runs is not a directory I searched a lot but couldn't find anything.
I just had a similar problem and was able to solve it. In my case the problem was that there were Russian letters in the path of the .ipynb file (the Windows username is written in Russian). As soon as I moved the file to a different path with only Latin letters, the problem was solved.
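If you want to check whether your own notebook path is affected, a quick sketch using only the standard library (nothing specific to TensorBoard) is:
import os

path = os.getcwd()  # or the folder you pass as the log directory
print(path)
print("ASCII-only path:", path.isascii())  # False means non-Latin characters are present
If it prints False, moving the notebook (or the log directory) to an ASCII-only path is worth trying before anything else.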
3
2
76,887,424
2023-8-12
https://stackoverflow.com/questions/76887424/how-solve-python-setup-py-egg-info-did-not-run-successfully-in-anaconda-insta
I recently installed Anaconda and Python (3.11.4). I created my env in Anaconda, and when I tried to install rasa (with "pip install rasa") conda showed me this error. Please help me ERROR when i installed rasa pip Preparing metadata (setup.py) ... error error: subprocess-exited-with-error Γ— python setup.py egg_info did not run successfully. β”‚ exit code: 1 ╰─> [6 lines of output] Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "C:\Users\Eliseo\AppData\Local\Temp\pip-install-qo_63x3c\absl-py_16e4b8fcbea9469084883112126934fa\setup.py", line 34, in <module> raise RuntimeError('Python version 2.7 or 3.4+ is required.') RuntimeError: Python version 2.7 or 3.4+ is required. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed Γ— Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. I tried uninstalling Python, and "conda clean --all" in Anaconda Prompt. But it doesn't work.
In the official Rasa documentation, it says that it is not compatible with the most recent Python versions. The most current version that is 100% compatible is Python 3.8, which you can install like this:
wget https://www.python.org/ftp/python/3.8.0/Python-3.8.0.tar.xz
tar -xf Python-3.8.0.tar.xz
cd Python-3.8.0
./configure --enable-optimizations
make
sudo make altinstall
Then create your application (example in the /tmp path):
cd /tmp/myapp
python3.8 -m venv venv
source venv/bin/activate
pip install rasa
Attention: you may get an outdated-pip error; in that case run
pip install --upgrade pip
2
4
76,899,779
2023-8-14
https://stackoverflow.com/questions/76899779/minimum-package-requirements-to-run-a-jupyter-notebook-in-vscode
Recently I keep running into problems with my python notebooks in vscode where vscode doesn't see the installed ipykernel. There are several posts on this issue with suggestions to update certain packages (VSCode not picking up ipykernel, Python requires ipykernel to be installed, vscode not detecting ipykernel, verified it is actually installed, Install ipykernel in vscode - ipynb (Jupyter), ...) This makes me wonder what the actual minimum requirements are. Different "offical" channels mention different dependencies: https://code.visualstudio.com/docs/datascience/jupyter-notebooks#_setting-up-your-environment mentions jupyter https://github.com/microsoft/vscode-jupyter/wiki/Jupyter-Kernels-and-the-Jupyter-Extension#python-environments mentions ipython and ipykernel https://github.com/microsoft/vscode-jupyter/wiki/Installing-Jupyter mentions jupyter may be required. Previously I only had ipykernel notebook installed in the conda environment which worked just fine. So what are the actual requirements to run jupyter notebooks in vscode? What are the needed packages with versions?
The minimum requirement, according to the ticket https://github.com/microsoft/vscode-jupyter/issues/14130, is ipykernel. The jupyter package is only required when the extension explicitly asks you to install it; that has to do with the native zmq modules not working on the platform.
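A minimal way to satisfy that requirement in a conda environment (the environment name myenv is just an example) would be:
conda install -n myenv ipykernel
# or, with pip inside the activated environment:
python -m pip install ipykernel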
3
0
76,901,337
2023-8-14
https://stackoverflow.com/questions/76901337/why-do-f-strings-require-parentheses-around-assignment-expressions
In Python (3.11) why does the use of an assignment expression (the "walrus operator") require wrapping in parentheses when used inside an f-string? For example: #!/usr/bin/env python from pathlib import Path import torch DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu") ckpt_dir = Path("/home/path/to/checkpoints") _ckpt = next(ckpt_dir.iterdir()) print(_ckpt) sdict = torch.load(_ckpt, map_location=DEVICE) model_dict = sdict["state_dict"] for k, v in model_dict.items(): print(k) print(type(v)) print(_shape := v.size()) print(f"{(_numel := v.numel())}") print(_numel == torch.prod(torch.tensor(_shape))) The code block above with print(f"{_numel := v.numel()}") instead does not parse. What about the parsing / AST creation mandates this?
This behavior was explicitly specified in the original PEP for the assignment expressions (aka the walrus operator). The reason for this was to preserve backward compatibility with formatted string literals. Before assignment expressions were added, you could already write f-strings like f"{x:=y}", which meant "format x using the format specification =y". Quoting PEP 572 – Assignment Expressions: Assignment expressions inside of f-strings require parentheses. Example: >>> f'{(x:=10)}' # Valid, uses assignment expression '10' >>> x = 10 >>> f'{x:=10}' # Valid, passes '=10' to formatter ' 10' This shows that what looks like an assignment operator in an f-string is not always an assignment operator. The f-string parser uses : to indicate formatting options. To preserve backward compatibility, assignment operator usage inside of f-strings must be parenthesized. As noted above, this usage of the assignment operator is not recommended.
16
24
76,886,257
2023-8-11
https://stackoverflow.com/questions/76886257/how-to-validate-access-token-from-azuread-in-python
What is the recommended way to validate the access token in the backend? Is there any library that handles it? Another team has implemented the frontend; they send the access token in the Bearer attribute of the header. I found https://github.com/odwyersoftware/azure-ad-verify-token but it has only 17 stars. I thought Microsoft would support this in MSAL (https://github.com/AzureAD/microsoft-authentication-library-for-python), but it seems not. Any suggestions on how to implement it in a secure way? Or any good libraries that handle the validation? I have tried writing the code myself, but I ran into problems and I am worried it is not secure, and the code got messy. I also tried the library above, but I would like something more popular so it is not a security risk.
Microsoft does not have a Python library to validate access tokens. Nevertheless, I found this official sample. You can check the requires_auth() function, which is used to validate the access token.
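Beyond that sample, a common do-it-yourself pattern (not Microsoft's code, just a sketch) is to verify the token against the tenant's published signing keys with the PyJWT package; the tenant ID and audience below are placeholders you would replace with your own values:
import jwt  # PyJWT

TENANT_ID = "<your-tenant-id>"      # assumption: your Azure AD tenant
AUDIENCE = "<your-api-client-id>"   # assumption: the app registration the token was issued for
JWKS_URL = f"https://login.microsoftonline.com/{TENANT_ID}/discovery/v2.0/keys"

def validate_token(token: str) -> dict:
    # Fetch the signing key that matches the token's 'kid' header
    signing_key = jwt.PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    # Verify signature, expiry and audience; raises jwt.PyJWTError on failure
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
    )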
9
8
76,880,224
2023-8-11
https://stackoverflow.com/questions/76880224/error-using-using-docarrayinmemorysearch-in-langchain-could-not-import-docarray
Here is the full code. It runs perfectly fine on https://learn.deeplearning.ai/ notebook. But when I run it on my local machine, I get an error about ImportError: Could not import docarray python package I have tried reinstalling/force installing langchain and lanchain[docarray] (both pip and pip3). I use mini conda virtual environment. python version 3.11.4 from langchain.vectorstores import DocArrayInMemorySearch from langchain.schema import Document from langchain.indexes import VectorstoreIndexCreator import openai import os os.environ['OPENAI_API_KEY'] = "xxxxxx" #not needed in DLAI docs = [ Document( page_content="""[{"API_Name":"get_invoice_transactions","API_Description":"This API when called will provide the list of transactions","API_Inputs":[],"API_Outputs":[]}]""" ), Document( page_content="""[{"API_Name":"get_invoice_summary_year","API_Description":"this api summarizes the invoices by vendor, product and year","API_Inputs":[{"API_Input":"Year","API_Input_Type":"Text"}],"API_Outputs":[{"API_Output":"Purchase Volume","API_Output_Type":"Float"},{"API_Output":"Vendor Name","API_Output_Type":"Text"},{"API_Output":"Year","API_Output_Type":"Text"},{"API_Output":"Item","API_Output_Type":"Text"}]}]""" ), Document( page_content="""[{"API_Name":"loan_payment","API_Description":"This API calculates the monthly payment for a loan","API_Inputs":[{"API_Input":"Loan_Amount","API_Input_Type":"Float"},{"API_Input":"Interest_Rate","API_Input_Type":"Float"},{"API_Input":"Loan_Term","API_Input_Type":"Integer"}],"API_Outputs":[{"API_Output":"Monthly_Payment","API_Output_Type":"Float"},{"API_Output":"Total_Interest","API_Output_Type":"Float"}]}]""" ), Document( page_content="""[{"API_Name":"image_processing","API_Description":"This API processes an image and applies specified filters","API_Inputs":[{"API_Input":"Image_URL","API_Input_Type":"URL"},{"API_Input":"Filters","API_Input_Type":"List"}],"API_Outputs":[{"API_Output":"Processed_Image_URL","API_Output_Type":"URL"}]}]""" ), Document( page_content="""[{"API_Name":"movies_catalog","API_Description":"This API provides a catalog of movies based on user preferences","API_Inputs":[{"API_Input":"Genre","API_Input_Type":"Text"},{"API_Input":"Release_Year","API_Input_Type":"Integer"}],"API_Outputs":[{"API_Output":"Movie_Title","API_Output_Type":"Text"},{"API_Output":"Genre","API_Output_Type":"Text"},{"API_Output":"Release_Year","API_Output_Type":"Integer"},{"API_Output":"Rating","API_Output_Type":"Float"}]}]""" ), # Add more documents here ] index = VectorstoreIndexCreator( vectorstore_cls=DocArrayInMemorySearch ).from_documents(docs) api_desc = "do analytics about movies" query = f"Search for related APIs based on following API Description: {api_desc}\ Return list of API page_contents as JSON objects." 
print(index.query(query)) Here is the error: (streamlit) C02Z8202LVDQ:sage_response praneeth.gadam$ /Users/praneeth.gadam/opt/miniconda3/envs/streamlit/bin/python /Users/praneeth.gadam/sage_response/docsearch_copy.py Traceback (most recent call last): File "/Users/praneeth.gadam/opt/miniconda3/envs/streamlit/lib/python3.11/site-packages/langchain/vectorstores/docarray/base.py", line 19, in _check_docarray_import import docarray ModuleNotFoundError: No module named 'docarray' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/praneeth.gadam/sage_response/docsearch_copy.py", line 30, in <module> ).from_documents(docs) ^^^^^^^^^^^^^^^^^^^^ File "/Users/praneeth.gadam/opt/miniconda3/envs/streamlit/lib/python3.11/site-packages/langchain/indexes/vectorstore.py", line 88, in from_documents vectorstore = self.vectorstore_cls.from_documents( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/praneeth.gadam/opt/miniconda3/envs/streamlit/lib/python3.11/site-packages/langchain/vectorstores/base.py", line 420, in from_documents return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/praneeth.gadam/opt/miniconda3/envs/streamlit/lib/python3.11/site-packages/langchain/vectorstores/docarray/in_memory.py", line 67, in from_texts store = cls.from_params(embedding, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/praneeth.gadam/opt/miniconda3/envs/streamlit/lib/python3.11/site-packages/langchain/vectorstores/docarray/in_memory.py", line 38, in from_params _check_docarray_import() File "/Users/praneeth.gadam/opt/miniconda3/envs/streamlit/lib/python3.11/site-packages/langchain/vectorstores/docarray/base.py", line 29, in _check_docarray_import raise ImportError( ImportError: Could not import docarray python package. Please install it with `pip install "langchain[docarray]"`.
Try to install this way: pip install docarray
5
3
76,905,132
2023-8-15
https://stackoverflow.com/questions/76905132/separate-a-string-between-each-two-neighbouring-different-digits-via-re-split
For instance, I'd like to convert "91234 5g556οΌ—\t7₇89^" into ["9","1","2","3","4 5g55","6οΌ—\t7₇8","9^"]. Of course this can be done in a for loop without using any regular expressions, but I want to know if this can be done via a singular regular expression. At present I find two ways to do so: >>> import re >>> def way0(char: str): ... delimiter = "" ... while True: ... delimiter += " " ... if delimiter not in char: ... substitution = re.compile("([0-9])(?!\\1)([0-9])") ... replacement = "\\1"+delimiter+"\\2" ... cin = [char] ... while True: ... cout = [] ... for term in cin: cout.extend(substitution.sub(replacement,term).split(delimiter)) ... if cout == cin: ... return cin ... else: ... cin = cout ... >>> way0("91234 5g556οΌ—\t7₇89^") ['9', '1', '2', '3', '4 5g55', '6οΌ—\t7₇8', '9^'] >>> import functools >>> way1 = lambda w: ["".join(list(y)) for x, y in itertools.groupby(re.split("(0+|1+|2+|3+|4+|5+|6+|7+|8+|9+)", w), lambda z: z != "") if x] >>> way1("91234 5g556οΌ—\t7₇89^") ['9', '1', '2', '3', '4 5g55', '6οΌ—\t7₇8', '9^'] However, neither way0 nor way1 is concise (and ideal). I have read the help page of re.split; unfortunately, the following code does not return the desired output: >>> re.split(r"(\d)(?!\1)(\d)","91234 5g556οΌ—\t7₇89^") ['', '9', '1', '', '2', '3', '4 5g5', '5', '6', 'οΌ—\t7₇', '8', '9', '^'] Can re.split solve this problem directly (that is, without extra conversions)? (Note that here I don't focus on the efficiency.) There are some questions of this topic before (for example, Regular expression of two digit number where two digits are not same, Regex to match 2 digit but different numbers, and Regular expression to match sets of numbers that are not equal nor reversed), but they are about "RegMatch". In fact, my question is about "RegSplit" (rather than "RegMatch" or "RegReplace").
If you want to solve this using re.split in one step, without capturing groups or any further processing, an idea is to use only lookarounds and, in the lookbehind, disallow two identical digits looking ahead.
(?=[0-9])(?<=(?!00|11|22|33|44|55|66|77|88|99)[0-9])
See this demo at regex101 or the Python demo at tio.run.
The way it works: the lookarounds find any position between two digits. Inside the lookbehind, the negative lookahead prevents matching (before) if two identical digits are ahead. I used [0-9] and not \d because I am unsure whether \d matches Unicode digits in your Python version.
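For completeness, a sketch of how this pattern would be plugged into re.split; the expected output is taken from the question's example, so please run it to confirm on your Python version:
import re

pattern = r"(?=[0-9])(?<=(?!00|11|22|33|44|55|66|77|88|99)[0-9])"
s = "91234 5g556οΌ—\t7₇89^"
print(re.split(pattern, s))
# expected: ['9', '1', '2', '3', '4 5g55', '6οΌ—\t7₇8', '9^']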
2
2
76,906,469
2023-8-15
https://stackoverflow.com/questions/76906469/langchain-zero-shot-react-agent-uses-memory-or-not
I'm experimenting with LangChain's AgentType.CHAT_ZERO_SHOT_REACT agent. By its name I'd assume this is an agent intended for chat use and I've given it memory but it doesn't seem able to access its memory. What else do I need to do so that this will access its memory? Or have I incorrectly assumed that this agent can handle chats? Here is my code and sample output: llm = ChatOpenAI(model_name="gpt-4", temperature=0) tools = load_tools(["llm-math", "wolfram-alpha", "wikipedia"], llm=llm) memory = ConversationBufferMemory(memory_key="chat_history") agent_test = initialize_agent( tools=tools, llm=llm, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, handle_parsing_errors=True, memory=memory, verbose=True ) >>> agent_test.run("What is the height of the empire state building?") 'The Empire State Building stands a total of 1,454 feet tall, including its antenna.' >>> agent_test.run("What was the last question I asked?") "I'm sorry, but I can't provide the information you're looking for."
It does not. That's indicated by "zero-shot", which means it only looks at the current prompt. From here:
"Zero-shot means the agent functions on the current action only - it has no memory. It uses the ReAct framework to decide which tool to use, based solely on the tool's description."
I think when you work with this agent type, you should add a description to each tool, so that, based on the description, the LLM will infer which tool to use. That is the "description" part of "zero-shot-react-description". Example from the same link above:
math_tool = Tool(
    name='Calculator',
    func=llm_math.run,
    description='Useful for when you need to answer questions about math.'
)
When the LLM sees the prompt, if it infers that the prompt is related to math, it will use the math_tool.
If you want to use memory, you should be using chat-conversational-react-description:
from langchain.memory import ConversationBufferMemory
from langchain.chat_models import ChatOpenAI

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY, temperature=0)
agent_chain = initialize_agent(tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)
2
6
76,886,954
2023-8-11
https://stackoverflow.com/questions/76886954/multiple-file-loading-and-embeddings-with-openai
I am trying to load a bunch of pdf files and query them using OpenAI APIs. from langchain.text_splitter import CharacterTextSplitter #from langchain.document_loaders import UnstructuredFileLoader from langchain.document_loaders import UnstructuredPDFLoader from langchain.vectorstores.faiss import FAISS from langchain.embeddings import OpenAIEmbeddings import pickle import os print("Loading data...") pdf_folder_path = "content/" print(os.listdir(pdf_folder_path)) # Load multiple files # location of the pdf file/files. loaders = [UnstructuredPDFLoader(os.path.join(pdf_folder_path, fn)) for fn in os.listdir(pdf_folder_path)] print(loaders) alldocument = [] vectorstore = None for loader in loaders: print("Loading raw document..." + loader.file_path) raw_documents = loader.load() print("Splitting text...") text_splitter = CharacterTextSplitter( separator="\n\n", chunk_size=800, chunk_overlap=100, length_function=len, ) documents = text_splitter.split_documents(raw_documents) #alldocument = alldocument + documents print("Creating vectorstore...") embeddings = OpenAIEmbeddings() vectorstore = FAISS.from_documents(documents, embeddings) #with open("vectorstore.pkl", "wb") as f: with open("vectorstore.pkl", "ab") as f: pickle.dump(vectorstore, f) f.close() I am trying to load multiple files for QnA but the index only remembers the last file uploaded from a folder. Do I need to change the structure of for loop or have another parameter with the Open Method?
The problem is that with each iteration of the loop, you're overwriting the previous vectorstore when you create a new one. Then, when saving to "vectorstore.pkl", you're only saving the last vectorstore. print("Loading data...") pdf_folder_path = "content/" print(os.listdir(pdf_folder_path)) # Load multiple files loaders = [UnstructuredPDFLoader(os.path.join(pdf_folder_path, fn)) for fn in os.listdir(pdf_folder_path)] print(loaders) all_documents = [] for loader in loaders: print("Loading raw document..." + loader.file_path) raw_documents = loader.load() print("Splitting text...") text_splitter = CharacterTextSplitter( separator="\n\n", chunk_size=800, chunk_overlap=100, length_function=len, ) documents = text_splitter.split_documents(raw_documents) all_documents.extend(documents) print("Creating vectorstore...") embeddings = OpenAIEmbeddings() vectorstore = FAISS.from_documents(all_documents, embeddings) with open("vectorstore.pkl", "wb") as f: pickle.dump(vectorstore, f)
2
3
76,901,063
2023-8-14
https://stackoverflow.com/questions/76901063/issues-with-creating-a-custom-colorbar
I am trying to create a custom colorbar with discrete intervals using this resource (https://matplotlib.org/3.1.1/tutorials/colors/colorbar_only.html), but I am running into this error which references my 'cb2' line in my code: Error: "AttributeError: 'GeoContourSet' object has no attribute 'set'" import xarray as xr from sklearn.linear_model import LinearRegression import proplot as pplt import cartopy as ct import matplotlib as mpl import matplotlib.pyplot as plt import colormaps as cmaps import matplotlib.colors as colors fig, axs = pplt.subplots(ncols = 2, nrows = 1, axwidth = 7, proj='pcarree') ax1, ax2 = axs a = ax1.contourf(era_cape['lon'], era_cape['lat'], regression, add_colorbar = False, extend = 'both') axs.format(coast=True, latlim = (20,51), lonlim = (234,293), innerborders = True) axs.add_feature(ct.feature.OCEAN, zorder=100, edgecolor='k', color = 'white') axs.add_feature(ct.feature.COASTLINE, zorder=100, edgecolor='k') cmap1 = mpl.colors.ListedColormap(['purple','navy','slateblue','blue','skyblue','lightblue','aliceblue','yellow','gold','orange','orangered','red','firebrick','darksalmon']) bounds = [-600, -480, -320, -160, -80, -40, -20, 0, 20, 40, 80, 160, 320, 480, 600] norm = mpl.colors.BoundaryNorm(bounds, cmap1.N, extend = 'neither') cb2 = mpl.colorbar.ColorbarBase(a, cmap = cmap1, norm = norm, ticks = bounds, spacing = 'uniform') In this example, I am relatively new to coding and I am not sure if there is anything I need to do to my data to get it to 'fit' into my custom colorbar. If I I changed the cb2 line to read: cb2 = fig.colorbar(a, cmap = cmap, norm = norm, ticks = bounds) This gives me a colorbar but it is set to an old, cmap (RdBu) rather than the new one I have created. The ticks are visible but only go up about 1/4 way on the colorbar. If I change the line back to cb2 = mpl.colorbar.ColorbarBase() I get the attribute error mentioned above again. In this example, era_cape['lat'] and era_cape['lon'] is 2D data over the United States with data spanning from roughly -600 to 600. {'coords': {'lon': {'dims': ('lon',), 'attrs': {'units': 'degrees_east', 'short_name': 'lon', 'long_name': 'longitude'}, 'data': [0.0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5, 2.75, 3.0, 3.25, 3.5, 3.75, 4.0, 4.25, 4.5, 4.75, 5.0, 5.25, 5.5, 5.75, 6.0, 6.25, 6.5, 6.75, 7.0, 7.25, 7.5, 7.75, 8.0, 8.25, 8.5, 8.75, 9.0, 9.25, 9.5, 9.75]}, 'lat': {'dims': ('lat',), 'attrs': {'qualifier': 'Gaussian', 'units': 'degrees_north', 'short_name': 'lat', 'long_name': 'latitude'}, 'data': [90.0, 89.75, 89.5, 89.25, 89.0, 88.75, 88.5, 88.25, 88.0, 87.75, 87.5, 87.25, 87.0, 86.75, 86.5, 86.25, 86.0, 85.75, 85.5, 85.25, 85.0, 84.75, 84.5, 84.25, 84.0, 83.75, 83.5, 83.25, 83.0, 82.75, 82.5, 82.25, 82.0, 81.75, 81.5, 81.25, 81.0, 80.75, 80.5, 80.25]}}, 'attrs': {'title': 'p_cal_daily2monthly_era5.ncl', 'program'
There's a lot going on in your sample code, and without data to replicate the error it requires some guessing. So perhaps you can simplify it a little and use toy data (or publicly available otherwise)? I suspect the error comes from the fact that you pass the result from ax.contourf to mpl.colorbar.ColorBase. The latter expects an axes object as the argument, and that should be a dedicated axes for the colorbar, not the axes of you actual plot. But instead of an axes, you provide the result of contourf (a GeoContourSet), which is not appropriate and I'm guessing triggers the error you experienced. Often more high-level functions are used to avoid having to create a colorbar axes (cax) explicitly (see below). I'm not familiar with proplot myself, so in the example below I have removed it for simplicity. I don't think that matters in the end and you could probably add it back in for the functionality that you need. A simplified example, focusing on the colorbar specifically, is shown below: import cartopy.crs as ccrs import cartopy.feature as cfeature import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np # generate toy data, example data from: # https://matplotlib.org/stable/gallery/images_contours_and_fields/contourf_demo.html x = y = np.arange(-3.0, 3.01, 0.025) X, Y = np.meshgrid(x, y) Z1 = np.exp(-X**2 - Y**2) Z2 = np.exp(-(X - 1)**2 - (Y - 1)**2) Z = (Z1 - Z2) * 2 # scale toy example data to global lon = x * 180/3 lat = y * 90/3 regression = Z * 300 # define the colormap and scaling cmap = mpl.colors.ListedColormap([ 'purple','navy','slateblue','blue','skyblue','lightblue', 'aliceblue', 'yellow','gold','orange','orangered','red','firebrick','darksalmon', ]) bounds = [-600, -480, -320, -160, -80, -40, -20, 0, 20, 40, 80, 160, 320, 480, 600] norm = mpl.colors.BoundaryNorm(bounds, cmap.N) # create the figure and axes fig, ax = plt.subplots( 1,1, figsize=(8,5), facecolor="w", layout="compressed", subplot_kw=dict(projection=ccrs.PlateCarree()), ) # plot the contours, note that "cf" is also a mappable that could # be used instead of "im" below cf = ax.contourf( lon, lat, regression, cmap=cmap, norm=norm, transform=ccrs.PlateCarree(), ) # create the colorbar im = mpl.cm.ScalarMappable(cmap=cmap, norm=norm) cb = fig.colorbar( im, ax=ax, ticks=bounds, spacing="uniform", orientation="horizontal", shrink=0.8, ) ax.add_feature(cfeature.OCEAN, ec="k", fc="w", zorder=100) ax.add_feature(cfeature.COASTLINE, ec="k", zorder=100) Which results in: The use of fig.colorbar allows you to pass your main axes (for plotting), and makes Matplotlib automatically create a reasonable colorbar axes for you. You can also provide that axes yourself (cax=cax), but that's mainly useful if you need more control on the specific location for example, like placing it in/over your main axes.
3
1
76,886,864
2023-8-11
https://stackoverflow.com/questions/76886864/unable-to-install-pyinstaller-in-anaconda-environment
I have an environment in Anaconda called myEnv. I am trying to install pyinstaller to it. I tried all 3 of the options given here https://anaconda.org/conda-forge/pyinstaller for installing pyinstaller, however, none of them worked. This is what the message looks like in Anaconda Prompt: Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Collecting package metadata (repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. PackagesNotFoundError: The following packages are not available from current channels: -pyinstaller This is weird since I do have conda-forge as a channel: conda config --show channels The response is: channels: - defaults - conda-forge As a last resort, I tried pip install pyinstaller and that gave an error. WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 407 Proxy Authentication Required'))': /simple/pyinstaller/ ERROR: Could not find a version that satisfies the requirement pyinstaller (from versions: none) ERROR: No matching distribution found for pyinstaller WARNING: There was an error checking the latest version of pip. Edit: I also tried conda update conda but that still didn't seem to do the trick. Installing it from the Anaconda channel also didn't work: conda install -c anaconda pyinstaller It just gave me the same error when I tried to install pyinstaller from the conda-forge channel. Some Extra Details: My Python version: 3.10.9 Laptop: Windows 64
You can directly install pyinstaller from GitHub python -m pip install git+https://github.com/pyinstaller/pyinstaller If you're using a conda env file, you can also configure the package to be installed in the same way. name: myEnv channels: dependencies: - pip: - "--editable=git+https://github.com/pyinstaller/pyinstaller" # other dependencies
2
3
76,903,859
2023-8-15
https://stackoverflow.com/questions/76903859/a-quick-way-to-find-the-first-matching-submatrix-from-the-matrix
My matrix is simple, like: # python3 numpy >>> A array([[0., 0., 1., 1., 1.], [0., 0., 1., 1., 1.], [0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.]]) >>> P array([[0., 0., 0., 0.]]) I need to find an all-zero region(one is enough) in A with the same size as P (1x4). So the right answer include: (2, 0) # The vertex coordinates of the all-zero rectangular region that P can be matched (2, 1) (3, 0) (3, 1) (4, 0) (4, 1) # Just get any 1 answer Actually my A matrix will reach a size of 30,000*30,000. I'm worried that it will be slow if written as a loop statement. Is there any quick way? The size of P is uncertain, from 10*30 to 4000*80. At the same time, the A matrix lacks regularity, and looping from any point may require traversing the entire matrix to successfully match
As @Julien pointed out in the comment, in general, we can use sliding windows for this kind of task. def find_all_zero_region_by_sliding_window(a, shape): x, y = np.nonzero(np.lib.stride_tricks.sliding_window_view(a, shape).max(axis=-1).max(axis=-1) == 0) return np.stack((x, y), axis=-1) find_all_zero_region_by_sliding_window(A, P.shape) However, unfortunately, this requires a lot of memory. numpy.core._exceptions.MemoryError: Unable to allocate 11.3 TiB for an array with shape (26001, 29921, 4000) and data type float32 ^^^^^^^^ As an alternative, I think using the Summed-area table is a good idea. It is similar to the sliding window approach above, but instead of finding the maximum value, we can calculate the sum (very efficiently) and search for the position where it is zero. Note that this assumes that A does not contain any negative values. Otherwise, you would have to use numpy.abs. Since we do not need to be able to calculate the sum of any given position, I adapted this idea and implemented it to require only a single-line cache. import numpy as np from typing import Tuple def find_all_zero_region(arr: np.ndarray, kernel_size: Tuple[int, int]) -> np.ndarray: input_height, input_width = arr.shape kernel_height, kernel_width = kernel_size matches = [] # Calculate summed_line for y==0. summed_line = arr[:kernel_height].sum(axis=0) for y in range(input_height - kernel_height + 1): # Update summed_line for row y. if y != 0: # Except y==0, which already calculated above. # Adding new row and subtracting old row. summed_line += arr[y + kernel_height - 1] - arr[y - 1] # Calculate kernel_sum for (y, 0). kernel_sum = summed_line[:kernel_width].sum() if kernel_sum == 0: matches.append((y, 0)) # Calculate kernel_sum for (y, 1) to (y, right-edge). # Using the idea of a summed-area table, but in 1D (horizontally). (all_zero_region_cols,) = np.nonzero(kernel_sum + np.cumsum(summed_line[kernel_width:] - summed_line[:-kernel_width]) == 0) for col in all_zero_region_cols: matches.append((y, col + 1)) if not matches: # For Numba, output must be a 2d array. return np.zeros((0, 2), dtype=np.int64) return np.array(matches, dtype=np.int64) As you can see, this uses loops, but it should be much faster than you think because the required memory is relatively small and the number of calculations/comparisons is greatly reduced. Here is some timing code. import time rng = np.random.default_rng(0) A = rng.integers(0, 2, size=(30000, 30000)).astype(np.float32) P = np.zeros(shape=(4000, 80)) # Create an all-zero region in the bottom right corner which will be searched last. A[-P.shape[0] :, -P.shape[1] :] = 0 started = time.perf_counter() result = find_all_zero_region(A, P.shape) print(f"{time.perf_counter() - started} sec") print(result) # 3.541154200000001 sec # [[26000 29920]] Moreover, this function can be even faster by using Numba. Just add annotations as follows: import numba @numba.njit("int64[:,:](float32[:,:],UniTuple(int64,2))") def find_all_zero_region_with_numba(arr: np.ndarray, kernel_size: Tuple[int, int]) -> np.ndarray: ... started = time.perf_counter() find_all_zero_region_with_numba(A, P.shape) print(f"{time.perf_counter() - started} sec") # 1.6005743999999993 sec Note that I implemented it to find all positions of the all-zero regions, but you can also make it return on the first one. Since it uses loops, the average execution time will be even faster.
3
2
76,907,845
2023-8-15
https://stackoverflow.com/questions/76907845/python-split-function-to-extract-the-first-element-and-last-element
python code --> ",".join("one_two_three".split("_")[0:-1])
I want to take the 0th element and the last element of a delimited string and create a new string out of it. The code should handle a delimited string of any length.
Expected output: "one,three"
The above code gives me 'one,two'. Can someone please help me?
[0:-1] is a slice, it returns all the elements from 0 to the 2nd-to-last element. Without the third stride parameter, which is used for getting every Nth element, a slice can only return a contiguous portion of a list, not two separate items. You should select the first and last separately, then concatenate them. items = "one_two_three".split("_") result = f"{items[0]},{items[-1]}"
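If you prefer to keep the join-based style from the question, an equivalent sketch is:
parts = "one_two_three".split("_")
result = ",".join([parts[0], parts[-1]])
print(result)  # one,three
Either way, the key point is selecting the two elements explicitly instead of relying on a single slice.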
2
4
76,904,666
2023-8-15
https://stackoverflow.com/questions/76904666/find-all-decompositions-in-two-factors-of-a-number
For a given N, I am trying to find all pairs of positive integers a and b such that N = a*b. I start by decomposing into prime factors using sympy.ntheory.factorint, which gives me a dict factor -> exponent. I have this code already, but I don't want to get duplicates (a and b play the same role):
import itertools
import numpy as np
from sympy.ntheory import factorint

def find_decompositions(n):
    prime_factors = factorint(n)
    cut_points = {f: [i for i in range(1+e)] for f, e in prime_factors.items()}
    cuts = itertools.product(*cut_points.values())
    decompositions = [((a := np.prod([f**e for f, e in zip(prime_factors, cut)])), n//a) for cut in cuts]
    return decompositions
Example:
In [235]: find_decompositions(12)
Out[235]: [(1, 12), (3, 4), (2, 6), (6, 2), (4, 3), (12, 1)]
What I would like to get instead:
Out[235]: [(1, 12), (3, 4), (2, 6)]
I tried to halve the ranges in cut_points, with range ends such as e//2, 1 + e//2, (1+e)//2, 1 + (1+e)//2. None of them ended up working.
A simple solution is obviously to compute the same thing and return:
decompositions[:(len(decompositions)+1)//2]
but I am looking for a solution that reduces the number of computations instead.
You're using module sympy, which already has a divisors function: from sympy import divisors print(divisors(12)) # [1, 2, 3, 4, 6, 12] def find_decompositions(n): divs = divisors(n) half = (len(divs) + 1) // 2 # +1 because perfect squares have odd number of divisors return list(zip(divs[:half], divs[::-1])) print(find_decompositions(12)) # [(1, 12), (2, 6), (3, 4)] print(find_decompositions(100)) # [(1, 100), (2, 50), (4, 25), (5, 20), (10, 10)]
4
2
76,907,660
2023-8-15
https://stackoverflow.com/questions/76907660/sslerror-when-accessing-sidra-ibge-api-using-python-ssl-unsafe-legacy-rene
I've been using a Python script to access the SIDRA (IBGE) API and fetch data. It was working perfectly fine, but recently, without any changes on my part, I started encountering an SSL error. Here's the code I've been using: import requests url = 'https://servicodados.ibge.gov.br/api/v3/agregados' response = requests.get(url) print(response.json()) Upon running the code, I get the following error: SSLError: HTTPSConnectionPool(host='servicodados.ibge.gov.br', port=443): Max retries exceeded with url: /api/v3/agregados (Caused by SSLError(SSLError(1, '[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled (_ssl.c:1007)'))) Here are the things I've tried: Using the verify=False option with requests.get(). Updating the requests and urllib3 libraries. Trying other libraries like httpx and aiohttp. Checking the OpenSSL version. Nothing seems to solve the issue. I'm at a loss since the code used to work without any issues. Has anyone encountered this and knows a fix?
Try: import ssl import requests class TLSAdapter(requests.adapters.HTTPAdapter): def init_poolmanager(self, *args, **kwargs): ctx = ssl.create_default_context() ctx.set_ciphers("DEFAULT@SECLEVEL=1") ctx.options |= 0x4 # <-- the key part here, OP_LEGACY_SERVER_CONNECT kwargs["ssl_context"] = ctx return super(TLSAdapter, self).init_poolmanager(*args, **kwargs) url = "https://servicodados.ibge.gov.br/api/v3/agregados" with requests.session() as s: s.mount("https://", TLSAdapter()) print(s.get(url).json()) Prints: [{'id': 'D5', 'nome': 'Áreas Urbanizadas', 'agregados': [{'id': '8418', 'nome': 'Áreas urbanizadas, ...
3
2
76,880,837
2023-8-11
https://stackoverflow.com/questions/76880837/how-do-i-communicate-using-gql-with-apollo-websocket-protocol
I want to connect to the websocket. When I inspect the traffic between the client and the server I see that the first message is the handshake:
{"type": "connection_init",
 "payload": {
    "accept-language": "en",
    "ec-version": "5.1.88",
    "referrer": URL,
    }
}
Based on the format (the keys of the dict) I conclude that the websocket uses an Apollo websocket transport protocol. Next, I am following the websocket example of gql's documentation.
import asyncio
import logging

from gql import gql, Client
from gql.transport.websockets import WebsocketsTransport

logging.basicConfig(level=logging.INFO)

async def main():
    transport = WebsocketsTransport(url=URL, init_payload={'accept-language': 'en', 'ec-version': '5.1.88'})
    async with Client(transport=transport, fetch_schema_from_transport=False) as session:
        # do something

asyncio.run(main())
After reading more about the protocol here, I still don't understand how I can send messages to the server within my Python script. How do I send the message below to the websocket?
{
  "id": "1073897396",
  "type": "start",
  "operationName": operation,
  "eventId": 488
}
The EuropeanTour website does not use the Apollo websocket protocol for its API so you cannot use the standard WebsocketTransport class. What I found out: instead of returning a connection_ack message after the connection_init, it returns a wsIdentity message for a subscription, the client will first send a start message containing an operationName and an eventId, without any GraphQL query the server will then answer with a request-full-subscription message, requiring the client to send the full subscription query the client has then to send a full-subscription message containing the full requested query, including the operationName and a new field subscriptionName the id is not actually random, it is generated from hashing together the subscriptionName and the variables, from a json format with the dictionary keys sorted. It took me a while to figure it out as no error is generated if the id is not correct the server seems to cache the GraphQL subscription based on the IP address probably so it will not request the subscription with a request-full-subscription message if it has already received it previously Here is a Chrome screenshot showing the request-full-subscription messages: You can then create your own transport by inheriting the WebsocketTransport class and making the necessary modifications. Here is an example of working code: import asyncio import json import logging from typing import Any, AsyncGenerator, Dict, Optional, Tuple from graphql import DocumentNode, ExecutionResult, print_ast from gql import Client, gql from gql.transport.exceptions import TransportProtocolError from gql.transport.websockets import WebsocketsTransport logging.basicConfig(level=logging.INFO) class EuropeanTourWebsocketsTransport(WebsocketsTransport): def _hash(self, e): t = 5381 r = len(e) while r: r -= 1 t = t * 33 ^ ord(e[r]) return t & 0xFFFFFFFF def _calc_id(self, subscription_name, variables): obj = { "subscriptionName": subscription_name, "variables": variables, } obj_stringified = json.dumps( obj, separators=(",", ":"), sort_keys=True, ) hashed_value = self._hash(obj_stringified) return hashed_value async def _send_query( self, document: DocumentNode, variable_values: Optional[Dict[str, Any]] = None, operation_name: Optional[str] = None, ) -> int: # Calculate the id by hashing the subscription name and the variables query_id = self._calc_id(self.latest_subscription_name, variable_values) # Creating the payload for the full subscription payload: Dict[str, Any] = {"query": print_ast(document)} if variable_values: payload["variables"] = variable_values if operation_name: payload["operationName"] = operation_name payload["subscriptionName"] = self.latest_subscription_name # Saving the full query first and waiting for the server to request it later self.saved_full_subscriptions[str(query_id)] = payload # Then first start to request the subscription only with the operation name query_str = json.dumps( { "id": str(query_id), "type": "start", "operationName": operation_name, "eventId": self.latest_event_id, } ) await self._send(query_str) return query_id async def subscribe( self, document: DocumentNode, *, variable_values: Optional[Dict[str, Any]] = None, operation_name: str, subscription_name: str, event_id: int, send_stop: Optional[bool] = True, ) -> AsyncGenerator[ExecutionResult, None]: self.latest_event_id = event_id self.latest_subscription_name = subscription_name async for result in super().subscribe( document, variable_values=variable_values, operation_name=operation_name, 
send_stop=send_stop, ): yield result async def _wait_ack(self) -> None: self.saved_full_subscriptions = {} while True: init_answer = await self._receive() answer_type, answer_id, execution_result = self._parse_answer(init_answer) if answer_type == "wsIdentity": return raise TransportProtocolError( "Websocket server did not return a wsIdentity response" ) def _parse_answer( self, answer: str ) -> Tuple[str, Optional[int], Optional[ExecutionResult]]: try: json_answer = json.loads(answer) except ValueError: raise TransportProtocolError( f"Server did not return a GraphQL result: {answer}" ) if "wsIdentity" in json_answer: return ("wsIdentity", json_answer["wsIdentity"], None) elif ( "type" in json_answer and json_answer["type"] == "request-full-subscription" ): return ("request-full-subscription", json_answer["id"], None) else: return self._parse_answer_apollo(json_answer) async def send_full_subscription(self, answer_id: str): if answer_id not in self.saved_full_subscriptions: raise Exception(f"Full subscription not found for id {answer_id}") payload = self.saved_full_subscriptions[answer_id] query_str = json.dumps( {"id": answer_id, "type": "full-subscription", "payload": payload} ) await self._send(query_str) async def _handle_answer( self, answer_type: str, answer_id: Optional[int], execution_result: Optional[ExecutionResult], ) -> None: if answer_type == "request-full-subscription": await self.send_full_subscription(answer_id) else: await super()._handle_answer(answer_type, answer_id, execution_result) async def main(): transport = EuropeanTourWebsocketsTransport( url="wss://btec-websocket.services.imgarena.com", init_payload={ "accept-language": "en", "ec-version": "5.1.88", "operator": "europeantour", "referrer": "https://www.europeantour.com/", "sport": "GOLF", }, ) async with Client( transport=transport, fetch_schema_from_transport=False ) as session: query = gql( """ subscription ShotTrackerSubscribeToGolfTournamentGroupScores($input: SubscribeToGolfTournamentGroupScoresInput!) { subscribeToGolfTournamentGroupScores(input: $input) { groupId l1Course teamId players { id lastName firstName } roundScores { courseId roundNo toParToday { value } holesThrough { value } startHole holes { holePar holeStrokes holeOrder holeNumber } isPlayoff } toPar { value } tournamentPosition { format value displayValue } status } } """ ) variables = { "input": { "teamId": 21, "tournamentId": 488, "roundNo": 4, }, } async for result in session.subscribe( query, operation_name="ShotTrackerSubscribeToGolfTournamentGroupScores", variable_values=variables, subscription_name="subscribeToGolfTournamentGroupScores", event_id=488, ): print(result) asyncio.run(main())
4
3
76,906,166
2023-8-15
https://stackoverflow.com/questions/76906166/is-there-a-way-for-me-to-dry-common-imports-in-my-python-files
In C# the concept of global using can be used to DRY (do not repeat yourself) many common using statements across many files. Now we have many similar imports across many Python files. Is there a similar strategy that we can use to reduce our boilerplate code?
You could create a "prelude" module whose only purpose is to be glob imported by other modules. This idea can be found in other programming languages, like std::prelude from Rust and Prelude from Haskell. For example, create a prelude.py file with contents like from .sprockets import LeftSprocket, RightSprocket, make_sprocket from .widgets import FooWidget, BarWidget from .util import sprocket2widget, widget2sprokect Then from whatever script you want to use it from, import it as from my_package.my_module.prelude import * All of the names from the prelude (LeftSprocket, FooWidget, etc.) will be immediately available. This approach has some nice advantages: It's entirely optional to the end users. Users can freely decide between using the prelude vs explicitly importing the modules they need. It's non-invasive to the rest of your package. Providing a prelude doesn't require modifying any other modules or otherwise re-organizing your package. That said, I wouldn't necessarily recommend creating preludes for all your projects. There are some significant disadvantages that it comes with: There's now multiple modules that export the same names, so IDEs may have a harder time automatically completing imports. You may end up with your IDE suggesting things like my_package.my_module.prelude import FooWidget instead of my_package.my_module.widget import FooWidget. This can also be confusing to users, who won't know if the FooWidgets from both modules are the same or different without investigating both. It makes it less obvious where global names are coming from. If you glob-import a prelude, there's no easy way to tell what names are available and what modules they come from. This goes against the Zen of Python, Explicit is better than implicit. There's a risk of name collisions. Consider for example this recent question. The issue there had to do with the OP including both the imports from PIL import Image and from tkinter import *, where the latter silently re-defined Image to be tkinter.Image. See also Why is "import *" bad?.
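A hedged follow-up to the name-collision concern (my addition, not part of the original answer): one common mitigation is to declare __all__ in the prelude so the glob import only exposes an explicit, curated set of names. The module and class names below are the same hypothetical ones used above.

# prelude.py -- sketch; module/class names are hypothetical
from .sprockets import LeftSprocket, RightSprocket, make_sprocket
from .widgets import FooWidget, BarWidget

# Only these names are exported by `from my_package.my_module.prelude import *`
__all__ = [
    "LeftSprocket",
    "RightSprocket",
    "make_sprocket",
    "FooWidget",
    "BarWidget",
]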
7
13
76,884,972
2023-8-11
https://stackoverflow.com/questions/76884972/let-a-passed-function-object-be-called-as-a-method
EDIT: I extended the example to show the more complex case that I was talking about before. Thanks for the feedback in the comments. Some more context: I am mostly interested in this on a theoretical level. I would be happy with an answer like "This is not possible because ...", giving some detailed insights about python internals. For my concrete problem I have a solution, as the library allows to pass a class which then can do what I want. However, if there would be a way to use the simpler interface of just passing the function which gets bound, I would save many lines of code. (For the interested: This question is derived from the sqlalchemy extension associationproxy and the creator function parameter.) # mylib.py from typing import Callable class C: def __init__(self, foo: Callable): self.foo = foo def __get__(self, instance, owner): # simplified return CConcrete(self.foo) class CConcrete: foo: Callable def __init__(self, foo: Callable): self.foo = foo def bar(self): return self.foo() # main.py from mylib import C def my_foo(self): return True if self else False class House: window = C(my_foo) my_house = House() print(my_house.window.bar()) This code gives the error my_foo() missing 1 required positional argument: 'self' How can I get my_foo be called with self without changing the class C itself? The point is that a class like this exists in a library, so I can't change it. In fact it's even more complicated, as foo gets passed down and the actual object where bar exists and calls self.foo is not C anymore. So the solution can also not include assigning something to c after creation, except it would also work for the more complex case described.
You can bind the method to the instance manually, since it won't be living in the class __dict__: self.foo = foo.__get__(self) This is assuming you don't want to monkeypatch the entire class. If you do, assign c.foo = my_foo instead of passing to the instance initializer. You could also conceivably use inheritance: class C(C): def __init__(self): super().__init__(self.foo) def foo(self): return True
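As a sketch of an equivalent, slightly more explicit spelling (my addition, simplified relative to the question's C/CConcrete setup; the class name Holder is purely illustrative), types.MethodType performs the same binding that __get__ does:

import types

def my_foo(self):
    return True if self else False

class Holder:
    def __init__(self, foo):
        # types.MethodType(foo, self) is the explicit spelling of foo.__get__(self)
        self.foo = types.MethodType(foo, self)

    def bar(self):
        return self.foo()

print(Holder(my_foo).bar())  # True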
4
2
76,892,486
2023-8-13
https://stackoverflow.com/questions/76892486/does-pynacl-release-gil-and-should-it-be-used-with-multithreading
Does PyNaCl release the Global Interpreter Lock? Will it be suitable to use multithreading for encryption using PyNaCl's SecretBox? I want to encrypt relatively large amounts of data (~500 MB) using PyNaCl. For this, I divide it into chunks of around 5MB and encrypt them using ThreadPoolExecutor(). Is this ideal? I don't know if PyNaCl releases Python's GIL and whether it will actually encrypt the chunks in parallel resulting in performance gains. Edit: To avoid confusion, let me clear the experimental results. After testing the current implementation a bunch of times, I found out that it was slightly faster than encrypting the entire data directly, and extremely faster than a simple for loop. However, I need hard evidence (reference to documentation, or some kind of test) to prove that the task is indeed running in parallel and GIL is not blocking performance. This is my current implementation using ThreadPoolExecutor() from concurrent.futures import ThreadPoolExecutor from os import urandom from typing import Tuple from nacl import secret from nacl.bindings import sodium_increment from nacl.secret import SecretBox def encrypt_chunk(args: Tuple[bytes, SecretBox, bytes, int]): chunk, box, nonce, macsize = args try: outchunk = box.encrypt(chunk, nonce).ciphertext except Exception as e: err = Exception("Error encrypting chunk") err.__cause__ = e return err if not len(outchunk) == len(chunk) + macsize: return Exception("Error encrypting chunk") return outchunk def encrypt( data: bytes, key: bytes, nonce: bytes, chunksize: int, macsize: int, ): box = SecretBox(key) args = [] total = len(data) i = 0 while i < total: chunk = data[i : i + chunksize] nonce = sodium_increment(nonce) args.append((chunk, box, nonce, macsize,)) i += chunksize executor = ThreadPoolExecutor(max_workers=4) out = executor.map(encrypt_chunk, args) executor.shutdown(wait=True) return out
PyNaCl uses the Common Foreign Function Interface (CFFI) to provide bindings to the C library Libsodium. We can see that the SecretBox function is basically a binding for the crypto_secretbox() function of the Libsodium library. As per CFFI documentation: [2] C function calls are done with the GIL released. Since most of the functions from PyNaCl are bindings to the Libsodium library using CFFI, they will release the Global Interpreter Lock during the execution of the C function. This should explain the performance improvements from multithreading.
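If evidence beyond the documentation is wanted, a rough empirical check (my sketch, not from the answer; timings and the degree of speed-up will vary by machine) is to time sequential encryption against the threaded version -- a clear multi-core speed-up is only possible if the GIL is released during the C calls:

import time
from concurrent.futures import ThreadPoolExecutor
from nacl.secret import SecretBox
from nacl.utils import random as nacl_random

box = SecretBox(nacl_random(SecretBox.KEY_SIZE))
chunks = [nacl_random(5 * 1024 * 1024) for _ in range(32)]  # 32 chunks of 5 MB

t0 = time.perf_counter()
for c in chunks:
    box.encrypt(c)
seq = time.perf_counter() - t0

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as ex:
    list(ex.map(box.encrypt, chunks))
par = time.perf_counter() - t0

print(f"sequential: {seq:.2f}s  threaded: {par:.2f}s")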
3
5
76,904,262
2023-8-15
https://stackoverflow.com/questions/76904262/get-the-value-of-a-column-based-on-min-max-values-of-another-column-of-a-pandas
The pandas dataframe: data = pd.DataFrame ({ 'group': ['A', 'A', 'B', 'B', 'C', 'C'], 'date': ['2023-01-15', '2023-02-20', '2023-01-10', '2023-03-05', '2023-02-01', '2023-04-10'], 'value': [10, 15, 5, 25, 8, 12]} ) Trying to get the values of the 'value' column based on the min and max values of 'date' column for each 'group' in an aggregate function: ## the following doesn't work output = ( df .groupby(['group'],as_index=False).agg( ## there are some other additional aggregate functions happening here too. value_at_min = ('value' , lambda x: x.loc[x['date'].idxmin()]) , value_at_max = ('value' , lambda x: x.loc[x['date'].idxmax()]) )) This doesn't work, even with converting date to datetime (in fact, my original date column is in datetime format). Desired output should be: group min_date max_date value_at_min value_at_max 0 A 2023-01-15 2023-02-20 10 15 1 B 2023-01-10 2023-03-05 5 25 2 C 2023-02-01 2023-04-10 8 12
Sort the dataframe by date then groupby and aggregate with nth to get the rows corresponding to min and max dates g = data.sort_values(['date']).groupby('group') g.nth(0).merge(g.nth(-1), on='group', suffixes=['_min', '_max']) date_min value_min date_max value_max group A 2023-01-15 10 2023-02-20 15 B 2023-01-10 5 2023-03-05 25 C 2023-02-01 8 2023-04-10 12
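If the exact column names from the desired output are wanted (min_date, max_date, value_at_min, value_at_max), a hedged alternative I would add is a named aggregation over idxmin/idxmax on the OP's data frame (the to_datetime step is only needed while date is still a string):

import pandas as pd

data['date'] = pd.to_datetime(data['date'])

out = data.groupby('group', as_index=False).agg(
    min_date=('date', 'min'),
    max_date=('date', 'max'),
    value_at_min=('date', 'idxmin'),
    value_at_max=('date', 'idxmax'),
)
# idxmin/idxmax return row labels; map them back to the 'value' column
out['value_at_min'] = data.loc[out['value_at_min'], 'value'].to_numpy()
out['value_at_max'] = data.loc[out['value_at_max'], 'value'].to_numpy()
print(out)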
2
3
76,881,483
2023-8-11
https://stackoverflow.com/questions/76881483/whats-the-correct-way-to-type-hint-an-empty-list-as-a-literal-in-python
I have a function that always returns an empty list (it's a long story why), I could type hint as usual with just "list", but it would be useful to indicate that the list is always going to be the same. My first thought was to use literal like this: from typing import Literal def get_empty_list() -> Literal[[]]: return [] Mypy flags this as an invalid type, is there a correct way to type hint an always empty list? (obviously, I could type hint just a list, but this is less helpful) To be explicit, this is a list that is always empty and doesn't expect to have any elements. (As seperate for example, from a list that is currently empty, but might have elements of some type added later on).
If you want a type that expresses "list that doesn't have elements now, but might have elements added later", then that's fundamentally not a static type. Appending elements to such a list would be valid, but would cause the list to stop being an element of that type, which should not happen with static types in a program that respects static typing. If you want something that expresses "list that has no elements, and will never have elements", you can kind of do it: from typing import Never def get_empty_list() -> list[Never]: return [] Here, we've annotated the element type as typing.Never, a type with no values. This is actually the type mypy infers for an empty list literal if there's no context to suggest another type, as you can see with reveal_type: from typing import Never x: list[Never] reveal_type(x) reveal_type([]) mypy output: main.py:4: note: Revealed type is "builtins.list[<nothing>]" main.py:5: note: Revealed type is "builtins.list[<nothing>]" Success: no issues found in 1 source file Here are the implications of using a list[Never] type: Static type checkers will not allow operations that add elements to a list annotated as list[Never]. They will allow operations like get_empty_list().append(sys.exit()), where the argument to append is an expression that cannot evaluate to a value. They will recognize that get_empty_list()[0] cannot evaluate to a value... but this is very different from considering it a prohibited operation. In fact, rather than prohibiting the operation, they will allow it, in almost any context. After all, if an expression cannot produce a value, then it cannot produce a value of the wrong type, no matter what the right type is. So you can do stuff like x: int = get_empty_list()[0] and from a static typing perspective, that's perfectly valid. ([][0] is considered valid too.) They will not recognize that len(get_empty_list()) must be 0.
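As a small illustration of the first bullet (my addition; the exact mypy wording varies by version, and typing.Never needs Python 3.11+ or typing_extensions), appending to the returned list is rejected:

from typing import Never

def get_empty_list() -> list[Never]:
    return []

xs = get_empty_list()
xs.append(1)
# mypy: Argument 1 to "append" of "list" has incompatible type "int"; expected "Never"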
4
10
76,902,113
2023-8-14
https://stackoverflow.com/questions/76902113/why-are-there-no-arrays-of-objects-in-python
The Python module array supports arrays, which are, unlike lists, stored in a contiguous manner, giving performance characteristics which are often more desirable. But the types of elements are limited to those listed in the documentation. I can see why the types have to be ones with constant (or at least bounded from above) representation size. But don't pointers fall into that category? (The main implementation is written in C which, admittedly, allows pointers to different types to have different sizes, but they're all (perhaps except pointers to C functions, but that's not an issue for this question) convertible to intptr_t.) So, given boxing, arrays of arbitrary Python objects could be easily implemented, right? So why aren't they?
Because we already have lists, which are exactly an array of pointers. Python's array module stores the data itself contiguously; Python objects have dynamic sizes and cannot be stored contiguously, so we have lists, which store the pointers. The main use of array is for passing C-style arrays around between different C functions; arrays are slower than lists for numeric types due to the boxing and unboxing, so it has no use in any pure Python code.
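A tiny illustration of the difference (my sketch): array requires a numeric/character typecode and stores raw values, while a list happily holds references to arbitrary objects:

from array import array

a = array('d', [1.0, 2.0, 3.0])   # contiguous C doubles
try:
    array('O', [object(), object()])  # there is no "object pointer" typecode
except ValueError as e:
    print(e)  # e.g. bad typecode (must be b, B, u, h, H, i, I, l, L, q, Q, f or d)

b = [object(), "mixed", 3.14]  # a list is already an array of PyObject* under the hood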
3
4
76,901,667
2023-8-14
https://stackoverflow.com/questions/76901667/why-cast-return-value-of-len-to-int-in-python
In the source code of Stable Baselines3, in common/preprocessing.py, on line 158, there is: return (int(len(observation_space.nvec)),). As far as I know, len can only return an int in Python, and even if that is not true in general, it would return an int here (I might be wrong on both counts). If that is the case, the cast to int does not make sense to me. Am I missing something here?
The call to int is not necessary. len always returns an int. If a class's __len__ implementation returns something other than an int, Python will attempt to convert it to an int. If that can't be done, a TypeError will be raised. There's no scenario where len can return something other than an int. You can try to violate this yourself: class Foo: def __len__(self): return "bad" >>> len(Foo()) Traceback (most recent call last) ... TypeError: 'str' object cannot be interpreted as an integer
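To illustrate the "Python will attempt to convert it" part (my sketch, checked against CPython behaviour, which relies on __index__), returning an int-like object is accepted, and len still hands back a plain int:

class IntLike:
    def __index__(self):
        return 3

class Bar:
    def __len__(self):
        return IntLike()  # not an int, but convertible via __index__

print(len(Bar()), type(len(Bar())))  # 3 <class 'int'>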
2
3
76,878,713
2023-8-10
https://stackoverflow.com/questions/76878713/tkinter-extension-was-not-compiled-and-gui-subsystem-has-been-detected-missing
I am trying to install (through pyenv) Python-3.11.4 under CentOS-7. It installs but without GUI. I get the following error message: Installing Python-3.11.4... Traceback (most recent call last): File "<string>", line 1, in <module> File "/.../pyenv/versions/3.11.4/lib/python3.11/tkinter/__init__.py", line 38, in <module> import _tkinter # If this fails your Python may not be configured for Tk ^^^^^^^^^^^^^^^ ModuleNotFoundError: No module named '_tkinter' WARNING: The Python tkinter extension was not compiled and GUI subsystem has been detected. Missing the Tk toolkit? Installed Python-3.11.4 to /.../pyenv/versions/3.11.4 While Python-3.9.16 installs successfully on the same machine. According to Python 3.11 Build Changes, the requirement is to have "Tcl/Tk version 8.5.12 or newer" installed. I have $ rpm -q tk tk-devel tcl tcl-devel tk-8.5.13-6.el7.x86_64 tk-devel-8.5.13-6.el7.x86_64 tcl-8.5.13-8.el7.x86_64 tcl-devel-8.5.13-8.el7.x86_64 The same page says "Tcl/Tk, and uuid flags are detected by pkg-config (when available). tkinter now requires a pkg-config command to detect development settings for Tcl/Tk headers and libraries.", which is also installed: $ rpm -q pkgconfig pkgconfig-0.27.1-4.el7.x86_64 Could you please help me to understand what might be the reason of the failure to install _tkinter? Thank you very much for your help!
To successfully build Python-3.11.4 under CentOS-7, one needs to set the following environment variables: export CPPFLAGS="$(pkg-config --cflags openssl11) -I/usr/include" export LDFLAGS="$(pkg-config --libs openssl11) -L/usr/lib64 -ltcl8.5 -ltk8.5" where the openssl11 parts are needed for the _ssl module, and the rest is needed for the _tkinter module. The pieces needed to build Python with Tcl/Tk were found in /usr/lib64/tclConfig.sh /usr/lib64/tkConfig.sh To check if tkinter was built successfully, do the following: /path/to/python/3.11.4/bin/python3 -m tkinter It seems the root cause of the problem is that the tcl and tk rpm packages in CentOS-7 do not provide the corresponding pkg-config files: /usr/lib64/pkgconfig/tcl.pc /usr/lib64/pkgconfig/tk.pc and so one has to provide the corresponding information manually.
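As an additional quick check (my sketch) once the build finishes, the Tcl/Tk version actually linked can be confirmed from within the freshly built interpreter and compared against the 8.5.12+ requirement:

import tkinter
import _tkinter

print(tkinter.TclVersion, tkinter.TkVersion)      # e.g. 8.5 8.5
print(_tkinter.TCL_VERSION, _tkinter.TK_VERSION)  # e.g. '8.5' '8.5'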
9
3
76,898,926
2023-8-14
https://stackoverflow.com/questions/76898926/polars-exceptions-shapeerror-unable-to-vstack-dtypes-for-column-a-dont-mat
Unable to concatenate the polars dataframes with data types of a column being f64 and i64. I have two pandas dataframes df1, df2 in pandas, where column 'a' in df1 is float and in df2 is int, when I perform pd.concat([df1, df2]) it works. However, when I try the same operation on polars dataframes, it is throwing the following error: exceptions.ShapeError: unable to vstack, dtypes for column "a" don't match: f64 and i64 pandas code: import pandas as pd df1 = pd.DataFrame({'a': [1.0, 2.0, 3.0], 'b': [1, 2, 3]}) df2 = pd.DataFrame({'a': [1, 2, 3], 'b': [1, 2, 3]}) pd.concat() produces the following output: pd.concat([pd_df1, pd_df2]) a b 0 1.00000000 1 1 2.00000000 2 2 3.00000000 3 0 1.00000000 1 1 2.00000000 2 2 3.00000000 3 polars code: import polars as pl df1 = pl.DataFrame({'a': [1.0, 2.0, 3.0], 'b': [1, 2, 3]}) df2 = pl.DataFrame({'a': [1, 2, 3], 'b': [1, 2, 3]}) pl.concat() is producing the error, unlike pandas. pl.concat([df1, df2]) Traceback (most recent call last): File "C:\Users\user\.conda\envs\dev\lib\site-packages\IPython\core\interactiveshell.py", line 3362, in run_code async def run_code(self, code_obj, result=None, *, async_=False): File "<ipython-input-16-4301449ba376>", line 1, in <cell line: 1> pl.concat([df1, df2]) File "C:\Users\user\.conda\envs\dev\lib\site-packages\polars\functions\eager.py", line 22, in concat def concat( exceptions.ShapeError: unable to vstack, dtypes for column "a" don't match: `f64` and `i64` Here, I am fetching the data from database for various tables and creating a list of dataframes before concatenating them. Kindly help me with a solution where I could have the feasibility of not hard coding the column name in such scenarios.
You can use the vertical_relaxed strategy. pl.concat([df1, df2], how="vertical_relaxed") shape: (6, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b β”‚ β”‚ --- ┆ --- β”‚ β”‚ f64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════║ β”‚ 1.0 ┆ 1 β”‚ β”‚ 2.0 ┆ 2 β”‚ β”‚ 3.0 ┆ 3 β”‚ β”‚ 1.0 ┆ 1 β”‚ β”‚ 2.0 ┆ 2 β”‚ β”‚ 3.0 ┆ 3 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
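Applied to the scenario described in the question (a list of frames fetched from different database tables), this needs no per-column handling or hard-coded column names. A hedged, runnable sketch with stand-in frames:

import polars as pl

# Stand-ins for frames fetched from different database tables
frames = [
    pl.DataFrame({"a": [1.0, 2.0], "b": [1, 2]}),
    pl.DataFrame({"a": [3, 4], "b": [3, 4]}),  # "a" is i64 here
]

# vertical_relaxed coerces mismatched dtypes to a common supertype per column (i64 -> f64)
combined = pl.concat(frames, how="vertical_relaxed")
print(combined)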
3
4
76,898,029
2023-8-14
https://stackoverflow.com/questions/76898029/ultralytics-yolov8-probs-attribute-returning-none-for-object-detection
I'm using the Ultralytics YOLOv8 implementation to perform object detection on an image. However, when I try to retrieve the classification probabilities using the probs attribute from the results object, it returns None. Here's my code: from ultralytics import YOLO # Load a model model = YOLO('yolov8n.pt') # pretrained YOLOv8n model # Run batched inference on a list of images results = model('00000.png') # return a list of Results objects # Process results list for result in results: boxes = result.boxes # Boxes object for bbox outputs masks = result.masks # Masks object for segmentation masks outputs keypoints = result.keypoints # Keypoints object for pose outputs probs = result.probs # Probs object for classification outputs print(probs) When I run the above code, the output for print(probs) is None. The remaining output is image 1/1 00000.png: 640x640 1 person, 1 zebra, 7.8ms Speed: 2.6ms preprocess, 7.8ms inference, 1.3ms postprocess per image at shape (1, 3, 640, 640) Why is the probs attribute returning None, and how can I retrieve the classification probabilities for each detected object? Is there a specific design reason behind this behavior in the Ultralytics YOLOv8 implementation?
I think you should be able to get the confidences with results[0].boxes.conf. The probs attribute is only populated for classification models in YOLOv8; for detection models it stays None.
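To expand on that with a hedged sketch (based on the current Ultralytics Results API; attribute names may shift between releases), the per-detection class ids, names and confidences can be pulled from the boxes object:

from ultralytics import YOLO

model = YOLO('yolov8n.pt')
result = model('00000.png')[0]

for box in result.boxes:
    cls_id = int(box.cls[0])   # class index
    conf = float(box.conf[0])  # detection confidence
    print(result.names[cls_id], conf)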
3
2
76,897,416
2023-8-14
https://stackoverflow.com/questions/76897416/avoiding-loops-in-dictionary-generation-from-child-parent-relationships
I am trying to create a dictionary tree from child-parent relationships, but the issue here is that the child can contain one of its predecessors, generating loops that I would like to avoid. list_child_parent = [(2,1),(3,2),(4,2),(5,3),(6,4),(2,6)] def make_map(list_child_parent): has_parent = set() all_items = {} for child, parent in list_child_parent: if parent not in all_items: all_items[parent] = {} if child not in all_items: all_items[child] = {} if parent not in all_items[child]: all_items[parent][child] = all_items[child] has_parent.add(child) else: continue result = {} for key, value in all_items.items(): if key not in has_parent: result[key] = value return result make_map(list_child_parent) The code works well if a child contains its parent but not if it is its grandparent... I think another approach is needed. The expected result could be any of the following, if it makes sense: {1: {2: {3: {5: {}}, 4: {6: {}}}}} {1: {2: {3: {5: {}}, 4: {6: 2}}}} {1: {2: {3: {5: {}}, 4: {6: {2:{}}}}}} The current result is: {1: {2: {3: {5: {}}, 4: {6: {2: {...}}}}}}
You can use a set to keep track of all the children, so that you can make the dict of descendants an empty dict if the current child has previously been a child of another parent: def make_map(list_child_parent): mapping = {} children = set() for child, parent in list_child_parent: mapping.setdefault(parent, {})[child] = ( {} if child in children else mapping.setdefault(child, {}) ) children.add(child) return { parent: mapping[parent] for parent in mapping.keys() - children } so that: make_map([(2, 1), (3, 2), (4, 2), (5, 3), (6, 4), (2, 6)]) returns: {1: {2: {3: {5: {}}, 4: {6: {2: {}}}}}} Demo: Try it online!
3
2
76,896,195
2023-8-14
https://stackoverflow.com/questions/76896195/python-unpack-list-of-dicts-containing-individual-np-arrays
Is there a pure numpy way that I can use to get to this expected outcome? Right now I have to use Pandas and I would like to skip it. import pandas as pd import numpy as np listOfDicts = [{'key1': np.array(10), 'key2': np.array(10), 'key3': np.array(44)}, {'key1': np.array(2), 'key2': np.array(15), 'key3': np.array(22)}, {'key1': np.array(25), 'key2': np.array(25), 'key3': np.array(11)}, {'key1': np.array(35), 'key2': np.array(55), 'key3': np.array(22)}] Use pandas to parse: # pandas can unpack simply df = pd.DataFrame(listOfDicts) # get all values under the same key xd = df.to_dict('list') # ultimate goal np.stack([v for k, v in xd.items() if k not in ['key1']], axis=1) array([[10, 44], [15, 22], [25, 11], [55, 22]]) # I would like listOfDicts to transform temporarily into this with pure numpy, # from which I could do basically anything to it: {'key1': [np.array([10, 2, 25, 35])], 'key2': [np.array([10, 15, 25, 55])], 'key3': [np.array([44, 22, 11, 22])] }
One way to turn your dataframe into a dictionary of numpy arrays is to transpose it and use DataFrame.agg() to merge the columns: import numpy as np import pandas as pd listOfDicts = [{'key1': np.array(10), 'key2': np.array(10), 'key3': np.array(44)}, {'key1': np.array(2), 'key2': np.array(15), 'key3': np.array(22)}, {'key1': np.array(25), 'key2': np.array(25), 'key3': np.array(11)}, {'key1': np.array(35), 'key2': np.array(55), 'key3': np.array(22)}] df = pd.DataFrame(listOfDicts) df.transpose().agg(np.stack, axis=1).to_dict() # {'key1': array([10, 2, 25, 35]), # 'key2': array([10, 15, 25, 55]), # 'key3': array([44, 22, 11, 22])} If you just want the values, you can pull those out and stack them and then slice and dice with numpy: np.stack(df.transpose().agg(np.stack, axis=1).values) # array([[10, 2, 25, 35], # [10, 15, 25, 55], # [44, 22, 11, 22]])
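Since the question asks to skip pandas altogether, here is a hedged pandas-free sketch (my addition, assuming every dict has the same keys, as in the sample data) that reaches the same intermediate dictionary and the final stacked array:

import numpy as np

listOfDicts = [{'key1': np.array(10), 'key2': np.array(10), 'key3': np.array(44)},
               {'key1': np.array(2),  'key2': np.array(15), 'key3': np.array(22)},
               {'key1': np.array(25), 'key2': np.array(25), 'key3': np.array(11)},
               {'key1': np.array(35), 'key2': np.array(55), 'key3': np.array(22)}]

# one array per key, stacked across the list of dicts
xd = {k: np.stack([d[k] for d in listOfDicts]) for k in listOfDicts[0]}
# {'key1': array([10,  2, 25, 35]), 'key2': array([10, 15, 25, 55]), 'key3': array([44, 22, 11, 22])}

# the "ultimate goal": all columns except key1, side by side
goal = np.stack([v for k, v in xd.items() if k != 'key1'], axis=1)
# array([[10, 44], [15, 22], [25, 11], [55, 22]])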
2
1