question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---
79,260,585 | 2024-12-7 | https://stackoverflow.com/questions/79260585/why-selenium-webdriver-is-not-imported | >>> import selenium >>> selenium.webdriver Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'selenium' has no attribute 'webdriver' >>> from selenium import * >>> webdriver Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'webdriver' is not defined >>> from selenium import webdriver >>> webdriver <module 'selenium.webdriver' from 'C:\\Python312\\Lib\\site-packages\\selenium\\webdriver\\__init__.py'> In this code why selenium doesn't import webdriver unless if asked explicitly. According to my current understanding, if not asked, in python all classes/functions should load. then why webdriverd was not imported when we import all classes/functions from module unless asked explicitly. I was expecting webdriver to be load when i load selenium module or all classes/functions from selenium module but webdriver wasn't imported and is only loaded when asked explicitly. Edit: To add more context, I am using Windows OS. Also, I don't have a file named selenium.py in the current directory, which could conflict with the selenium module. | According to my current understanding, if not asked, in python all classes/functions should load. The flaw in your understanding is that selenium.webdriver is not a class or function. In reality, selenium.webdriver is a module, and it has to be imported, rather than used as if it was an object. Note that importing the selenium module doesn't import its child modules; i.e. webdriver, webdriver.common and so on. If you thought that it would or should do that, your understanding is incorrect. Try getting started with Selenium by copying the "simple usage" example provided in https://selenium-python.readthedocs.io/getting-started.html#simple-usage: from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.by import By driver = webdriver.Firefox() driver.get("http://www.python.org") assert "Python" in driver.title elem = driver.find_element(By.NAME, "q") elem.clear() elem.send_keys("pycon") elem.send_keys(Keys.RETURN) assert "No results found." not in driver.page_source driver.close() It seems to me that you need to do some more work on your understanding of Python modules, how they work, and how you use them. | 1 | 4 |
79,260,345 | 2024-12-7 | https://stackoverflow.com/questions/79260345/try-to-write-a-generic-free-function-in-cython-but-get-an-error | error info: .\tools.c(17775): error C2069: Cast from "void" to non-void .\tools.c(17775): error C2036: "void *" : unknown size my code: This function attempts to free the memory of one-dimensional array or two-dimensional array, but do not work. cdef void free_memory(void* arr, int ndim, int rows): cdef int i if ndim == 1: free(<double*>arr) elif ndim == 2: for i in range(rows): free(<double*>arr[i]) free(<double**>arr) else: raise ValueError("Unsupported number of dimensions (ndim must be 1 or 2).") example: cdef double** array = <double**> malloc(n * sizeof(double*)) free_memory(array , 2, n) I have tried to modify it many times, including using gpt assistance, but it still doesn't work. | This: free(<double*>arr[i]) is attempting to index a void*, and then cast the result to the type you're expecting. But you can't index a void*. The compiler doesn't have any information about what type arr points to. You're casting too late. What you need to do is cast arr itself to the right type, and then index that: free((<double**>arr)[i]) Aside from that, if you want to free the row pointers, you need to actually have row pointers. Your example: cdef double** array = <double**> malloc(n * sizeof(double*)) free_memory(array , 2, n) doesn't actually allocate any memory for the rows. | 1 | 3 |
79,259,261 | 2024-12-6 | https://stackoverflow.com/questions/79259261/generic-of-class-ignored-by-generic-pydantic-constructor | I have a generic class with a function that returns a Pydantic model where one of the fields is the generic type. What follows is a code snippet that defines two classes the generic GetValue and GetInt. I don't understand why the behavior of GetValue[int] is not the same as GetInt. from typing import Any, Generic, TypeVar from pydantic import BaseModel X = TypeVar("X") class TestModel(BaseModel, Generic[X]): a: X T = TypeVar("T") class GetValue(Generic[T]): @classmethod def get_value(cls, x: Any) -> TestModel[T]: return TestModel[T](a=x) class GetInt: @classmethod def get_value(cls, x: Any) -> TestModel[int]: return TestModel[int](a=x) res = GetValue[int].get_value("1") assert type(res.a) == str res_2 = GetInt.get_value("1") assert type(res_2.a) == int assert res.a != res_2.a # I want these two to be equal, not sure why they are different. x: Any = "1" res_3 = TestModel[int](a=x) assert type(res_3.a) == int Inspecting things in my debugger I see that in this function call GetValue[int].get_value("1") T is just a TypeVar. This causes Pydantic to fall back on an Any annotation for a so it stays as a str. res_3 shows that when the type is supplied to the Pydantic class it will convert the string to an int. I'm not sure why this construction isn't working how I'd expect. Still new to python's typing system, was hoping someone could shed light on this. Python version: Python 3.12.5 pydantic version: 2.10.0 | Well, Python doesn't really support a simple way to determine the type of a TypeVar at runtime. pydantic models are a bit special because they override the standard behaviour of generic classes such that they can more or less easily determine a generic type. However, I just realized that in your code you're not trying around with a pydantic generic model (which I thought, when I wrote the comment yesterday). Instead, you want that for any "standard" generic class, right? You may want to inform yourself about all this stuff with __args__ and _GenericAlias in the typing module. But I actually created a package (python-generics) some time ago to solve this problem. With that, the solution would look like: from typing import Any, Generic, TypeVar from generics import get_filled_type from pydantic import BaseModel X = TypeVar("X") class TestModel(BaseModel, Generic[X]): a: X T = TypeVar("T") class GetValue(Generic[T]): @classmethod def get_value(cls, x: Any) -> TestModel[T]: return TestModel[get_filled_type(cls, GetValue, T)](a=x) One last note: It is generally possible for type checkers like mypy to narrow the type for generic classes through e.g. constructors elements. Meaning that the generic type doesn't necessarily have to be explicitly defined in the code to pass the type check. But if you want the solution to work you must always define the type explicitly. | 2 | 3 |
79,251,301 | 2024-12-4 | https://stackoverflow.com/questions/79251301/creating-a-decaying-halo-around-a-cluster-in-an-image-with-python | I'm trying to modify the surrounding values around clusters in an image so that the neighbouring pixels decrease to zero following an exponential decay. I need to find a way to control the decay rate with a parameter. I also want to keep the original non-zero values in the image. I have created an example where I used convolution (with an average kernel) to create the halo of decaying values. Unfortunately, it does not create what I really want, since the values decay too quickly and the non-zero values are not kept. I also tried using a Gaussian kernel and varying the sigma value so I can control the spread, but the problem is that it reduces the values too much near the clusters. Maybe using a custom filter could help? import matplotlib.pyplot as plt import numpy as np from scipy.ndimage import convolve matrix = np.zeros((100, 100)) # Create clusters of values matrix[45:55, 40:50] = 0.5 for i, value in enumerate(np.linspace(0.5, 0.2, 20)): matrix[70:90, 20 + i] = value # Decreasing along the columns # Create an average kernel (15x15 window) size = 3 kernel = np.ones((size, size)) / ( size * size ) # Normalize the kernel to get the average # Apply the average filter filtered_matrix = convolve(matrix, kernel, mode="nearest") # Combine the original values with the filtered (halo) values result = np.where(matrix > 0, matrix, filtered_matrix) # Plot the original, filtered, and combined matrices fig, ax = plt.subplots(1, 3, figsize=(18, 6)) # Original matrix im1 = ax[0].imshow(matrix, cmap="viridis") ax[0].set_title("Original Matrix") plt.colorbar(im1, ax=ax[0], label="Increment [m]", fraction=0.046, pad=0.04) # Filtered matrix (after applying the average kernel) im2 = ax[1].imshow(filtered_matrix, cmap="viridis") ax[1].set_title("Filtered Matrix with Average Kernel") plt.colorbar(im2, ax=ax[1], label="Increment [m]", fraction=0.046, pad=0.04) # Final result with original values preserved and haloes included im3 = ax[2].imshow(result, cmap="viridis") ax[2].set_title("Result with Haloes Included") plt.colorbar(im3, ax=ax[2], label="Increment [m]", fraction=0.046, pad=0.04) plt.tight_layout() plt.show() | Note that convolution with a Gaussian kernel represents an isotropic diffusion process that will decrease the intensity of objects isotropically, so pasting the original image on top will destroy the halo. Something like the following, using mathematical morphology, should work. Iteratively expand the contour of the objects (e.g., with dilation). Every time you expand the contour of the objects, ensure that the added pixel intensities are decayed exponentially (with a decay factor with which you can control the spread of the halo). The decay_factor, num_iter (the number of iterations) and the structuring element (footprint) type (e.g., disk or square) and size (se_sz) are the parameters you can vary to get the desired effect. from skimage.morphology import dilation, disk, square decay_factor = 0.05 se_sz = 3 num_iter = 7 out_matrix = matrix.copy() prev_filtered_matrix = matrix.copy() for i in range(num_iter): filtered_matrix = dilation(prev_filtered_matrix, square(se_sz)) decayed_expansion = (filtered_matrix - prev_filtered_matrix)*np.exp(-decay_factor*i) out_matrix = out_matrix + decayed_expansion # add exp-decayed expansion prev_filtered_matrix = filtered_matrix # Plot the original, filtered, and combined matrices fig, ax = plt.subplots(1, 2, figsize=(18, 6)) plt.gray() # Original matrix im1 = ax[0].imshow(matrix) ax[0].set_title("Original Matrix") plt.colorbar(im1, ax=ax[0], label="Increment [m]", fraction=0.046, pad=0.04) # Final result with original values preserved and haloes included im3 = ax[1].imshow(out_matrix) ax[1].set_title("Result with Haloes Included") plt.colorbar(im3, ax=ax[1], label="Increment [m]", fraction=0.046, pad=0.04) plt.tight_layout() plt.show() The output obtained with square SE is the one shown below: Note that here we are not changing the original image (so no change in original image intensity), only adding a fringe layer (with decayed intensity) at every iteration, eliminating the need to paste the original image to the output in the end. You can optionally keep the images normalized (e.g., scaling the maximum intensity value to 1) at each iteration. The next animation shows halo expansion: | 1 | 1
79,259,917 | 2024-12-7 | https://stackoverflow.com/questions/79259917/python-type-for-dict-like-object | I have some function that accepts a dict-like object. from typing import Dict def handle(x: Dict[str, str]): pass # Do some processing here... I still get a type warning if I try passing a Shelf to the function, even though the function supports it. How to I specify the type of a dict-like object? | Use collections.abc.Mapping (typing.Mapping is deprecated): from collections.abc import Mapping def handle(x: Mapping[str, str]): pass # Do some processing here... | 1 | 1 |
79,258,525 | 2024-12-6 | https://stackoverflow.com/questions/79258525/plotting-quiver-plots-in-matplotlib | I want to plot the slope field for: 0.5*sin(0.5*pi*x)*sqrt(y+7) import numpy as np import matplotlib.pyplot as plt # Specify the grid of dots x = np.arange(-3,3,0.3) y = np.arange(-2,4,0.3) X, Y = np.meshgrid(x,y) # Create unit vectors at each dot with correct slope dy = 0.5*(np.sin(x*np.pi*0.5))*np.sqrt(y+7) dx = np.ones(dy.shape) norm = np.sqrt(X**2 + Y**2) dyu = dy/norm dxu = dx/norm # Plot everything plt.quiver(X,Y,dxu,dyu,) plt.show() and I got the second image below. I am trying to replicate the first image. How can I make uniform slope lines like that? Why am I getting variable-length lines anyway? | So, the first thing to do is remove the arrowheads, which @jasonharper shows in the comments can be done by adding these options to your quiver call: headwidth=0, headlength=0, headaxislength=0. Next is to deal with the length. You're currently normalizing by X and Y when you should be normalizing by dx and dy. I would actually redefine dx and dy to use X and Y so they have the same shapes. And you will also want to make the pivot="mid" change that @jasonharper mentioned in their comment. With these changes, your code would look like this: import numpy as np import matplotlib.pyplot as plt plt.close("all") x = np.arange(-3, 3, 0.3) y = np.arange(-2, 4, 0.3) X, Y = np.meshgrid(x, y) dX = np.ones_like(Y) dY = 0.5*np.sin(X*np.pi*0.5)*np.sqrt(Y+7) norm = np.sqrt(dX**2 + dY**2) dXu = dX/norm dYu = dY/norm plt.quiver(X, Y, dXu, dYu, headwidth=0, headlength=0, headaxislength=0, pivot="mid") plt.show() Result: To control the length, you'll need to change two things. The first thing would be to multiply dX and dY by the desired length. The second thing would be to add the following options to quiver: angles="xy", scale_units="xy", scale=1 to make sure the vectors are the desired length. import numpy as np import matplotlib.pyplot as plt plt.close("all") x = np.arange(-3, 3, 0.3) y = np.arange(-2, 4, 0.3) X, Y = np.meshgrid(x, y) dX = np.ones_like(Y) dY = 0.5*np.sin(X*np.pi*0.5)*np.sqrt(Y+7) norm = np.sqrt(dX**2 + dY**2) dXu = dX/norm dYu = dY/norm L= 0.2 dXs = L*dXu dYs = L*dYu plt.quiver(X, Y, dXs, dYs, headwidth=0, headlength=0, headaxislength=0, angles="xy", scale_units="xy", scale=1, pivot="mid") plt.show() | 3 | 2 |
79,256,403 | 2024-12-5 | https://stackoverflow.com/questions/79256403/telebot-bot-callback-query-handler-isnt-working | Can somebody tell me what I am doing wrong? The bot sends me an inline keyboard, but after clicking the button, I do not receive any callbacks (even the logger doesn't send anything). Also, if I pass a URL to InlineKeyboardButton, it works properly. @bot.message_handler() def add_city(message): if user_status[message.chat.id] == Statuses.AddCity: logger.info("User adding the city") request = { "name": message.text } response = requests.post(f'{api_url}/api/users/{message.chat.id}/add-city', json=request, headers=headers) if response.status_code == 200: logger.info("User got city. Waiting for confirmation") data = response.json() user_status[message.chat.id] = Statuses.ConfirmCity markup = InlineKeyboardMarkup() markup.add(InlineKeyboardButton("Confirm", callback_data="Confirm")) markup.add(InlineKeyboardButton("Deny", callback_data="Deny")) bot.send_message(message.chat.id, f'{data.get("name")}') bot.send_message(message.chat.id, f'Confirm that this is exactly the city you wanted to add:', reply_markup=markup) else: logger.info("Bad Request") bot.send_message(message.chat.id, f'An error occurred while adding the city :(') @bot.callback_query_handler(func=lambda call: call.data in ("Confirm", "Deny")) def handle_city_confirmation(call): logger.info(call) if user_status[call.message.chat.id] == Statuses.ConfirmCity: if call.data.split(' ')[0] == "Confirm": bot.send_message(call.message.chat.id, 'success') else: bot.send_message(call.message.chat.id, "deny") else: bot.send_message(call.message.chat.id, "fail") | The solution was updating the token. Hope it will help someone. | 2 | 0
79,257,762 | 2024-12-6 | https://stackoverflow.com/questions/79257762/how-to-run-dependence-tasks-concurrently-with-non-dependence-ones-and-tasks-ins | I am learning asyncio and there is a problem of running a dependence task concurrently with non-dependence ones. So far I couldn't make it work. This is my code: import asyncio import random def first_execution(choice): if choice==1: print(f"First result {choice}") return choice else: print(f"First result {0}") return 0 async def check_first_execution(result_from_first): # Computationally complex function which take a lot of time to compute await asyncio.sleep(10) print(f"First check of value {result_from_first} complete") async def second_execution(result_from_first): # Moderately complex computation await asyncio.sleep(5) print(f"Second result {result_from_first+1}") return result_from_first+1 async def check_second_execution(result_from_second): # Computationally complex function which take a lot of time to compute await asyncio.sleep(10) print(f"Second check of value {result_from_second} complete") async def third_execution(result_from_first): # Moderately complex computation await asyncio.sleep(5) print(f"Third result {result_from_first+2}") return result_from_first+2 async def check_third_execution(result_from_third): # Computationally complex function which take a lot of time to compute await asyncio.sleep(10) print(f"Third check of value {result_from_third} complete") async def main(): choice = random.choice([0, 1]) result_from_first = first_execution(choice) # First part coroutine_1 = check_first_execution(result_from_first) if result_from_first==1: coroutine_2 = second_execution(result_from_first) results = await asyncio.gather(coroutine_1, coroutine_2) elif result_from_first==0: coroutine_3 = third_execution(result_from_first) results = await asyncio.gather(coroutine_1, coroutine_3) # Second part list_results_from_first = [result_from_first+i for i in range(5)] for first_result in list_results_from_first: second_result = await second_execution(first_result) check_second = await check_second_execution(second_result) asyncio.run(main()) In the first part (# First part), my code works but it only runs sequentially, meaning: first_execution -> check_first_execution -> second_execution (with choice==1) : >> First result 1 >> First check of 1 complete >> Second result 2 whereas, either with choice==1 (or choice==0), I want the check_first_execution and second_execution (or check_first_execution and third_execution) to happen in parallel, for example with choice==1 : >> First result 1 >> Second result 2 >> First check of 1 complete The function check_first_execution takes longer to execute so it should finish later than the second_execution function. In the second part (# Second part), it also happened sequentially: second_execution -> check_second_execution -> second_execution -> check_second_execution ..., for example (with choice==1) : >> Second result 3 >> Second check of 3 complete >> Second result 4 >> Second check of 4 complete >> Second result 5 >> Second check of 5 complete what I want is something like this: second_execution -> second_execution -> check_second_execution -> second_execution..., example: >> Second result 3 >> Second result 4 >> Second check 3 complete >> Second result 5 >> Second check 4 complete How do I achieve the two points above? Any help is appreciated. | First, when I run your code I get either First result 0 -> Third result 2 -> First check of value 1 complete or First result 1 -> Second result 2 -> First check of value 1 complete depending on the random choice generated, which is my understanding of what you want. So I cannot duplicate what you say is happening as far as the first part is concerned. For the second part it is clear that the statement check_second = await check_second_execution(second_result) cannot be executed until second_result is returned from the statement second_result = await second_execution(first_result). So these two statements must by necessity be executed sequentially. But my understanding is that you want to run 5 instances of these two statements concurrently, one instance for each element of list_results_from_first. If my understanding is correct, the simplest approach is to create a new coroutine run_dependent_coroutines that will run the two sequential coroutines one after another: ... async def main(): ... # Second part async def run_dependent_coroutines(first_result): second_result = await second_execution(first_result) check_second = await check_second_execution(second_result) list_results_from_first = [result_from_first+i for i in range(5)] coroutines = [ run_dependent_coroutines(first_result) for first_result in list_results_from_first ] await asyncio.gather(*coroutines) Prints: First result 0 Third result 2 First check of value 0 complete Second result 1 Second result 3 Second result 5 Second result 2 Second result 4 Second check of value 1 complete Second check of value 5 complete Second check of value 4 complete Second check of value 3 complete Second check of value 2 complete | 1 | 1
79,258,814 | 2024-12-6 | https://stackoverflow.com/questions/79258814/numpythonic-way-of-float-to-signed-integer-normalization | What is the faster numpythonic way of this normalization: def normalize_vector(x, b, axis): """ Normalize real vector x and outputs an integer vector y. Parameters: x (numpy.ndarray): Input real vector. (batch_size, seq_len) b (int): Unsigned integer defining the scaling factor. axis (int/None): if None, perform flatenned version, if axis=-1, perform relative normalization across batch. Returns: numpy.ndarray: Integer vector y. """ # Find the maximum absolute value in x m = np.max(np.abs(x)) # Process each element in x y = [] for xi in x: if xi > 0: y.append(int((2**b - 1) * xi / m)) elif xi < 0: y.append(int(2**b * xi / m)) else: y.append(0) return np.array(y) Can np.digitize make it faster? I have similar question, but it's not about NumPy. I'm also expecting it supports axis parameter for batch vector. | there is np.piecewise to transform data based on multiple conditions. def normalize_vector2(x, b, axis): # Step 1: Find the maximum absolute value in `x` m = np.max(np.abs(x), axis=axis) y = np.piecewise(x, [x > 0, x < 0], [ lambda xi: ((2**b - 1) * xi / m), lambda xi: (2**b * xi / m) ]) return y.astype(int) if your paths are close then you can just simplify it with multiplies. def normalize_vector3(x, b, axis): # Step 1: Find the maximum absolute value in x m = np.max(np.abs(x), axis=axis, keepdims=True) m[m==0] = 1 y = (2**b - 1 * (x > 0)) * x / m return y.astype(int) comparison: import numpy as np import time def normalize_vector2(x, b): # Step 1: Find the maximum absolute value in `x` m = np.max(np.abs(x)) y = np.piecewise(x, [x > 0, x < 0], [ lambda xi: ((2**b - 1) * xi / m), lambda xi: (2**b * xi / m) ]) return y.astype(int) def normalize_vector3(x, b, axis): # Step 1: Find the maximum absolute value in x m = np.max(np.abs(x), axis=axis, keepdims=True) m[m==0] = 1 y = (2**b - 1 * (x > 0)) * x / m return y.astype(int) def normalize_vector(x, b): # Find the maximum absolute value in x m = np.max(np.abs(x)) # Process each element in x y = [] for xi in x: if xi > 0: y.append(int((2**b - 1) * xi / m)) elif xi < 0: y.append(int(-2**b * xi / m)) else: y.append(0) return np.array(y) for elements in [10, 100, 1000, 10000]: iterations = int(100000 / elements) x = np.random.random(elements) * 256-128 t1 = time.time() for i in range(iterations): normalize_vector(x,7) t2 = time.time() for i in range(iterations): normalize_vector2(x, 7) t3 = time.time() for i in range(iterations): normalize_vector3(x, 7, 0) t4 = time.time() print(f"{(t2-t1)/iterations:.7f}, {elements} elements python") print(f"{(t3-t2)/iterations:.7f}, {elements} elements numpy") print(f"{(t4-t3)/iterations:.7f}, {elements} elements numpy maths") 0.0000109, 10 elements python 0.0000331, 10 elements numpy 0.0000158, 10 elements numpy maths 0.0000589, 100 elements python 0.0000399, 100 elements numpy 0.0000168, 100 elements numpy maths 0.0005812, 1000 elements python 0.0000515, 1000 elements numpy 0.0000255, 1000 elements numpy maths 0.0045110, 10000 elements python 0.0003255, 10000 elements numpy 0.0001083, 10000 elements numpy maths numpy is slower than pure python for small lists (mostly < 50 elements). | 1 | 3 |
79,257,679 | 2024-12-6 | https://stackoverflow.com/questions/79257679/dropping-duplicates-by-column-in-pyspark | I have a PySpark dataframe like this but with a lot more data: user_id event_date 123 '2024-01-01 14:45:12.00' 123 '2024-01-02 14:45:12.00' 456 '2024-01-01 14:45:12.00' 456 '2024-03-01 14:45:12.00' I drop duplicates of users, leaving the last event. I am using something like this: df = df.orderBy(['user_id', 'event_date'], ascending=False).dropDuplicates(['user_id']) When I was searching for the solution to some other problem, I found information that this approach may be non-deterministic. Am I doing it wrong? Should I use window functions instead? | When you call dropDuplicates() without passing any columns to it - it just drops the identical rows, no matter what, for all columns (so we may kind of call it "deterministic" - as all columns will have same values in different rows being dropped and only one is kept - doesn't matter which one). This non-deterministic behaviour of dropDuplicates(), when passing a subset of columns, is a known issue and using orderBy() doesn't enforce determinism either because it's not guaranteed that the ordering will be maintained after applying dropDuplicates() due to Spark's internal implementations (partitioning, logical and physical plans etc). It's recommended to use Windows functions instead: from pyspark.sql import functions as F from pyspark.sql.window import Window window_spec = Window.partitionBy("user_id").orderBy(F.col("event_date").desc()) df_with_rank = df.withColumn("row_number", F.row_number().over(window_spec)) df_last_event = df_with_rank.filter(F.col("row_number") == 1).drop("row_number") | 2 | 2 |
79,255,413 | 2024-12-5 | https://stackoverflow.com/questions/79255413/how-does-python-threadpoolexecutor-switch-between-concurrent-threads | How does Python ThreadPoolExecutor switch between concurrent threads? In the case of the async/awaint event-loop, the switching between different pieces of the code happens at the await calls. Does the ThreadPoolExecutor run each submitted task for a random amount of time> Or until something somewhere calls Thread.sleep()? Or the OS temporarily switches to do something else and the thread is forced to release the GIL thus allowing some other thread to grab the GIL next time? | Does the ThreadPoolExecutor run each submitted task for a random amount of time? No. The executor runs a thread pool with a work queue. When you add a new task, a thread will pick it up and run it to completion. Individual threads do not switch tasks before the previous task has completed. As long as the global interpreter lock (GIL) is still around, only one thread can execute Python bytecode at any given moment (within one process; not counting multiple sub-interpreters). Since Python-3.2 the GIL switches threads every 5 ms (configurable) as opposed to every 100 instructions like it was before. For comparison, the Linux kernel typically runs its scheduler with time slices between 0.75 and 6 ms. Or until something somewhere calls Thread.sleep()? Every system call that may block / sleep and every call into compiled code that may run for extended periods of time is typically written to release the GIL. This works in addition to the 5 ms time slice and is the intended work regime for thread pools. If the background threads spend most of their time with the GIL released waiting on IO or expensive library calls (such as numpy), they parallelize effectively. If they mostly execute Python bytecode, they just compete with the main event loop for the GIL. That can still work fine if the event loop is mostly idle but it isn't exactly best-practice and it doesn't scale well. Or the OS temporarily switches to do something else and the thread is forced to release the GIL thus allowing some other thread to grab the GIL next time? The OS doesn't know the GIL. If the OS interrupts the thread currently holding the GIL, it will continue to hold it. No other thread can acquire it during that time. Once the thread wakes back up it will likely have exceeded its 5 ms time slice and will release the GIL at the next opportunity. | 1 | 2 |
79,258,033 | 2024-12-6 | https://stackoverflow.com/questions/79258033/how-to-query-a-reverse-foreign-key-multiple-times-in-django-orm | Assuming the following models: class Store(models.Model): name = models.CharField(max_length=255) class Stock(models.Model): name = models.CharField(max_length=255) store = models.ForeignKey(Store, on_delete=models.PROTECT) class Consignment(models.Model): cost = models.FloatField() class Order(models.Model): quantity = models.IntegerField() stock = models.ForeignKey(Stock, on_delete=models.PROTECT) consignment = models.ForeignKey(Consignment, on_delete=models.PROTECT) How to create a queryset of all 'consignments' of all instances of a specific 'stores' queryset like so: target_stores = Store.objects.filter(name__startswith="One") consignments = target_stores.consignments.all() | You query in reverse by spanning over multiple relations with the __ separator: Consignment.objects.filter(order__stock__store__name__startswith='One') | 1 | 1 |
79,255,406 | 2024-12-5 | https://stackoverflow.com/questions/79255406/python-wagtail-crashes-6-3-1-streamfield-object-has-no-attribute-bind-to-model | While updating an old Wagtail website to the current version, I encounter this error, in admin/panels/group.py line 74: AttributeError: 'StreamField' object has no attribute 'bind_to_model' Since this is apparently in the Wagtail software as distributed, I am quite confused. The full traceback is as follows: Exception in thread django-main-thread: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/threading.py", line 1041, in _bootstrap_inner self.run() ~~~~~~~~^^ File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/threading.py", line 992, in run self._target(*self._args, **self._kwargs) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/django/utils/autoreload.py", line 64, in wrapper fn(*args, **kwargs) ~~^^^^^^^^^^^^^^^^^ File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/django/core/management/commands/runserver.py", line 134, in inner_run self.check(display_num_errors=True) ~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/django/core/management/base.py", line 486, in check all_issues = checks.run_checks( app_configs=app_configs, ...<2 lines>... databases=databases, ) File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/django/core/checks/registry.py", line 88, in run_checks new_errors = check(app_configs=app_configs, databases=databases) File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/wagtail/admin/checks.py", line 69, in get_form_class_check edit_handler = cls.get_edit_handler() File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/wagtail/utils/decorators.py", line 54, in __call__ return self.value ^^^^^^^^^^ File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/django/utils/functional.py", line 47, in __get__ res = instance.__dict__[self.name] = self.func(instance) ~~~~~~~~~^^^^^^^^^^ File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/wagtail/utils/decorators.py", line 50, in value return self.fn(self.cls) ~~~~~~~^^^^^^^^^^ File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/wagtail/admin/panels/page_utils.py", line 73, in _get_page_edit_handler return edit_handler.bind_to_model(cls) ~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^ File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/wagtail/admin/panels/base.py", line 146, in bind_to_model new.on_model_bound() ~~~~~~~~~~~~~~~~~~^^ File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/wagtail/admin/panels/group.py", line 74, in on_model_bound self.children = [child.bind_to_model(self.model) for child in self.children] ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^ File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/wagtail/admin/panels/base.py", line 146, in bind_to_model new.on_model_bound() ~~~~~~~~~~~~~~~~~~^^ File "/Users/mike/.virtualenvs/sdenv/lib/python3.13/site-packages/wagtail/admin/panels/group.py", line 74, in on_model_bound self.children = [child.bind_to_model(self.model) for child in self.children] ^^^^^^^^^^^^^^^^^^^ AttributeError: 'StreamField' object has no attribute 'bind_to_model' | The error indicates that you have a StreamField object inside a panels definition such as content_panels. This isn't valid - it should only contain panel objects such as FieldPanel. As part of the Wagtail 2.x to 6.x upgrade process, you would have to replace any instances of StreamFieldPanel with a plain FieldPanel - my best guess is that you made a mistake while doing this, and changed StreamFieldPanel to StreamField instead. | 1 | 1
79,256,797 | 2024-12-6 | https://stackoverflow.com/questions/79256797/plot-variables-from-xml-file-in-python | I have an XML file (please see attached). I made a python script to try to obtain information from the XML to later do some plots. The aim of this code is: a. To iterate over the XML file to find the </event> and then </origin> and then <quality> to reach <azimuthalGap>348.000</azimuthalGap> b. To save each azimuthalGap in the gaps=[] variable c. To plot a histogram of azimuthalGap. So far I tried the code: import xml.etree.ElementTree as ET def parse_azimuthal_gaps(xml_file): gaps = [] # Parse the XML file tree = ET.parse(xml_file) root = tree.getroot() # Iterate through each event for event in root.findall(".//event"): origin = event.find(".//origin") if origin is not None: # Find the azimuthal gap (if present) azimuthal_gap = origin.find(".//quality/azimuthalGap") if azimuthal_gap is not None: gap_value = azimuthal_gap.text if gap_value is not None: try: gaps.append(float(gap_value)) except ValueError: continue # Skip if not a valid float return gaps # Parse azimuthal gaps from the ISC XML file gaps = parse_azimuthal_gaps("SCB_earthquakes.xml") print("Azimuthal Gaps:", gaps[:10]) # Print the first 10 gaps for inspection But I always get an empty gaps variable. The input file has this format: </event> <event publicID="smi:ISC/evid=602401243"> <preferredOriginID>smi:ISC/origid=601808754</preferredOriginID> <description> <text>Peru-Bolivia border region</text> <type>Flinn-Engdahl region</type> </description> <type>earthquake</type> <typeCertainty>known</typeCertainty> <comment> <text>Event reviewed by the ISC</text> </comment> <creationInfo> <agencyID>ISC</agencyID> <author>ISC</author> </creationInfo> <origin publicID="smi:ISC/origid=601808754"> <time> <value>2011-01-23T09:13:37.30Z</value> <uncertainty>0.96</uncertainty> </time> <latitude> <value>-11.9240</value> </latitude> <longitude> <value>-68.9133</value> </longitude> <depth> <value>58000.0</value> </depth> <depthType>operator assigned</depthType> <quality> <usedPhaseCount>8</usedPhaseCount> <associatedStationCount>6</associatedStationCount> <standardError>0.4800</standardError> <azimuthalGap>353.000</azimuthalGap> <minimumDistance>4.260</minimumDistance> <maximumDistance>5.070</maximumDistance> </quality> <creationInfo> <author>SCB</author> <agencyID>SCB</agencyID> </creationInfo> <originUncertainty> <preferredDescription>uncertainty ellipse</preferredDescription> <minHorizontalUncertainty>20399.9996185303</minHorizontalUncertainty> <maxHorizontalUncertainty>96000</maxHorizontalUncertainty> <azimuthMaxHorizontalUncertainty>83.0</azimuthMaxHorizontalUncertainty> </originUncertainty> <arrival publicID="smi:ISC/pickid=637614753/hypid=601808754"> <pickID>smi:ISC/pickid=637614753</pickID> <phase>Pn</phase> <azimuth>170.165</azimuth> <distance>4.404</distance> <timeResidual>3.6</timeResidual> </arrival> <arrival publicID="smi:ISC/pickid=637614754/hypid=601808754"> <pickID>smi:ISC/pickid=637614754</pickID> <phase>Sn</phase> <azimuth>170.165</azimuth> <distance>4.404</distance> <timeResidual>4.6</timeResidual> </arrival> </origin> <pick publicID="smi:ISC/pickid=637614753"> <time> <value>2011-01-23T09:14:46.10Z</value> </time> <waveformID networkCode="IR" stationCode="LPAZ"></waveformID> <onset>impulsive</onset> <polarity>positive</polarity> <phaseHint>Pn</phaseHint> </pick> <pick publicID="smi:ISC/pickid=637614754"> <time> <value>2011-01-23T09:15:37.60Z</value> </time> <waveformID networkCode="IR" stationCode="LPAZ"></waveformID> <onset>emergent</onset> <phaseHint>Sn</phaseHint> </pick> <magnitude publicID="smi:ISC/magid=602398394"> <mag> <value>3.80</value> <uncertainty>0.40</uncertainty> </mag> <type>Ml</type> <originID>smi:ISC/origid=601808754</originID> <stationCount>2</stationCount> <creationInfo> <author>SCB</author> </creationInfo> </magnitude> <preferredMagnitudeID>smi:ISC/magid=602398394</preferredMagnitudeID> Do you have any idea how to improve the code? Tonino | If you have a list of events, you can create the gaps list you are interested in with root.findall(). import xml.etree.ElementTree as ET root = ET.parse("your_file.xml").getroot() gaps = root.findall(".//azimuthalGap") print([x.text for x in gaps]) Output: ['353.000', '453.000', …] For the 3. point of your question, you will find an answer here. | 1 | 2
79,256,824 | 2024-12-6 | https://stackoverflow.com/questions/79256824/why-are-enums-incompatible-across-python-packages | An enum is declared in an imported package and identically in the importer. Same value, but Python treats the imported enum value as different for some reason. Package 1 is a parser that I wrote which outputs a dictionary containing some values from this enum declared in the parser package: class NodeFace(Enum): TOP = 0 BOTTOM = 1 RIGHT = 2 LEFT = 3 So, it parses along and then maps some text to these values in a dictionary that will be accessible to the importer. In case it matters, the parser is built using Python 3.13. Now the importer, built using Python 3.12, also declares the same enum in its own local files. Identical to above. Additionally, I use this dictionary to find opposite sides, declared in the importer: OppositeFace = { NodeFace.TOP: NodeFace.BOTTOM, NodeFace.BOTTOM: NodeFace.TOP, NodeFace.LEFT: NodeFace.RIGHT, NodeFace.RIGHT: NodeFace.LEFT } In the importer I assign enum values from the parser output dict to variables tface and pface. And, I test to make sure they look right by printing to console and sure enough I get values like: <NodeFace.BOTTOM: 1>. In fact, I'm looking at them in the debugger, no problem so far. Now the fun starts... if OppositeFace[tface] == pface: This fails on a KeyError on the OppositeFace lookup. BUT, if I do the lookup in the debugger console with: OppositeFace[NodeFace.TOP], it works just fine. I did all kinds of tests and all I could figure is that the problem was in defining the enum twice, once in the parser and again in the importer and, despite all the values matching in the debugger and console, some internal value is different and causing the dict lookup to fail. My solution was to eliminate the Enum from my parser and just pass strings 'TOP', 'BOTTOM', etc. Then, on the importer side I do OppositeFace[NodeFace[tface]] where tface is now the string and it works fine. Can anyone tell me why exactly this happens? Just curious at this point. | This is just how Python normally works. Defining two classes the same way doesn't make them the same class, or make their instances equal. Even if they print the same, Python doesn't compare objects based on how they print. You have two separate enums with separate members. Unless you implement different behavior (which you didn't), enums inherit the default identity-based __eq__ and __hash__ implementations from object. That means an enum member is only equal to itself. The only relevant enum magic is that writing TOP = 0 in an enum definition makes TOP an instance of the enum class, rather than the int 0. Integers compare by numeric value rather than identity (and also there's a CPython implementation detail that would make the ints the same object anyway), so if your enum members really were just ints, they would compare equal. They're not, though. | 2 | 2 |
79,256,675 | 2024-12-6 | https://stackoverflow.com/questions/79256675/dividing-nested-calls-into-several-lines | I have a function like this in Python (the capital letters can represent constants, functions, anything, but not function calls): def f(x): a = foo1(A, B, foo3(E, foo2(A, B))) b = foo3(a, E) return b and I want to break it up into "atomic" operations like this: def f(x): tmp1 = foo2(A, B) tmp2 = foo3(E, tmp1) a = foo1(A, B, tmp2) b = foo3(a, E) return b In other words, exactly one function call and one assignment per line. Is there a way I can implement this source code transformation in Python? A program that takes in the string representation of such a function and returns the transformed version. I know I need to use the AST representation, but I don't really know how to proceed. | I think this is what you want exactly: s = """def f(x): a = foo1(A, B, foo3(E, foo2(A, B))) b = foo3(a, E) return b""" import re result = [] reg = re.compile(r'^(\s+).*?(\w+\([^\(\)]*\))') count = 1 for l in s.splitlines(): r = reg.search(l) if not r: result.append(l) continue while r: indent = r[1] var = f'tmp_{count}' result.append(f'{indent}{var} = {r[2]}') count += 1 l = l[:r.start(2)] + var + l[r.end(2):] r = reg.search(l) else: if l.strip(): result.append(l) print('\n'.join(result)) and it will show: def f(x): tmp_1 = foo2(A, B) tmp_2 = foo3(E, tmp_1) tmp_3 = foo1(A, B, tmp_2) a = tmp_3 tmp_4 = foo3(a, E) b = tmp_4 return b | 2 | 4 |
79,255,383 | 2024-12-5 | https://stackoverflow.com/questions/79255383/python-ctypes-and-np-array-ctypes-data-as-unexpected-behavior-when-indexing | Using Python ctypes and the numpy library, I pass data to a shared library and encounter a very weird behavior. C function: #include <stdio.h> typedef struct { double *a; double *b; } s_gate; void printdouble(s_gate*, int); void printdouble(s_gate *gate, int n) { for (int i =0; i < n; i++) { printf("gate->a[%d] = %f\n", i, gate->a[i]); } for (int i =0; i < n; i++) { printf("gate->b[%d] = %f\n", i, gate->b[i]); } } Python code: import ctypes import numpy as np class s_gate(ctypes.Structure): _fields_ = [('a', ctypes.POINTER(ctypes.c_double)), ('b', ctypes.POINTER(ctypes.c_double))] def __init__(self, mydict:dict): mask = [True, False, True, True, True, True, False, False, False, True] a = np.ascontiguousarray(mydict['a'], dtype=np.double) b = np.ascontiguousarray(mydict['b'], dtype=np.double) setattr(self, 'a', a[0,:].ctypes.data_as(ctypes.POINTER(ctypes.c_double))) setattr(self, 'b', b[0,:].ctypes.data_as(ctypes.POINTER(ctypes.c_double))) self.size = 10 if __name__ == "__main__": a = np.array([[1,2,3,4,5,6,7,8,9,10], [10,9,8,7,6,5,4,3,2,1]], dtype=np.double).T b = a + 100 data = {'a': a, 'b': b} mylib = ctypes.CDLL('./mwe.so') mylib.printdouble.argstype = [ctypes.POINTER(s_gate), ctypes.c_int] mylib.printdouble.restype = ctypes.c_void_p print(f'Sending \n{a} and \n{b}') gate = s_gate(data) mylib.printdouble(ctypes.byref(gate), gate.size) Running this code, I got the expected result, which is: [[ 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.] [10. 9. 8. 7. 6. 5. 4. 3. 2. 1.]] and [[101. 102. 103. 104. 105. 106. 107. 108. 109. 110.] [110. 109. 108. 107. 106. 105. 104. 103. 102. 101.]] gate->a[0] = 1.000000 gate->a[1] = 2.000000 gate->a[2] = 3.000000 gate->a[3] = 4.000000 gate->a[4] = 5.000000 gate->a[5] = 6.000000 ... gate->b[0] = 101.000000 gate->b[1] = 102.000000 gate->b[2] = 103.000000 gate->b[3] = 104.000000 gate->b[4] = 105.000000 gate->b[5] = 106.000000 ... Now, let's use the mask variable in the __init__ method of the s_gate class. So let's replace lines 11 and 12 with: setattr(self, 'a', a[0,mask].ctypes.data_as(ctypes.POINTER(ctypes.c_double))) setattr(self, 'b', b[0,mask].ctypes.data_as(ctypes.POINTER(ctypes.c_double))) self.size = sum(mask) The result is now: gate->a[0] = 0.000000 gate->a[1] = 0.000000 gate->a[2] = 0.000000 gate->a[3] = 0.000000 gate->a[4] = 0.000000 gate->a[5] = 0.000000 gate->b[0] = 101.000000 gate->b[1] = 103.000000 gate->b[2] = 104.000000 gate->b[3] = 105.000000 gate->b[4] = 106.000000 gate->b[5] = 110.000000 All gate.a data is zeroed! The expected result is of course [1, 3, 4, 5, 6, 10] for gate.a What I tried so far: Use np.require, np.ascontiguousarray to ensure a C contiguous array, perform integer indexing instead of logical indexing, perform 2D logical indexing (array[slices_xy] instead of array[slice_x, slice_y]). Use various copy, deepcopy methods and create an intermediate data with the first slicing... Nothing worked; as soon as the slice that mask describes in this code appears (instead of :), this behavior appears. What goes wrong? | The memory is being freed for the masked arrays creating undefined behavior. Likely the a pointer's memory is reused but the b pointer happens to still be the same. Both are freed though. Create the masked arrays and hold a reference to them in the s_gate object, then it works: import ctypes as ct import numpy as np PDOUBLE = ct.POINTER(ct.c_double) class s_gate(ct.Structure): _fields_ = [('a', PDOUBLE), ('b', PDOUBLE)] def __init__(self, a, b): mask = [True, False, True, True, True, True, False, False, False, True] self.tmpa = a[0,mask] # hold a reference to the arrays self.tmpb = b[0,mask] self.a = self.tmpa.ctypes.data_as(PDOUBLE) # get pointers to the arrays self.b = self.tmpb.ctypes.data_as(PDOUBLE) self.size = sum(mask) a = np.array([[1,2,3,4,5,6,7,8,9,10], [10,9,8,7,6,5,4,3,2,1]], dtype=np.double) b = a + 100 mylib = ct.CDLL('./test') mylib.printdouble.argstype = ct.POINTER(s_gate), ct.c_int mylib.printdouble.restype = ct.c_void_p print(f'Sending \n{a} and \n{b}') gate = s_gate(a, b) mylib.printdouble(ct.byref(gate), gate.size) Output: [[ 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.] [10. 9. 8. 7. 6. 5. 4. 3. 2. 1.]] and [[101. 102. 103. 104. 105. 106. 107. 108. 109. 110.] [110. 109. 108. 107. 106. 105. 104. 103. 102. 101.]] gate->a[0] = 1.000000 gate->a[1] = 3.000000 gate->a[2] = 4.000000 gate->a[3] = 5.000000 gate->a[4] = 6.000000 gate->a[5] = 10.000000 gate->b[0] = 101.000000 gate->b[1] = 103.000000 gate->b[2] = 104.000000 gate->b[3] = 105.000000 gate->b[4] = 106.000000 gate->b[5] = 110.000000 | 2 | 1
79,249,769 | 2024-12-4 | https://stackoverflow.com/questions/79249769/why-is-the-plot-of-the-refractive-index-wavelength-dependent-fresnels-equation | I want to reproduce the reflectance spectrum of a thin film whose complex refractive index is wavelength-dependent (the complex refractive index data, named N2 in code, can be obtained from here). Using Fresnel's equations for medium 1: air, medium 2: thin film, and medium 3: air: The reflection coefficient where \delta is defined as And finally the reflectance equation Since the refractive indices have a dependence on the wavelength, then the refraction angles have the same dependence, using the Snell equation: I wrote the following code: import numpy as np import pandas as pd import matplotlib.pyplot as plt ## Function definition def Ang_Refrac(na,nb,angle_a): ang_refrac = np.arcsin( (na/nb).real * np.sin(angle_a) ) return ang_refrac # Data Thin_film = pd.read_csv("Si_refractive index.txt", delimiter='\t') ## Variable definition # thin film thickness, [d]: nm d = 200 # Wave-length lamb = Thin_film[Thin_film.columns[0]] # Complex refractive index N1 = np.ones(len(lamb)) N2 = Thin_film[Thin_film.columns[1]] + Thin_film[Thin_film.columns[1]]*1j N3 = np.ones(len(lamb)) # Angle: ang_1_s = 0 #sexagesimal ang_1 = ang_1_s*np.pi/180 #radians ang_2 = [] #radians ang_3 = [] #radians for i in range(len(N2)): ang_refrac_12 = Ang_Refrac(N1[i],N2[i],ang_1) ang_refrac_23 = Ang_Refrac(N2[i],N3[i],ang_refrac_12) ang_2.append(ang_refrac_12) ang_3.append(ang_refrac_23) ## Reflectance R_s = np.zeros(len(lamb)) R_p = np.zeros(len(lamb)) for i in range(len(lamb)): # S-type polarization r12_s = (N1[i]*np.cos(ang_1) - N2[i]*np.cos(ang_2[i]))/(N1[i]*np.cos(ang_1) + N2[i]*np.cos(ang_2[i])) r23_s = (N2[i]*np.cos(ang_2[i]) - N3[i]*np.cos(ang_3[i]))/(N2[i]*np.cos(ang_2[i]) + N3[i]*np.cos(ang_3[i])) # P-type polarization r12_p = (N2[i]*np.cos(ang_1) - N1[i]*np.cos(ang_2[i])) / (N2[i]*np.cos(ang_1) + N1[i]*np.cos(ang_2[i])) r23_p = (N3[i]*np.cos(ang_2[i]) - N2[i]*np.cos(ang_3[i])) / (N3[i]*np.cos(ang_2[i]) + N2[i]*np.cos(ang_3[i])) # Phase shift delta = 2 * np.pi * (1/lamb[i]) * d * np.sqrt(abs(N2[i])**2 - (np.sin(ang_1)**2)) # Reflection coefficient r123_s = (r12_s + r23_s*np.exp(2j*delta)) / (1 - r12_s*r23_s*np.exp(2j*delta)) r123_p = (r12_p + r23_p*np.exp(2j*delta)) / (1 - r12_p*r23_p*np.exp(2j*delta)) # Reflectance R_s[i] = abs(r123_s)**2 R_p[i] = abs(r123_p)**2 # Reflectance normalization R_s_normalized = (R_s - min(R_s)) / (max(R_s) - min(R_s)) R_p_normalized = (R_p - min(R_p)) / (max(R_p) - min(R_p)) # Plotting plt.title("Refletância $\phi=" + str(ang_1_s) + "$°") plt.xlim(300, 1000) # Definindo os limites do eixo x plt.plot(lamb, R_s_normalized, 'go', label="Polarização S", markersize=4) plt.plot(lamb, R_p_normalized, 'b-', label="Polarização P") plt.legend() plt.show() Resulting graph: Expected graph: Obtained from here Why are the results not the same? | You can try this. The major changes are: (1) N2 is given by n + i.k, rather than n + i.n as you had; (2) there is just N2 ** 2 in the expression for delta, not abs(N2)**2, which would lose an important complex part; (3) I think the reflection coefficients should have a '+', not a '-' sign in the denominator (though I could be wrong). I found a little bit of background (and a few typos) in https://ir.lib.nycu.edu.tw/bitstream/11536/27329/1/000186822000019.pdf Code: import numpy as np import pandas as pd import matplotlib.pyplot as plt ## Function definition def Ang_Refrac(na,nb,angle_a): ang_refrac = np.arcsin( (na/nb) * np.sin(angle_a) ) return ang_refrac # Data Thin_film = pd.read_csv("Si_refractive index.txt", delimiter='\t') d = 200 # thin film thickness, [d]: nm lamb = Thin_film[Thin_film.columns[0]] # Wave-length # Complex refractive index n + ik num = len(lamb) N1 = np.ones(num) N2 = Thin_film[Thin_film.columns[1]] + Thin_film[Thin_film.columns[2]]*1j # <=========== NOTE ====== N3 = np.ones(num) # Angle: ang_1_s = 0 ang_1 = ang_1_s * np.pi / 180 ang_2 = [] #radians ang_3 = [] #radians for i in range(num): ang_refrac_12 = Ang_Refrac(N1[i],N2[i],ang_1) ang_refrac_23 = Ang_Refrac(N2[i],N3[i],ang_refrac_12) ang_2.append(ang_refrac_12) ang_3.append(ang_refrac_23) ## Reflectance R_s = np.zeros(num) R_p = np.zeros(num) for i in range(num): # S-type polarization r12_s = (N1[i]*np.cos(ang_1) - N2[i]*np.cos(ang_2[i]))/(N1[i]*np.cos(ang_1) + N2[i]*np.cos(ang_2[i])) r23_s = (N2[i]*np.cos(ang_2[i]) - N3[i]*np.cos(ang_3[i]))/(N2[i]*np.cos(ang_2[i]) + N3[i]*np.cos(ang_3[i])) # P-type polarization r12_p = (N2[i]*np.cos(ang_1) - N1[i]*np.cos(ang_2[i])) / (N2[i]*np.cos(ang_1) + N1[i]*np.cos(ang_2[i])) r23_p = (N3[i]*np.cos(ang_2[i]) - N2[i]*np.cos(ang_3[i])) / (N3[i]*np.cos(ang_2[i]) + N2[i]*np.cos(ang_3[i])) # Phase shift delta = 2 * np.pi * ( d / lamb[i] ) * np.sqrt( N2[i]**2 - np.sin(ang_1)**2) # <====== NOTE ==== # Reflection coefficient r123_s = (r12_s + r23_s*np.exp(2j*delta)) / (1 + r12_s*r23_s*np.exp(2j*delta)) # <====== NOTE ==== r123_p = (r12_p + r23_p*np.exp(2j*delta)) / (1 + r12_p*r23_p*np.exp(2j*delta)) # <====== NOTE ==== # Reflectance R_s[i] = abs( r123_s ) ** 2 R_p[i] = abs( r123_p ) ** 2 # Reflectance normalization R_s_normalized = ( R_s - min( R_s ) ) / ( max( R_s ) - min( R_s ) ) R_p_normalized = ( R_p - min( R_p ) ) / ( max( R_p ) - min( R_p ) ) # Plotting plt.title( r"Refletancia $\phi=" + str(ang_1_s) + "$ deg") plt.xlim(300, 1000) # Definindo os limites do eixo x plt.plot(lamb, R_s_normalized, 'go', label="Polarizacao S", markersize=4) plt.plot(lamb, R_p_normalized, 'b-', label="Polarizacao P") plt.legend() plt.show() Output (note: some systematic difference in the normalisation from your "expected") | 2 | 2
79,251,417 | 2024-12-4 | https://stackoverflow.com/questions/79251417/in-polars-how-can-you-update-several-columns-simultaneously | Suppose we have a Polars frame something like this lf = pl.LazyFrame([ pl.Series("a", ...), pl.Series("b", ...), pl.Series("c", ...), pl.Series("i", ...) ]) and a function something like this def update(a, b, c, i): s = a + b + c + i a /= s b /= s c /= s return a, b, c that depends on elements of columns a, b, c and also i. How can we update each row of a frame using the function? We could use with_columns to update the rows of each column independently but how can we do it with the dependency between columns? Edit In response to comments from @roman let's tighten up the question. Use this LazyFrame lf = pl.LazyFrame( [ pl.Series("a", [1, 2, 3, 4], dtype=pl.Int8), pl.Series("b", [5, 6, 7, 8], dtype=pl.Int8), pl.Series("c", [9, 0, 1, 2], dtype=pl.Int8), pl.Series("i", [3, 4, 5, 6], dtype=pl.Int8), pl.Series("o", [7, 8, 9, 0], dtype=pl.Int8), ] ) We want to update columns a, b & c in a way that depends on column i. Column o should be unaffected. We have a function that takes the values a, b, c & i and returns a, b, c & i, where the first three have been updated but i remains the same as the input. After updating, all columns should have the same dtype as before. The closest we can get is using an update function like this. def update(args): s = args["a"] + args["b"] + args["c"] + args["i"] args["a"] /= s args["b"] /= s args["c"] /= s return args.values() and applying it like this ( lf.select( pl.struct(pl.col("a", "b", "c", "i")) .map_elements(update, return_dtype=pl.List(pl.Float64)) .list.to_struct(fields=["a", "b", "c", "i"]) .alias("result"), ) .unnest("result") .collect() ) But this has some problems. We have lost column o. Column i has become Float64 It's pretty ugly. Is there a better way? | update lf.select( pl.exclude(["a","b","c"]), pl .struct(pl.all()).map_elements(update, return_dtype=pl.List(pl.Float64)) .list.to_struct(fields=["a","b","c"]) .alias("result") ).unnest("result").collect() shape: (4, 5) │ i ┆ o ┆ a ┆ b ┆ c │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ i8 ┆ i8 ┆ f64 ┆ f64 ┆ f64 │ │ 3 ┆ 7 ┆ 0.055556 ┆ 0.277778 ┆ 0.5 │ │ 4 ┆ 8 ┆ 0.166667 ┆ 0.5 ┆ 0.0 │ │ 5 ┆ 9 ┆ 0.1875 ┆ 0.4375 ┆ 0.0625 │ │ 6 ┆ 0 ┆ 0.2 ┆ 0.4 ┆ 0.1 │ Or you can update your function to return dictionary and skip the lists: def update(args): s = args["a"] + args["b"] + args["c"] + args["i"] args["a"] /= s args["b"] /= s args["c"] /= s return args lf.select( pl .struct(pl.all()).map_elements(update, return_dtype=pl.Struct) .alias("result") ).unnest("result").collect() shape: (4, 5) │ a ┆ b ┆ c ┆ i ┆ o │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ f64 ┆ f64 ┆ f64 ┆ i64 ┆ i64 │ │ 0.055556 ┆ 0.277778 ┆ 0.5 ┆ 3 ┆ 7 │ │ 0.166667 ┆ 0.5 ┆ 0.0 ┆ 4 ┆ 8 │ │ 0.1875 ┆ 0.4375 ┆ 0.0625 ┆ 5 ┆ 9 │ │ 0.2 ┆ 0.4 ┆ 0.1 ┆ 6 ┆ 0 │ original In general, I'd advise against using a python function and try to stay within pure polars expressions. So in your case it could look, for example, like this: lf = pl.LazyFrame([ pl.Series("a", [1,2,3]), pl.Series("b", [2,3,4]), pl.Series("c", [5,6,7]), pl.Series("i", [7,8,7]) ]) ( lf .with_columns(pl.exclude("i") / pl.sum_horizontal(pl.all())) ) shape: (3, 4) │ a ┆ b ┆ c ┆ i │ │ --- ┆ --- ┆ --- ┆ --- │ │ f64 ┆ f64 ┆ f64 ┆ i64 │ │ 0.066667 ┆ 0.133333 ┆ 0.333333 ┆ 7 │ │ 0.105263 ┆ 0.157895 ┆ 0.315789 ┆ 8 │ │ 0.142857 ┆ 0.190476 ┆ 0.333333 ┆ 7 │ if you really want to pass the function, you can do it: def update(row): a,b,c,i = [row[x] for x in list("abci")] s = a + b + c + i a /= s b /= s a /= s return a, b, c lf.select( pl.col.i, pl .struct(pl.all()).map_elements(update, return_dtype=pl.List(pl.Float64)) .list.to_struct(fields=["a","b","c"]) .alias("result") ).unnest("result") shape: (3, 4) │ i ┆ a ┆ b ┆ c │ │ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ f64 ┆ f64 ┆ f64 │ │ 7 ┆ 0.004444 ┆ 0.133333 ┆ 5.0 │ │ 8 ┆ 0.00554 ┆ 0.157895 ┆ 6.0 │ │ 7 ┆ 0.006803 ┆ 0.190476 ┆ 7.0 │ | 1 | 1
79,255,551 | 2024-12-5 | https://stackoverflow.com/questions/79255551/pandas-load-json-from-file-and-save-to-excel-valueerror | I'm running a simple Python test to read JSON and output to Excel. I'm getting the following error: "ValueError("All arrays must be of the same length")" JSON file example (test data) { "content": [ { "id": "test_id", "url": "http://www.google.com/", "path1": "dir1/dir1_subdir_1/hello.txt", "path2": "external://hello.txt", "revision": "613592", "ChangedDate": "2024-01-01T17:00:00+00:00", "ChangedRevision": "12345", "tags": [], "Revision": { "id": "2468", "revision": "987654", "revisionSource": "google", "url": "http://www.google.com" } } ], "requestId": "12345", "errors": [], "warnings": [], "information": [], "isValid": true } I don't need the last 5 lines in the output file Python code import pandas as pd df = pd.read_json("simple.json",).to_excel("json_output.xlsx", index=False) | Assuming you only need content from the JSON file, you can first use json.load and then use it to create a dataframe: import pandas as pd import json with open('simple.json') as json_data: data = json.load(json_data) df = pd.DataFrame(data['content']) # output to excel df.to_excel("json_output.xlsx", index=False) | 1 | 1 |
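Supplementary note (not from the thread): if the nested "Revision" object should become its own columns instead of a single dict-valued column, pandas' json_normalize can flatten it. A minimal sketch assuming the same simple.json as in the question:

import json
import pandas as pd

with open("simple.json") as fh:
    data = json.load(fh)

# Nested keys come out as dotted column names, e.g. Revision.id, Revision.revision.
df = pd.json_normalize(data["content"])
df.to_excel("json_output.xlsx", index=False)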
79,252,863 | 2024-12-4 | https://stackoverflow.com/questions/79252863/is-it-possible-to-speed-up-my-set-implementation | I am trying to make a fast and space efficient set implementation for 64 bit unsigned ints. I don't want to use set() as that converts everything into Python ints that use much more space than 8 bytes per int. Here is my effort: import numpy as np from numba import njit class HashSet: def __init__(self, capacity=1024): self.capacity = capacity self.size = 0 self.EMPTY = np.uint64(0xFFFFFFFFFFFFFFFF) # 2^64 - 1 self.DELETED = np.uint64(0xFFFFFFFFFFFFFFFE) # 2^64 - 2 self.table = np.full(capacity, self.EMPTY) # Initialize with a special value indicating empty def insert(self, key): if self.size >= self.capacity: raise RuntimeError("Hash table is full") if not self._insert(self.table, key, self.capacity, self.EMPTY, self.DELETED, self._hash): print(f"Key already exists: {key}") else: self.size += 1 def contains(self, key): return self._contains(self.table, key, self.capacity, self.EMPTY, self.DELETED, self._hash) def remove(self, key): if self._remove(self.table, key, self.capacity, self.EMPTY, self.DELETED, self._hash): self.size -= 1 def __len__(self): return self.size @staticmethod @njit def _hash(key, capacity): return key % capacity @staticmethod @njit def _insert(table, key, capacity, EMPTY, DELETED, hash_func): index = hash_func(key, capacity) while table[index] != EMPTY and table[index] != DELETED and table[index] != key: index = (index + 1) % capacity if table[index] == key: return False # Key already exists table[index] = key return True @staticmethod @njit def _contains(table, key, capacity, EMPTY, DELETED, hash_func): index = hash_func(key, capacity) while table[index] != EMPTY: if table[index] == key: return True index = (index + 1) % capacity return False @staticmethod @njit def _remove(table, key, capacity, EMPTY, DELETED, hash_func): index = hash_func(key, capacity) while table[index] != EMPTY: if table[index] == key: table[index] = DELETED return True index = (index + 1) % capacity return False I am using numba whereever I can to speed things up but it still isn't very fast. For example: hash_set = HashSet(capacity=204800) keys = np.random.randint(0, 2**64, size=100000, dtype=np.uint64) def insert_and_remove(hash_set, key): hash_set.insert(np.uint64(key)) hash_set.remove(key) %timeit insert_and_remove(hash_set, keys[0]) This gives: 16.9 ΞΌs Β± 407 ns per loop (mean Β± std. dev. of 7 runs, 100,000 loops each) The main cause is the code that I have failed to wrap with numba I think. How can this be sped up? EDIT @ken suggested defining _hash as a global function outside the class. This speeds things up so now it is only ~50% slower than set(). | As requested, here is the class, but using jitclass. I'm not sure how much value all the type annotations add. I had been playing around to see if could get any improvements. Overall, your original code had peak performance of 20 ΞΌs. Whereas, the code below had a peak performance of 2.3 ΞΌs (an order of magnitude faster. However, using a python set was an order of magnitude faster again at 0.34 ΞΌs. These timings are only with the test harness you provided. No other performance testing was done. The main things I had to do to get your code working with jitclass are: Adding type annotations and a spec to the attributes of the class. Casting the literal 1 to a numba.uint64. Without this, numba would promote the type of the expression to numba.float64, even when the local had a type annotation of uint64. 
Trying to index an array with a float caused the whole compilation step to fail. Remove all njit decorators from methods on the class. jitclass automatically applies an njit to all methods. jitclass errors out if any of the classes methods have already been jit-ed. The only bits you really need are: # Extra arg to decorator tells numba what the dtype of the array is expected to be @jitclass([('table', numba.uint64[:])]) class HashSet: capacity: numba.uint64 size: numba.uint64 table: np.ndarray and index = (index + numba.uint64(1)) % self.capacity In addition I also made EMPTY and DELETED global constants. This gets you a small space saving if you have lots of small sets, but without any less in performance. With numba they truly are constants, and not just global variables. Code import numpy as np import numba from numba.experimental import jitclass EMPTY = numba.uint64(0xFFFFFFFFFFFFFFFF) # 2^64 - 1 DELETED = numba.uint64(0xFFFFFFFFFFFFFFFE) # 2^64 - 2 @jitclass([('table', numba.uint64[:])]) class HashSet: capacity: numba.uint64 size: numba.uint64 table: np.ndarray def __init__(self, capacity: int = 1024) -> None: self.capacity = capacity self.size = 0 self.table = np.full(self.capacity, EMPTY) # Initialize with a special value indicating empty def __len__(self) -> int: return self.size @staticmethod def _hash(key: numba.uint64, capacity: numba.uint64) -> numba.uint64: return key % capacity def insert(self, key: numba.uint64) -> bool: if self.size >= self.capacity: raise RuntimeError("Hash table is full") index = self._hash(key, self.capacity) while self.table[index] != EMPTY and self.table[index] != DELETED and self.table[index] != key: index = (index + numba.uint64(1)) % self.capacity if self.table[index] == key: return False # Key already exists self.table[index] = key self.size += 1 return True def contains(self, key: numba.uint64) -> bool: index = self._hash(key, self.capacity) while self.table[index] != EMPTY: if self.table[index] == key: return True index = (index + numba.uint64(1)) % self.capacity return False def remove(self, key: numba.uint64) -> bool: index = self._hash(key, self.capacity) while self.table[index] != EMPTY: if self.table[index] == key: self.table[index] = DELETED self.size -= 1 return True index = (index + numba.uint64(1)) % self.capacity return False | 1 | 1 |
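Supplementary usage sketch (not part of the answer): how the jitclass version above might be exercised with the question's benchmark data. The first call also pays the one-off JIT compilation cost, so timings should be taken after a warm-up call.

import numpy as np

hs = HashSet(204800)   # the jitclass HashSet defined in the answer above
keys = np.random.randint(0, 2**64, size=100000, dtype=np.uint64)

hs.insert(keys[0])
assert hs.contains(keys[0])
hs.remove(keys[0])
assert not hs.contains(keys[0])
assert hs.size == 0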
79,252,957 | 2024-12-4 | https://stackoverflow.com/questions/79252957/bars-almost-disappear-when-i-layer-and-facet-charts | import numpy as np import pandas as pd import altair as alt np.random.seed(0) model_keys = ['M1', 'M2'] data_keys = ['D1', 'D2'] scene_keys = ['S1', 'S2'] layer_keys = ['L1', 'L2'] ys = [] models = [] dataset = [] layers = [] scenes = [] for sc in scene_keys: for m in model_keys: for d in data_keys: for l in layer_keys: for s in range(10): data_y = list(np.random.rand(10) / 10) ys += data_y scenes += [sc] * len(data_y) models += [m] * len(data_y) dataset += [d] * len(data_y) layers += [l] * len(data_y) df = pd.DataFrame({ 'Y': ys, 'Model': models, 'Dataset': dataset, 'Layer': layers, 'Scenes': scenes}, ) bars = alt.Chart(df, width=100, height=90).mark_bar(tooltip=True).encode( x=alt.X("Scenes:N"), y=alt.Y("mean(Y):Q"), color=alt.Color("Scenes:N"), opacity=alt.Opacity( "Dataset:N", scale=alt.Scale( domain=['D1', 'D2'], ), legend=alt.Legend( labelExpr="datum.label == 'D1' ? 'D1 - transparent' : 'D2 - full'" ), ), xOffset=alt.XOffset("Dataset:N"), column=alt.Column('Layer:N'), row=alt.Row("Model:N") ) bars.save('test.html') This script first generates some random data where each configuration is determined by four values: model, dataset, layer, scene. Then, it stores it into a dataframe and make a chart plot. This works fine and gives me this. But I need to add error bars and text, and here is where things get wrong. First, I need to remove row and column from the chart or I can't layer it. Then I make an error bar chart and text chart, layer them, and facet them according to row and column again. bars = alt.Chart(df, width=100, height=90).mark_bar(tooltip=True).encode( x=alt.X("Scenes:N"), y=alt.Y("mean(Y):Q"), color=alt.Color("Scenes:N"), opacity=alt.Opacity( "Dataset:N", scale=alt.Scale( domain=['D1', 'D2'], ), legend=alt.Legend( labelExpr="datum.label == 'D1' ? 'D1 - transparent' : 'D2 - full'" ), ), xOffset=alt.XOffset("Dataset:N"), # column=alt.Column('Layer:N'), # row=alt.Row("Model:N") ) error_bars = alt.Chart(df).mark_errorbar(extent='ci').encode( x=alt.X('Scenes:N'), y=alt.Y('Y:Q'), ) text = alt.Chart(df).mark_text(align='center', baseline='line-bottom', color='black', dy=-5, # y-shift ).encode( x=alt.X('Scenes:N'), y=alt.Y('mean(Y):Q'), text=alt.Text('mean(Y):Q', format='.1f'), ) combined = alt.layer(bars, error_bars, text).facet( column=alt.Column('Layer:N'), row=alt.Row("Model:N"), spacing={"row": 0, "column": 15}, ).resolve_scale(x='independent') combined.save('test.html') The aggregation works, but the bars suddenly become extremely thin. How can I fix it? | This is not caused by the error bars but comes from using facet instead of row and column encodings. It's possible that this is a bug, but there is an easy enough work around: If you set the width as a step instead of a fixed size it works fine. Sharing the X scale also works, but I'm sure there are situations where that doesn't make sense. bars = alt.Chart(df, width=alt.Step(20), height=90).mark_bar(tooltip=True).encode( x=alt.X("Scenes:N"), y=alt.Y("mean(Y):Q"), color=alt.Color("Scenes:N"), opacity=alt.Opacity( "Dataset:N", scale=alt.Scale( domain=['D1', 'D2'], ), legend=alt.Legend( labelExpr="datum.label == 'D1' ? 
'D1 - transparent' : 'D2 - full'" ), ), xOffset=alt.XOffset("Dataset:N"), # column=alt.Column('Layer:N'), # row=alt.Row("Model:N") ) error_bars = alt.Chart(df).mark_errorbar(extent='ci').encode( x=alt.X('Scenes:N'), y=alt.Y('Y:Q'), ) text = alt.Chart(df).mark_text(align='center', baseline='line-bottom', color='black', dy=-5, # y-shift ).encode( x=alt.X('Scenes:N'), y=alt.Y('mean(Y):Q'), text=alt.Text('mean(Y):Q', format='.1f'), ) combined = alt.layer(bars, error_bars, text).facet( column=alt.Column('Layer:N'), row=alt.Row("Model:N"), spacing={"row": 0, "column": 15}, ).resolve_scale(x='independent') combined | 2 | 2 |
79,251,752 | 2024-12-4 | https://stackoverflow.com/questions/79251752/altair-layered-chart-bars-and-lines-cannot-change-color-of-the-lines | I'm using Altair to create a layered chart with bars and lines. In this simple example, I'm trying to make the strokedash with different colors without success. I have tried using scale scheme for the strokedash, but it didn't change the colors. Can anyone help? import altair as alt import pandas as pd data = { 'Date': ['2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02'], 'Value': [50, 200, 300, 150, 250, 350, 200, 200, 200, 200, 200, 200, 300, 300, 300, 300], 'DESCR': ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'C', 'C', 'D', 'D', 'D', 'D'], 'Company': ['X', 'X', 'Y', 'Y', 'X', 'X', 'Y', 'Y', 'X', 'X', 'Y', 'Y', 'X', 'X', 'Y', 'Y'], } source = pd.DataFrame(data) source_aux = source[(source['DESCR']=='C') | (source['DESCR']=='D')] source = source[(source['DESCR']=='A') | (source['DESCR']=='B')] bars = alt.Chart().mark_bar().encode( x='Date:N', y='Value:Q', color=alt.Color('DESCR:N', legend=alt.Legend(title=None)) ) limits = alt.Chart(source_aux).mark_rule().encode( y='Value', strokeDash=alt.StrokeDash('DESCR', legend=alt.Legend(title=None)) ) chart = alt.layer( bars, limits, data=source ).facet( column='Company:N' ) chart | strokeDash controls the dash of the line but has no impact on the color. To change the color of the line you can either use the stroke or color encoding. import altair as alt import pandas as pd data = { 'Date': ['2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02'], 'Value': [50, 200, 300, 150, 250, 350, 200, 200, 200, 200, 200, 200, 300, 300, 300, 300], 'DESCR': ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'C', 'C', 'D', 'D', 'D', 'D'], 'Company': ['X', 'X', 'Y', 'Y', 'X', 'X', 'Y', 'Y', 'X', 'X', 'Y', 'Y', 'X', 'X', 'Y', 'Y'], } source = pd.DataFrame(data) source_aux = source[(source['DESCR']=='C') | (source['DESCR']=='D')] source = source[(source['DESCR']=='A') | (source['DESCR']=='B')] bars = alt.Chart().mark_bar().encode( x='Date:N', y='Value:Q', color=alt.Color('DESCR:N', legend=alt.Legend(title=None)) ) limits = alt.Chart(source_aux).mark_rule().encode( y='Value', strokeDash=alt.StrokeDash('DESCR', legend=alt.Legend(title=None)), stroke=alt.Stroke('DESCR', legend=alt.Legend(title=None)) ) chart = alt.layer( bars, limits, data=source ).facet( column='Company:N' ) chart | 2 | 1 |
79,252,144 | 2024-12-4 | https://stackoverflow.com/questions/79252144/scipy-curve-fit-how-would-i-go-about-improving-this-fit | I've been working on a standard potential which I am trying to fit with a given model: ax2 - bx3 + lx4 The x and y values for the fit are generated from the code as well, the x values are generated by numpy.linspace. I have bounded the a,b,c parameters such that they are always positive. I needed the fit to mimic the data in such a way that at minimum, the height of the local maxima and the position of the global minima are accurate. Instead this is what I'm getting (blue is the actual data and the dashed line is the fitted one with the given model): Here is the relevant part of my code: import numpy as np import matplotlib.pyplot as plt from scipy.optimize import curve_fit from scipy.interpolate import * v =246. x = np.linspace(0.1, 270.1, 27001) Xdata = x/v def func(x,a,b,l): return (a*(x**2)) - (b*(x**3)) + ((l)*(x**4)) temp = np.linspace(80.,80.,1) b = np.zeros_like(temp) a = np.zeros_like(temp) l = np.zeros_like(temp) Vfit = pot_giver(temp[0]) #External func. Ydata = (Vfit - Vfit[0])/(pow(v,4)) popt, pcov = curve_fit(func, Xdata, Ydata,bounds=((0.,0.,0.), (np.inf,np.inf,np.inf))) b[0],a[0],l[0] = popt plt.plot(Xdata,Ydata) plt.plot(Xdata, func(Xdata, *popt), 'g--') plt.show() I am doing this as an array because I am varying the temp over a wide range, this singular element 'temp' was created for troubleshooting. The Xdata and Ydata are here on the first and second columns respectively: Data | You can also use scipy.optimize.leastsq Here, you can also impose constraints (through a penalty added to the residuals when they are not satisfied). You asked for the following constraints: the maximum of the function is correct; the position of the global minimum is correct. It's a little sensitive to how you impose the constraints, but this just about gets those. import math import numpy as np import matplotlib.pyplot as plt from scipy.optimize import leastsq Xdata, Ydata = np.loadtxt( "Pot_data_online.txt", unpack=True ) ymax = Ydata.max() ymin = Ydata.min() xymin = Xdata[0] for x, y in zip( Xdata, Ydata ): if y == ymin: xymin = x # Fitting function def f( x, a, b, l ): return a * x ** 2 - b * x ** 3 + l * x ** 4 # Residuals, including a penalty to get the constraints def residuals( p, x, y ): a, b, l = p x1 = ( 3 * b - math.sqrt( max( 9 * b ** 2 - 32 * a * l, 0 ) ) ) / ( 8 * l ) x2 = ( 3 * b + math.sqrt( max( 9 * b ** 2 - 32 * a * l, 0 ) ) ) / ( 8 * l ) error = y - f( x, a, b, l ) penalty = 1e6 * ( abs( f(x1,a,b,l) / ymax - 1 ) + abs( x2 / xymin - 1 ) ) return abs( error ) + penalty popt, pcov = leastsq( func=residuals, x0=(0.01,0.3,0.01), args=(Xdata,Ydata) ) a, b, l = popt x1 = ( 3 * b - math.sqrt( max( 9 * b ** 2 - 32 * a * l, 0 ) ) ) / ( 8 * l ) x2 = ( 3 * b + math.sqrt( max( 9 * b ** 2 - 32 * a * l, 0 ) ) ) / ( 8 * l ) print( "a, b, l = ", a, b, l ) print( "Constraint on maximum: ymax , fmax = ", ymax , f( x1, a, b, l ) ) print( "Constraint on position of minimum: xymin, xfmin = ", xymin, x2 ) plt.plot( Xdata,Ydata, 'b-' ) plt.plot( Xdata, f( Xdata, *popt ), 'g--') plt.show() Output: a, b, l = 0.026786218092281242 0.0754627163616853 0.04481075395203686 Constraint on maximum: ymax , fmax = 0.0007404005868381408 0.0007404009309691355 Constraint on position of minimum: xymin, xfmin = 0.947308 0.947621778120905 | 3 | 3 |
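Supplementary check (not from the answer): the x1/x2 expressions inside residuals() are just the stationary points of a*x**2 - b*x**3 + l*x**4, i.e. the roots of its derivative. A quick sympy sketch makes that explicit; the variable names here are illustrative only.

import sympy as sp

a, b, l, x = sp.symbols("a b l x", positive=True)
V = a*x**2 - b*x**3 + l*x**4
# Expect x = 0 plus x = (3*b -/+ sqrt(9*b**2 - 32*a*l)) / (8*l),
# which are the x1 and x2 used in the penalty terms above.
print(sp.solve(sp.diff(V, x), x))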
79,251,914 | 2024-12-4 | https://stackoverflow.com/questions/79251914/convert-list-of-integers-into-a-specific-string-format | I'm dealing with a list of integers that represent the pages in which a keyword was found. I would like to build block of code that converts this list into a string with a specific format that follow some simple rules. Single integers are converted into string. Consecutive integers are considered as intervals (left bound, hyphen, right bound) and then converted into string. Each conversion is comma separated. This is an example of input and desired output: input = [4, 5, 6, 7, 8, 9, 10, 22, 23, 26, 62, 63, 113, 137, 138, 139] desired_output = "4-10, 22, 23, 26, 62, 63, 113, 137-139" I wrote this code: res = [4, 5, 6, 7, 8, 9, 10, 22, 23, 26, 62, 63, 113, 137, 138, 139] if len(res)>0: resStr = str(res[0]) isConsecutive = False for index in range(1, len(res)): diff = res[index] - res[index-1] if diff == 1: isConsecutive = True if index == len(res)-1: resStr = resStr + "-" + str(res[index]) continue else: if isConsecutive: isConsecutive = False resStr = resStr + "-" + str(res[index-1]) + ", " + str(res[index]) else: resStr = resStr + ", " + str(res[index]) print(res) print(resStr) This code gives me as a result: 4-10, 22-23, 26, 62-63, 113, 137-139 It doesn't recognize that only two consecutive numbers have not be considered as an interval: "22, 23" and not "22-23", as well as "62, 63" and not "62-63". How can be solved this issue? Is there a simpler or more efficient way to perform the conversion? | Track first and last of a range of consecutive numbers, collect the string of the range when a number is non-consecutive. I used a sub-function to avoid repeated code: res = [4, 5, 6, 7, 8, 9, 10, 22, 23, 26, 62, 63, 113, 137, 138, 139] def span(numbers): if len(numbers) == 0: # empty case return '' result = [] def _process(first, last): if first != last: # range, but handle consecutive case result.append(f"{first}{'-' if last - first > 1 else ', '}{last}") else: result.append(str(first)) # first==last, single number first = last = numbers[0] # initialize range tracking to first number for item in numbers[1:]: # iterate remaining numbers if item <= last: # input not increasing raise ValueError('not consecutive numbers') if item == last + 1: # next number is consecutive last = item else: _process(first, last) # next number not consecutive first = last = item # reset for next range _process(first, last) # no more numbers, finish last range return ', '.join(result) print(res) print(span(res)) # Test cases print(span([])) print(span([1])) print(span([1,2])) print(span([1,2,3])) print(span([1,2,4])) print(span([1,3])) print(span([1,3,4])) print(span([1,3,4,5])) print(span([1,2,3,5])) print(span([1,2,4,5,6])) print(span([1,2,3,5,6])) print(span([1,2,3,5,6,7])) Output: [4, 5, 6, 7, 8, 9, 10, 22, 23, 26, 62, 63, 113, 137, 138, 139] 4-10, 22, 23, 26, 62, 63, 113, 137-139 1 1, 2 1-3 1, 2, 4 1, 3 1, 3, 4 1, 3-5 1-3, 5 1, 2, 4-6 1-3, 5, 6 1-3, 5-7 | 1 | 1 |
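Supplementary alternative (not from the thread): the same rules can be written with itertools.groupby, grouping consecutive runs by value minus index. Runs of length one or two are listed individually, longer runs collapse to a range. This assumes the page list is sorted and strictly increasing, as in the question's examples.

from itertools import groupby

def span(numbers):
    parts = []
    for _, run in groupby(enumerate(numbers), key=lambda t: t[1] - t[0]):
        values = [v for _, v in run]
        if len(values) > 2:
            parts.append(f"{values[0]}-{values[-1]}")
        else:
            parts.extend(str(v) for v in values)
    return ", ".join(parts)

print(span([4, 5, 6, 7, 8, 9, 10, 22, 23, 26, 62, 63, 113, 137, 138, 139]))
# 4-10, 22, 23, 26, 62, 63, 113, 137-139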
79,252,245 | 2024-12-4 | https://stackoverflow.com/questions/79252245/pandas-filter-and-sum-but-apply-to-all-rows | I have a dataframe that has user ID, code, and value. user code value 0001 P 10 0001 P 20 0001 N 10 0002 N 40 0002 N 30 0003 P 10 I am trying to add a new column that groups by User ID, filters for code = P and sums the value. However I want this value to be applied to every row. So for the example above, the output I'm looking for would be: user code value Sum_of_P 0001 P 10 30 0001 P 20 30 0001 N 10 30 0002 N 40 0 0002 N 30 0 0003 P 10 10 I tried doing df['Sum_of_P'] = df.loc[df['code'] == 'P', 'value'].groupby(df['user']).transform('sum'), but this applies only to the rows with code = P. Is there a way to have to have this apply to all rows? | Use a mask and where rather than loc: df['Sum_of_P'] = (df['value'].where(df['code'].eq('P'), 0) .groupby(df['user']).transform('sum') ) Variant with NaNs as masked values: df['Sum_of_P'] = (df['value'].where(df['code'].eq('P')) .groupby(df['user']).transform('sum') .convert_dtypes() ) If you want to use loc you should aggregate rather than transform, then map the values from the group: s = df.loc[df['code'] == 'P'].groupby('user')['value'].sum() df['Sum_of_P'] = df['user'].map(s).fillna(0).convert_dtypes() Output: user code value Sum_of_P 0 1 P 10 30 1 1 P 20 30 2 1 N 10 30 3 2 N 40 0 4 2 N 30 0 5 3 P 10 10 | 2 | 2 |
79,251,266 | 2024-12-4 | https://stackoverflow.com/questions/79251266/space-and-time-complexity-of-flattening-a-nested-list-of-arbitrary-depth | Given a python list that contains nested lists of arbitrary levels of nesting, the goal is to return a completely flattened list i.e for the sample input [1, [2], [[[3]]], 1], the output should be [1, 2, 3, 1]. My solution: def flatten(lst): stack = [[lst, 0]] result = [] while stack: current_lst, start_index = stack[-1] for i in range(start_index, len(current_lst)): if isinstance(current_lst[i], list): # Update the start_index of current list # to the next element after the nested list stack[-1][1] = i + 1 # Add nested list to stack # and initialize its start_index to 0 stack.append([current_lst[i], 0]) # Pause current_lst traversal break # non list item # add item to result result.append(current_lst[i]) else: # no nested list # remove current list from stack stack.pop() return result I would like to know the time and space complexity (auxiliary space) of my solution if correct. What I think Time Complexity: I believe the solution has a time complexity of O(m + n) where m is the total number of nested lists at all levels and n is the total number of atomic elements at all levels (non list elements) Space Complexity: I believe the space complexity is O(d), where d is the depth of the most nested list. This is because the stack tracks the current state of traversal, and its size is proportional to the nesting depth Is the solution correct? Is the time and space analysis correct? | Yes, the solution is correct. Yes, the time and space analysis are correct ... if you don't count the space used by result as auxiliary space, which is reasonable. Although note that result overallocates/reallocates, which you could regard as taking O(n) auxiliary space. You could optimize that by doing two passes over the whole input, one to count the atomic elements, then create result = [None] * n, and then another pass to fill it. Btw it's better to use iterators instead of your own list+index pairs (Attempt This Online!): def flatten(lst): stack = [iter(lst)] result = [] while stack: for item in stack[-1]: if isinstance(item, list): stack.append(iter(item)) break result.append(item) else: stack.pop() return result Or with the mentioned space optimization (Attempt This Online!): def flatten(lst): def atoms(): stack = [iter(lst)] while stack: for item in stack[-1]: if isinstance(item, list): stack.append(iter(item)) break yield item else: stack.pop() n = sum(1 for _ in atoms()) result = [None] * n i = 0 for result[i] in atoms(): i += 1 return result | 3 | 1 |
79,251,724 | 2024-12-4 | https://stackoverflow.com/questions/79251724/multiply-within-a-group-in-polars | I would like to take the product within a group in polars The following works but I suspect there is a more elegant/efficient way to perform this operation. Thank you import polars as pl import numpy as np D = pl.DataFrame({'g':['a','a','b','b'],'v':[1,2,3,4],'v2':[2,3,4,5]}) D.group_by('g').agg(pl.all().map_elements( lambda group: np.prod(group.to_numpy()),return_dtype=pl.Float64)) | You could use Expr.product: D.group_by('g').agg(pl.all().product()) Output: βββββββ¬ββββββ¬ββββββ β g β v β v2 β β --- β --- β --- β β str β i64 β i64 β βββββββͺββββββͺββββββ‘ β b β 12 β 20 β β a β 2 β 6 β βββββββ΄ββββββ΄ββββββ If you want Floats: D.group_by('g').agg(pl.all().product().cast(pl.Float64)) βββββββ¬βββββββ¬βββββββ β g β v β v2 β β --- β --- β --- β β str β f64 β f64 β βββββββͺβββββββͺβββββββ‘ β b β 12.0 β 20.0 β β a β 2.0 β 6.0 β βββββββ΄βββββββ΄βββββββ | 5 | 3 |
79,250,549 | 2024-12-4 | https://stackoverflow.com/questions/79250549/python-error-rv-generic-interval-missing-1-required-positional-argument-con | I have been trying to run the below code to calculate upper and lower confidence intervals using t distribution, but it keeps throwing the error in the subject. The piece of code is as below: def trans_threshold(Day): Tran_Cnt=Tran_Cnt_DF[['Sample',Day]].dropna() Tran_Cnt=Tran_Cnt.astype({'Sample':'str'}) Tran_Cnt.dtypes #Finding outliers in Materiality via IQR X_Tran = Tran_Cnt.drop('Sample', axis=1) Tran_arr1 = X_Tran.values #Finding the first quartile Tran_q1= np.quantile(Tran_arr1, 0.25) # finding the 3rd quartile Tran_q3 = np.quantile(Tran_arr1, 0.75) # finding the iqr region Tran_iqr = Tran_q3-Tran_q1 # finding upper and lower outliers Tran_upper_bound = Tran_q3+(1.5*Tran_iqr) Tran_lower_bound = Tran_q1-(1.5*Tran_iqr) # removing outliers Tran_arr2 = Tran_arr1[(Tran_arr1 >= Tran_lower_bound) & (Tran_arr1 <= Tran_upper_bound)] #Using t distribution for Materiality Limits Tran_Threshold_mat=st.t.interval(alpha=0.99999999999, df=len(Tran_arr2-1), loc=np.mean(Tran_arr2), scale=st.sem(Tran_arr2)) return Tran_Threshold_mat trn_lim_FullFeed_Mon = trans_threshold(Day) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[106], line 19 17 Tran_arr2 = Tran_arr1[(Tran_arr1 >= Tran_lower_bound) & (Tran_arr1 <= Tran_upper_bound)] 18 #Using t distribution for Materiality Limits ---> 19 Tran_Threshold_mat=st.t.interval(alpha=0.99999999999, df=len(Tran_arr2-1), 20 loc=np.mean(Tran_arr2), 21 scale=st.sem(Tran_arr2)) TypeError: rv_generic.interval() missing 1 required positional argument: 'confidence' The issue seems to be with piece of code below. However, I have provided all parameters required to calculate confidence intervals, including degrees of freedom, but it still gives this error. Where am I going wrong and what needs to be done? Tran_Threshold_mat=st.t.interval(alpha=0.99999999999, df=len(Tran_arr2-1), loc=np.mean(Tran_arr2), scale=st.sem(Tran_arr2)) Also, the Tran_arr2 list looks like below: array([12617., 12000., 1123., 537., 8605., 4365., 11292., 12231., 7640., 9583., 9257., 13864., 14682., 11744., 10501., 8694., 5327., 10066., 13022., 11092., 7444., 11658., 14920., 12849., 14681., 5719., 11029., 3814., 14703., 5593., 9772., 8851., 9551., 15975., 6532., 13827., 8547.]) Hence, there is no issue, up until the last like of the code block which estimates confidence intervals using t distribution. I have used the below packages: import pandas as pd import numpy as np import scipy.stats as st import matplotlib.pyplot as plt import matplotlib.ticker as tkr import matplotlib.scale as mscale from matplotlib.ticker import FixedLocator, NullFormatter pd.options.display.float_format = '{:.0f}'.format pd.options.mode.chained_assignment = None | Note that the signature of scipy.stats.t is interval(confidence, df, loc=0, scale=1). There is no alpha keyword, pass it as positional or relabel it to confidence. | 1 | 2 |
79,250,415 | 2024-12-4 | https://stackoverflow.com/questions/79250415/calculate-relative-difference-of-elements-in-a-1d-numpy-array | Say I have a 1D numpy-array given by np.array([1,2,3]). Is there a built-in command for calculating the relative difference between each element and display it in a 2D-array? The result would then be given by np.array([[0,-50,-100*2/3], [100,0,-100*1/3], [200,50,0]]) Alternatively I would have to use a for-loop. | Use numpy broadcasting: a = np.array([1,2,3]) out = (a[:, None]-a)/a*100 Output: array([[ 0. , -50. , -66.66666667], [100. , 0. , -33.33333333], [200. , 50. , 0. ]]) | 3 | 2 |
79,242,142 | 2024-12-1 | https://stackoverflow.com/questions/79242142/how-can-i-create-a-stylized-tree-chart | I have been making a tree of fraternity-adjacent bigs and littles and was looking for a way to automate it for changes as more people join. Everyone's names and years, big, and littles are in an Excel spreadsheet. What could I use to emulate the design I did here? Specifically, the stem style and ability to space nodes further away depending on their year. This is the design I want to automate: I tried using anytree and graphviz but couldn't find a way to emulate the stems or an easy solution for spacing based on years. Here's sample data: Name Year Instrument Extra Extra Extra Extra Big Little 1 Little 2 Little 3 T1P1 1990 Trumpet T1P2 T1P2 1991 Trumpet T1P1 T2P1 1997 Trumpet T2P2 T2P2 2001 Trumpet T2P1 T2P3 T2P4 T2P5 T2P3 2003 Trumpet T2P2 T2P4 2004 Trumpet T2P2 T2P5 2006 Trumpet T2P2 T3P1 2000 Trumpet T3P2 T3P2 2004 Trumpet T3P1 T3P3 T3P4 T3P3 2005 Trumpet T3P2 T3P5 T3P6 T3P5 2006 Trumpet T3P3 T3P6 2007 Trumpet T3P3 T3P4 2006 Trumpet T3P2 T3P7 T3P7 2010 Flute T3P4 Here's my basic approach using anytree and the results: import openpyxl from PIL import Image, ImageDraw, ImageFont import re from anytree import Node, RenderTree from collections import Counter import os # Create a directory to store the individual name card images cards_dir = "C:/Users/Chris Fitz/Documents/Fun/Trumpet History/trumpettree/cards" os.makedirs(cards_dir, exist_ok=True) # Load the .xlsx file file_path = 'C:/Users/Chris Fitz/Documents/Fun/Trumpet History/trumpettree/sampletrees.xlsx' workbook = openpyxl.load_workbook(file_path) sheet = workbook.active # Read the data starting from row 2 to the last row with data (max_row) in columns A to N people_data = [] for row in sheet.iter_rows(min_row=2, max_row=sheet.max_row, min_col=1, max_col=14): person_info = [cell.value for cell in row] people_data.append(person_info) # Tree Data Making # Dictionary to hold people by their names people_dict = {} # List to hold the root nodes of multiple trees root_nodes = [] # Sets to track parents and children parents_set = set() children_set = set() # Dictionary to track parent-child relationships for conflict detection parent_child_relationships = {} # List to store the individual trees as objects family_trees = [] # List to hold each separate family tree # Iterate over the people data and create nodes for each person for i, person_info in enumerate(people_data, start=2): # i starts at 2 for row index name = person_info[0] # Assuming name is in the first column (column A) column_b_data = person_info[1] # Column B data (second column) parent_name = person_info[7] # Column H for parent (8th column) children_names = person_info[8:14] # Columns I to N for children (9th to 14th columns) # Check if this name is already in the people_dict if name not in people_dict: # Create the person node (this is the current node) without column B info at this point person_node = Node(name) # Create the person node with just the name # If parent_name is empty, this is a root node for a new tree if parent_name: if parent_name in people_dict: parent_node = people_dict[parent_name] else: parent_node = Node(parent_name) people_dict[parent_name] = parent_node # Add the parent to the dictionary person_node.parent = parent_node # Set the parent for the current person # Add to the parents set parents_set.add(parent_name) else: # If no parent is referenced, this could be the root or top-level node root_nodes.append(person_node) # Add to root_nodes list 
# Store the person node in the dictionary (this ensures we don't create duplicates) people_dict[name] = person_node # Create child nodes for the person and add them to the children set for child_name in children_names: if child_name: # Create child node without modifying its name with additional info from the parent if child_name not in people_dict: child_node = Node(child_name, parent=person_node) people_dict[child_name] = child_node # Store the child in the dictionary children_set.add(child_name) # Add the parent-child relationship for conflict checking if child_name not in parent_child_relationships: parent_child_relationships[child_name] = set() parent_child_relationships[child_name].add(name) # Print out the family trees for each root node (disconnected trees) for root_node in root_nodes: family_tree = [] for pre, fill, node in RenderTree(root_node): family_tree.append(f"{pre}{node.name}") family_trees.append(family_tree) # Save each tree as a separate list of names print(f"\nFamily Tree starting from {root_node.name}:") for pre, fill, node in RenderTree(root_node): print(f"{pre}{node.name}") # Tree Chart Making # Extract the years from the first four characters in Column B years = [] for person_info in people_data: column_b_data = person_info[1] if column_b_data: year_str = str(column_b_data)[:4] if year_str.isdigit(): years.append(int(year_str)) # Calculate the range of years (from the minimum year to the maximum year) min_year = min(years) if years else 0 max_year = max(years) if years else 0 year_range = max_year - min_year + 1 if years else 0 # Create a base image with a solid color (header space) base_width = 5000 base_height = 300 + (100 * year_range) # Header (300px) + layers of 100px strips based on the year range base_color = "#B3A369" base_image = Image.new("RGB", (base_width, base_height), color=base_color) # Create a drawing context draw = ImageDraw.Draw(base_image) # Define the text and font for the header text = "The YJMB Trumpet Section Family Tree" font_path = "C:/Windows/Fonts/calibrib.ttf" font_size = 240 font = ImageFont.truetype(font_path, font_size) # Get the width and height of the header text using textbbox bbox = draw.textbbox((0, 0), text, font=font) text_width = bbox[2] - bbox[0] text_height = bbox[3] - bbox[1] # Calculate the position to center the header text horizontally x = (base_width - text_width) // 2 y = (300 - text_height) // 2 # Vertically center the text in the first 300px # Add the header text to the image draw.text((x, y), text, font=font, fill=(255, 255, 255)) # List of colors for the alternating strips colors = ["#FFFFFF", "#003057", "#FFFFFF", "#B3A369"] strip_height = 100 # Font for the year text year_font_size = 60 year_font = ImageFont.truetype(font_path, year_font_size) # Add the alternating colored strips beneath the header y_offset = 300 # Start just below the header text for i in range(year_range): strip_color = colors[i % len(colors)] # Draw the strip draw.rectangle([0, y_offset, base_width, y_offset + strip_height], fill=strip_color) # Calculate the text to display (the year for this strip) year_text = str(min_year + i) # Get the width and height of the year text using textbbox bbox = draw.textbbox((0, 0), year_text, font=year_font) year_text_width = bbox[2] - bbox[0] year_text_height = bbox[3] - bbox[1] # Calculate the position to center the year text vertically on the strip year_text_x = 25 # Offset 25px from the left edge year_text_y = y_offset + (strip_height - year_text_height) // 2 - 5 # Vertically center the text # Determine 
the text color based on the strip color year_text_color = "#003057" if strip_color == "#FFFFFF" else "white" # Add the year text to the strip draw.text((year_text_x, year_text_y), year_text, font=year_font, fill=year_text_color) # Move the offset for the next strip y_offset += strip_height # Font for the names on the name cards (reduced to size 22) name_font_size = 22 name_font = ImageFont.truetype("C:/Windows/Fonts/arial.ttf", name_font_size) # Initialize counters for each year (based on the range of years) year_counters = {year: 0 for year in range(min_year, max_year + 1)} # Create a list of names from the spreadsheet, split on newlines where appropriate for i, person_info in enumerate(people_data): name = person_info[0] # Assuming name is in the first column (column A) original_name = name column_b_data = person_info[1] # Column B data (second column) column_c_data = person_info[2] # Column C data (third column) # Choose the correct name card template based on Column C if column_c_data and "Trumpet" not in column_c_data: # Use the blue name card template if Column C doesn't include "Trumpet" name_card_template = Image.open("C:/Users/Chris Fitz/Documents/Fun/Trumpet History/trumpettree/blank_blue_name_card.png") else: # Use the default name card template if Column C includes "Trumpet" name_card_template = Image.open("C:/Users/Chris Fitz/Documents/Fun/Trumpet History/trumpettree/blank_name_card.png") if column_b_data: year_str = str(column_b_data)[:4] if year_str.isdigit(): year = int(year_str) year_index = year - min_year # Find the corresponding year index (from 0 to year_range-1) person_node.year = year person_node.name = name # Check if the name contains "VET" or "RAT" if "VET" in name or "RAT" in name: # Replace the first space with a newline name_lines = name.split(' ', 1) name = name_lines[0] + '\n' + name_lines[1] elif name == "Special Case": # Special case for "Special Case" name_lines = name.split('-') name = name_lines[0] + '\n' + name_lines[1] # Add newline after the hyphen else: # Split on the last space if it doesn't contain "VET" or "RAT" name_lines = name.split(' ') if len(name_lines) > 1: name = ' '.join(name_lines[:-1]) + '\n' + name_lines[-1] else: name_lines = [name] # Create a copy of the name card for each person name_card_copy = name_card_template.copy() card_draw = ImageDraw.Draw(name_card_copy) # Calculate the total height of all the lines combined (with some padding between lines) line_heights = [] total_text_height = 0 for line in name.split('\n'): line_bbox = card_draw.textbbox((0, 0), line, font=name_font) line_height = line_bbox[3] - line_bbox[1] line_heights.append(line_height) total_text_height += line_height # Shift the text up by 8 pixels and calculate the vertical starting position start_y = (name_card_template.height - total_text_height) // 2 - 6 # Shifted up by 8px # Draw each line centered horizontally current_y = start_y first_line_raised = False # To track if the first line has 'gjpqy' characters for i, line in enumerate(name.split('\n')): line_bbox = card_draw.textbbox((0, 0), line, font=name_font) line_width = line_bbox[2] - line_bbox[0] # Calculate the horizontal position to center this line line_x = (name_card_template.width - line_width) // 2 # Draw the line at the correct position card_draw.text((line_x, current_y), line, font=name_font, fill="black") if i == 0 and any(char in line for char in 'gjpqy'): # If the first line contains any of the letters, lower it by 7px (5px padding + 2px extra) current_y += line_heights[i] + 7 # 5px for space, 
2px additional for g, j, p, q, y first_line_raised = True elif i == 0: # If the first line doesn't contain those letters, add 7px space current_y += line_heights[i] + 7 else: # For subsequent lines, add the usual space if first_line_raised: # If first line was adjusted for 'gjpqy', raise second line by 2px current_y += line_heights[i] - 2 # Raise second line by 2px else: current_y += line_heights[i] + (5 if i == 0 else 0) # Position for the name card in the appropriate year strip card_x = 25 + year_text_x + year_text_width # 25px to the right of the year text card_y = 300 + (strip_height * year_index) + (strip_height - name_card_template.height) // 2 # Vertically center in the strip based on year # Assign card and y position attributes to each person person_node.card = name_card_copy person_node.y = card_y # print(person_node.y) # Use the counter for the corresponding year to determine x_offset x_offset = card_x + year_counters[year] * 170 # Add offset for each subsequent name card year_counters[year] += 1 # Increment the counter for this year # print(f"{year_counters[year]}") card_file_path = os.path.join(cards_dir, f"{original_name}.png") person_node.card.save(card_file_path) # Paste the name card onto the image at the calculated position base_image.paste(name_card_copy, (x_offset, person_node.y), name_card_copy) # Save the final image with name cards base_image.save("final_image_with_name_cards_updated.png") base_image.show() example of output mirroring background aestetics of original work Here's my approach with graphviz: import openpyxl from anytree import Node, RenderTree import os from graphviz import Digraph from PIL import Image # Create a directory to store the family tree images trees_dir = "C:/Users/Chris Fitz/Documents/Fun/Trumpet History/trumpettree/trees" cards_dir = "C:/Users/Chris Fitz/Documents/Fun/Trumpet History/trumpettree/cards" os.makedirs(trees_dir, exist_ok=True) # Load the .xlsx file file_path = 'C:/Users/Chris Fitz/Documents/Fun/Trumpet History/trumpettree/sampletrees.xlsx' workbook = openpyxl.load_workbook(file_path) sheet = workbook.active # Read the data starting from row 2 to the last row with data (max_row) in columns A to N people_data = [] for row in sheet.iter_rows(min_row=2, max_row=sheet.max_row, min_col=1, max_col=14): person_info = [cell.value for cell in row] people_data.append(person_info) # Tree Data Making people_dict = {} # Dictionary to hold people by their names root_nodes = [] # List to hold the root nodes of multiple trees parents_set = set() # Sets to track parents and children children_set = set() parent_child_relationships = {} # Dictionary to track parent-child relationships # Create nodes for each person for i, person_info in enumerate(people_data, start=2): # i starts at 2 for row index name = person_info[0] parent_name = person_info[7] children_names = person_info[8:14] # Columns I to N for children if name not in people_dict: person_node = Node(name) # If no parent is mentioned, add as a root node if parent_name: parent_node = people_dict.get(parent_name, Node(parent_name)) people_dict[parent_name] = parent_node # Add the parent to the dictionary person_node.parent = parent_node # Set the parent for the current person parents_set.add(parent_name) else: root_nodes.append(person_node) people_dict[name] = person_node # Store the person node # Create child nodes for the person for child_name in children_names: if child_name: if child_name not in people_dict: child_node = Node(child_name, parent=person_node) people_dict[child_name] = 
child_node children_set.add(child_name) if child_name not in parent_child_relationships: parent_child_relationships[child_name] = set() parent_child_relationships[child_name].add(name) # Function to generate the family tree graph using Graphviz def generate_tree_graph(root_node): graph = Digraph(format='png', engine='dot', strict=True) def add_node_edges(node): # Image file path image_path = os.path.join(cards_dir, f"{node.name}.png") # Assuming each person has a PNG image named after them if os.path.exists(image_path): # If the image exists, replace the node with the image, and remove any text label graph.node(node.name, image=image_path, shape="none", label='') else: # Fallback to text if no image is found (this can be further adjusted if needed) graph.node(node.name, label=node.name, shape='rect') # Add edges (parent-child relationships) if node.parent: graph.edge(node.parent.name, node.name) for child in node.children: add_node_edges(child) add_node_edges(root_node) return graph # Generate and save tree images tree_images = [] for root_node in root_nodes: tree_graph = generate_tree_graph(root_node) tree_image_path = os.path.join(trees_dir, f"{root_node.name}_family_tree") tree_graph.render(tree_image_path, format='png') tree_images.append(tree_image_path) # Resize all tree images to be the same size target_width = 800 # Target width for each tree image target_height = 600 # Target height for each tree image resized_images = [] for image_path in tree_images: image = Image.open(f"{image_path}.png") resized_images.append(image) # Create a new image large enough to hold all resized tree images side by side total_width = target_width * len(resized_images) max_height = max(image.height for image in resized_images) # Create a blank white image to paste the resized trees into combined_image = Image.new('RGB', (total_width, max_height), color='white') # Paste each resized tree image into the combined image x_offset = 0 for image in resized_images: combined_image.paste(image, (x_offset, 0)) x_offset += image.width # Save the final combined image as a single PNG file combined_image_path = 'C:/Users/Chris Fitz/Documents/Fun/Trumpet History/trumpettree/final_combined_family_tree.png' combined_image.save(combined_image_path) # Show the final combined image combined_image.show() example of output showing trees using proper node visuals | Solved it. 
from anytree import Node, RenderTree from collections import Counter import os import openpyxl from PIL import Image, ImageDraw, ImageFont import re # Create a directory to store the individual name card images cards_dir = "C:/Users/Chris Fitz/Documents/Fun/Trumpet History/trumpettree/cards" os.makedirs(cards_dir, exist_ok=True) # Load the .xlsx file file_path = 'C:/Users/Chris Fitz/Documents/Fun/Trumpet History/trumpettree/YJMB Trumpet Trees.xlsx' workbook = openpyxl.load_workbook(file_path) sheet = workbook.active # Read the data starting from row 2 to the last row with data (max_row) in columns A to N people_data = [] for row in sheet.iter_rows(min_row=2, max_row=sheet.max_row, min_col=1, max_col=14): person_info = [cell.value for cell in row] people_data.append(person_info) # Tree Data Making # Dictionary to hold people by their names people_dict = {} # List to hold the root nodes of multiple trees root_nodes = [] # Sets to track parents and children parents_set = set() children_set = set() # Dictionary to track parent-child relationships for conflict detection parent_child_relationships = {} # List to store the individual trees as objects family_trees = [] # List to hold each separate family tree # Variable to track the current tree number tree_number = 0 # Start with tree 1 # A counter for nodes without children end_id_counter = 1 years = [] x_max = 0 # Iterate over the people data and create nodes for each person for i, person_info in enumerate(people_data, start=2): # i starts at 2 for row index name = person_info[0] # Name is in the first column (column A) rat_year = str(person_info[1])[:4] # Year they joined the marching band (second column) if rat_year.isdigit(): years.append(int(rat_year)) instrument = person_info[2] parent_name = person_info[7] # Column H for VET (8th column) children_names = person_info[8:14] # Columns I to N for RATs (9th to 14th columns) # Determine if the node has children (if any of the children_names is non-empty) has_children = any(child_name for child_name in children_names if child_name) if i < len(people_data) and not person_info[7]: # Parent is empty in that row tree_number += 1 # Increment tree number for the next family tree # Check if this name is already in the people_dict if name in people_dict: # If the person already exists in the dictionary, retrieve their node person_node = people_dict[name] # Update the rat_year for the existing person node if necessary person_node.rat_year = rat_year person_node.instrument = instrument else: # If the person does not exist in the dictionary, create a new node person_node = Node(name, tree_number=tree_number, id=0, has_children=has_children, rat_year=rat_year, x_coord=None, y_coord=None, instrument=instrument, children_nodes=[]) # Added children_nodes # If parent_name is empty, this is a root node for a new tree if parent_name: if parent_name in people_dict: parent_node = people_dict[parent_name] else: parent_node = Node(parent_name, tree_number=tree_number, id=0, has_children=False, rat_year=None, x_coord=None, y_coord=None, instrument=None, children_nodes=[]) # Added children_nodes people_dict[parent_name] = parent_node # Add the parent to the dictionary person_node.parent = parent_node # Set the parent for the current person parents_set.add(parent_name) # After setting the parent, update the parent's has_children flag parent_node.has_children = True # Set has_children to True for the parent node parent_node.children_nodes.append(person_node) # Add to parent's children_nodes else: 
root_nodes.append(person_node) # Add to root_nodes list people_dict[name] = person_node # Add the new person node to the dictionary # Now create child nodes for the given children names for child_name in children_names: if child_name: if child_name not in people_dict: child_node = Node(child_name, parent=person_node, tree_number=tree_number, id=0, has_children=False, rat_year=rat_year, x_coord=None, y_coord=None, instrument=instrument, children_nodes=[]) # Added children_nodes people_dict[child_name] = child_node children_set.add(child_name) # If the child node has been created, we need to ensure the parent's has_children flag is True person_node.has_children = True person_node.children_nodes.append(people_dict[child_name]) # Add child to parent's children_nodes if child_name not in parent_child_relationships: parent_child_relationships[child_name] = set() parent_child_relationships[child_name].add(name) # After all nodes are created, we calculate x and y coordinates for each node new_id = 1 start_x_coord = 200 curr_tree = 1 min_year = min(years) if years else 0 max_year = max(years) if years else 0 year_range = max_year - min_year + 1 if years else 0 end_id_counter = 1 # Print out the family trees for each root node (disconnected trees) for root_node in root_nodes: family_tree = [] for pre, fill, node in RenderTree(root_node): family_tree.append(f"{pre}{node.name}") family_trees.append(family_tree) # print(f"\nFamily Tree starting from {root_node.name}:") for pre, fill, node in RenderTree(root_node): node.id = new_id new_id += 1 if not node.has_children: new_tree = node.tree_number if new_tree != curr_tree: start_x_coord += 200 curr_tree = node.tree_number node.end_id = end_id_counter end_id_counter += 1 node.x_coord = start_x_coord start_x_coord += 170 else: node.end_id = 0 if getattr(node, 'x_coord', 'N/A') and getattr(node, 'x_coord', 'N/A') > x_max: x_max = node.x_coord # Print details for each node # print(f"{pre}{node.name} (ID: {node.id}, Tree Number: {node.tree_number}, Has Children: {node.has_children}, End ID: {getattr(node, 'end_id', 'N/A')}, X Coord: {getattr(node, 'x_coord', 'N/A')}, Y Coord: {getattr(node, 'y_coord', 'N/A')}, Rat Year: {getattr(node, 'rat_year', 'N/A')}, Instrument: {getattr(node, 'children_nodes', 'N/A')})") # Now assign X coordinates to nodes where X is None (based on their children) while any(node.x_coord is None for node in people_dict.values()): for node in people_dict.values(): if node.has_children: children_with_coords = [child for child in node.children if child.x_coord is not None] if len(children_with_coords) == len(node.children): # Check if all children have x_coord average_x_coord = sum(child.x_coord for child in children_with_coords) / len(children_with_coords) node.x_coord = round(average_x_coord) # Set the parent's x_coord to the average # Print out the family trees for each root node (disconnected trees) for root_node in root_nodes: family_tree = [] for pre, fill, node in RenderTree(root_node): family_tree.append(f"{pre}{node.name}") family_trees.append(family_tree) # print(f"\nFamily Tree starting from {root_node.name}:") # for pre, fill, node in RenderTree(root_node): # print(f"{pre}{node.name} (ID: {node.id}, Tree Number: {node.tree_number}, Has Children: {node.has_children}, End ID: {getattr(node, 'end_id', 'N/A')}, Children Nodes: {getattr(node, 'children_nodes', 'N/A')})") # fix the rat_year attribute for even-numbered generations (done) # use that to determine y value (done) # determine x values from the bottom up recursively (done) # 
# Print duplicate ids, if any # if duplicates: # print("\nDuplicate IDs found:", duplicates) # else: # print("\nNo duplicates found.") #----------------------------------------------------------# # Tree Chart Making # Extract the years from the first four characters in Column B (done in lines 51-53 now) # Calculate the range of years (from the minimum year to the maximum year) (107-109) # Create a base image with a solid color (header space) base_width = x_max + 200 base_height = 300 + (100 * year_range) # Header (300px) + layers of 100px strips based on the year range base_color = "#B3A369" base_image = Image.new("RGB", (base_width, base_height), color=base_color) # Create a drawing context draw = ImageDraw.Draw(base_image) # Define the text and font for the header text = "The YJMB Trumpet Section Family Tree" font_path = "C:/Windows/Fonts/calibrib.ttf" font_size = 240 font = ImageFont.truetype(font_path, font_size) # Get the width and height of the header text using textbbox bbox = draw.textbbox((0, 0), text, font=font) text_width = bbox[2] - bbox[0] text_height = bbox[3] - bbox[1] # Calculate the position to center the header text horizontally x = (base_width - text_width) // 2 y = (300 - text_height) // 2 # Vertically center the text in the first 300px # Add the header text to the image draw.text((x, y), text, font=font, fill=(255, 255, 255)) # List of colors for the alternating strips colors = ["#FFFFFF", "#003057", "#FFFFFF", "#B3A369"] strip_height = 100 # Font for the year text year_font_size = 60 year_font = ImageFont.truetype(font_path, year_font_size) # Add the alternating colored strips beneath the header y_offset = 300 # Start just below the header text for i in range(year_range): strip_color = colors[i % len(colors)] # Draw the strip draw.rectangle([0, y_offset, base_width, y_offset + strip_height], fill=strip_color) # Calculate the text to display (the year for this strip) year_text = str(min_year + i) # Get the width and height of the year text using textbbox bbox = draw.textbbox((0, 0), year_text, font=year_font) year_text_width = bbox[2] - bbox[0] year_text_height = bbox[3] - bbox[1] # Calculate the position to center the year text vertically on the strip year_text_x = 25 # Offset 25px from the left edge year_text_y = y_offset + (strip_height - year_text_height) // 2 - 5 # Vertically center the text # Determine the text color based on the strip color year_text_color = "#003057" if strip_color == "#FFFFFF" else "white" # Add the year text to the strip draw.text((year_text_x, year_text_y), year_text, font=year_font, fill=year_text_color) # Move the offset for the next strip y_offset += strip_height # Font for the names on the name cards (reduced to size 22) name_font_size = 22 name_font = ImageFont.truetype("C:/Windows/Fonts/arial.ttf", name_font_size) # Initialize counters for each year (based on the range of years) year_counters = {year: 0 for year in range(min_year, max_year + 1)} # Create a list of names from the spreadsheet, split on newlines where appropriate for node in people_dict.values(): # Choose the correct name card template based on Column C if node.instrument and "Trumpet" not in node.instrument: name_card_template = Image.open("C:/Users/Chris Fitz/Documents/Fun/Trumpet History/trumpettree/blank_blue_name_card.png") else: name_card_template = Image.open("C:/Users/Chris Fitz/Documents/Fun/Trumpet History/trumpettree/blank_name_card.png") if node.rat_year: year_string = str(node.rat_year)[:4] if year_string.isdigit(): year = int(year_string) year_index = year - 
min_year # Find the corresponding year index (from 0 to year_range-1) name = node.name # Check if the name contains "VET" or "RAT" if "VET" in name or "RAT" in name: name_lines = name.split(' ', 1) name = name_lines[0] + '\n' + name_lines[1] elif name == "Xxx Xxxxxx-Xxxxxxx": name_lines = name.split('-') name = name_lines[0] + '\n' + name_lines[1] # Add newline after the hyphen else: name_lines = name.split(' ') if len(name_lines) > 1: name = ' '.join(name_lines[:-1]) + '\n' + name_lines[-1] else: name_lines = [name] # Create a copy of the name card for each person name_card_copy = name_card_template.copy() card_draw = ImageDraw.Draw(name_card_copy) # Calculate the total height of all the lines combined (with some padding between lines) line_heights = [] total_text_height = 0 for line in name.split('\n'): line_bbox = card_draw.textbbox((0, 0), line, font=name_font) line_height = line_bbox[3] - line_bbox[1] line_heights.append(line_height) total_text_height += line_height # Shift the text up by 8 pixels and calculate the vertical starting position start_y = (name_card_template.height - total_text_height) // 2 - 6 # Shifted up by 8px # Draw each line centered horizontally current_y = start_y first_line_raised = False for i, line in enumerate(name.split('\n')): line_bbox = card_draw.textbbox((0, 0), line, font=name_font) line_width = line_bbox[2] - line_bbox[0] line_x = (name_card_template.width - line_width) // 2 card_draw.text((line_x, current_y), line, font=name_font, fill="black") if i == 0 and any(char in line for char in 'gjpqy'): current_y += line_heights[i] + 7 first_line_raised = True elif i == 0: current_y += line_heights[i] + 7 else: if first_line_raised: current_y += line_heights[i] - 2 else: current_y += line_heights[i] + (5 if i == 0 else 0) # Position for the name card in the appropriate year strip card_y = 300 + (strip_height * year_index) + (strip_height - name_card_template.height) // 2 # Vertically center in the strip based on year node.y_coord = card_y # Assign card and y position attributes to each person person_node.card = name_card_copy person_node.y_coord = card_y # Use the counter for the corresponding year to determine x_offset year_counters[year] += 1 card_file_path = os.path.join(cards_dir, f"{node.name}.png") person_node.card.save(card_file_path) # Paste the name card onto the image at the calculated position base_image.paste(name_card_copy, (node.x_coord, node.y_coord), name_card_copy) # Create a list of names from the spreadsheet, split on newlines where appropriate for node in people_dict.values(): # Add black rectangle beneath the name card if the node has children if node.has_children: if len(node.children_nodes) == 1: child_node = getattr(node, 'children_nodes', 'N/A')[0] # Only one child, so get the first (and only) child # print(getattr(child_node, 'y_coord', 'N/A')) # Coordinates for the rectangle (centered beneath the name card) rect_x = node.x_coord + (name_card_template.width - 6) // 2 # Center the rectangle rect_y = node.y_coord + (name_card_template.height - 2) # Just below the name card rect_y_bottom = int(getattr(child_node, 'y_coord', 'N/A')) + 1 # Bottom of rectangle is aligned with the y_coord of the child # Draw the rectangle draw.rectangle([rect_x - 1, rect_y, rect_x + 6, rect_y_bottom], fill=(111, 111, 111)) else: # Calculate the leftmost and rightmost x-coordinates of the child nodes min_x = min(getattr(child, 'x_coord', 0) for child in node.children_nodes) max_x = max(getattr(child, 'x_coord', 0) for child in node.children_nodes) # 
Calculate the center of the rectangle (between the leftmost and rightmost child nodes) rect_x = (min_x + max_x) // 2 # Center x-coordinate between the children rect_y = (node.y_coord + min(getattr(child, 'y_coord', node.y_coord) for child in node.children_nodes)) // 2 rect_width = max_x - min_x draw.rectangle([rect_x - rect_width // 2 + 75, rect_y + 36, rect_x + rect_width // 2 + 75, rect_y + 6 + 37], fill=(111, 111, 111)) parent_y_bottom = rect_y + 36 # Coordinates for the rectangle (centered beneath the name card) rect_x = node.x_coord + (name_card_template.width - 6) // 2 # Center the rectangle rect_y = node.y_coord + (name_card_template.height - 2) # Just below the name card draw.rectangle([rect_x - 1, rect_y, rect_x + 6, parent_y_bottom], fill=(111, 111, 111)) # Now create a vertical rectangle for each child node for child in node.children_nodes: child_x = getattr(child, 'x_coord', 0) child_center_x = child_x + (name_card_template.width - 6) // 2 # x-center of the child child_y_bottom = parent_y_bottom # The bottom of the rectangle should align with the parent's bottom # Draw the rectangle from the center of the child node up to the parent's y-bottom draw.rectangle([child_center_x - 1, child_y_bottom, child_center_x + 6, getattr(child, 'y_coord', 0) + 1], fill=(111, 111, 111)) # 6px wide # Print out the family trees for each root node (disconnected trees) for root_node in root_nodes: family_tree = [] for pre, fill, node in RenderTree(root_node): family_tree.append(f"{pre}{node.name}") family_trees.append(family_tree) print(f"\nFamily Tree starting from {root_node.name}:") for pre, fill, node in RenderTree(root_node): # print(f"{pre}{node.name} (ID: {node.id}, Tree Number: {node.tree_number}, Has Children: {node.has_children}, End ID: {getattr(node, 'end_id', 'N/A')}, Children Nodes: {getattr(node, 'children_nodes', 'N/A')})") print(f"{pre}{node.name} (ID: {node.id}, Tree Number: {node.tree_number}, Has Children: {node.has_children}, End ID: {getattr(node, 'end_id', 'N/A')}, Y Coord: {getattr(node, 'y_coord', 'N/A')}, Children: {len(getattr(node, 'children_nodes', 'N/A'))})") # Save the final image with name cards and black rectangles base_image.save("YJMB_Trumpet_Section_Family_Trees_2024.png") base_image.show() | 2 | 0 |
79,243,117 | 2024-12-2 | https://stackoverflow.com/questions/79243117/moviepy-subtitlesclip-will-not-accept-any-provided-font-python | I am currently trying out moviepy to burn subtitles on a video. However I keep getting the same error message no matter what I do. This is the code I am using: from moviepy import TextClip from moviepy.video.tools.subtitles import SubtitlesClip ... generator = lambda txt: TextClip( text = self.txt, font = self.font_path, font_size = 100, color= self.text_color, stroke_color="black", stroke_width=5, ) subtitles = SubtitlesClip(self.subtitles_path, generator) where self.font_path is currently the string "/fonts/my_font.otf". Regardless of the font however, I keep getting the error message: ValueError: Invalid font <function _spyderpdb_code.<locals>.<lambda> at 0x7b89f4d60e00>, pillow failed to use it with error 'function' object has no attribute 'read' | I was facing the same issue and I managed to solve it. Corrected code from moviepy import TextClip from moviepy.video.tools.subtitles import SubtitlesClip ... generator = lambda txt: TextClip( self.font_path, text = self.txt, font_size = 100, color= self.text_color, stroke_color="black", stroke_width=5, ) subtitles = SubtitlesClip(self.subtitles_path, make_textclip=generator) Explanation The error occurs on this portion of code in the VideoClip.py from MoviePy: try: _ = ImageFont.truetype(font) except Exception as e: raise ValueError( "Invalid font {}, pillow failed to use it with error {}".format(font, e) ) The thing is that font is interpreted as a your generator function. As said AKX: pass your generator as make_textclip But this did not fully solve the issue because you will face another error: TypeError: multiple values for argument 'font' To solve it you need to look at the TextClip class: class TextClip(ImageClip): ... @convert_path_to_string("filename") def __init__( self, font, text=None, filename=None, font_size=None, size=(None, None), margin=(None, None), color="black", bg_color=None, stroke_color=None, stroke_width=0, method="label", text_align="left", horizontal_align="center", vertical_align="center", interline=4, transparent=True, duration=None, ): The first argument needed to initialise a TextClip object is the font as a positional argument and then the text as a named argument. Note You also don't need to pass the full path of the font based on PIL's truetype doctring: :param font: A filename or file-like object containing a TrueType font. If the file is not found in this filename, the loader may also search in other directories, such as: * The :file:`fonts/` directory on Windows, * :file:`/Library/Fonts/`, :file:`/System/Library/Fonts/` and :file:`~/Library/Fonts/` on macOS. * :file:`~/.local/share/fonts`, :file:`/usr/local/share/fonts`, and :file:`/usr/share/fonts` on Linux; or those specified by the ``XDG_DATA_HOME`` and ``XDG_DATA_DIRS`` environment variables for user-installed and system-wide fonts, respectively. It will automatically search for the font in the correct folder according to your OS. | 2 | 1 |
79,240,178 | 2024-11-30 | https://stackoverflow.com/questions/79240178/python-set-and-get-windows-11-volume | I have this script: from pycaw.pycaw import AudioUtilities, IAudioEndpointVolume from ctypes import cast, POINTER from comtypes import CLSCTX_ALL, CoInitialize, CoUninitialize CLSCTX_ALL = 7 import time def set_windows_volume(value_max_100): devices = AudioUtilities.GetSpeakers() interface = devices.Activate(IAudioEndpointVolume._iid_, CLSCTX_ALL, None) volume = cast(interface, POINTER(IAudioEndpointVolume)) scalarVolume = int(value_max_100) / 100 volume.SetMasterVolumeLevelScalar(scalarVolume, None) def get_windows_volume(): devices = AudioUtilities.GetSpeakers() interface = devices.Activate(IAudioEndpointVolume._iid_, CLSCTX_ALL, None) windows_volume = cast(interface, POINTER(IAudioEndpointVolume)) volume_percentage = int(round(windows_volume.GetMasterVolumeLevelScalar() * 100)) return volume_percentage for i in range(0,100): set_windows_volume(i) time.sleep(2) print(get_windows_volume()) but sometimes it raises errors: Exception ignored in: <function _compointer_base.__del__ at 0x000002B538E96B90> Traceback (most recent call last): File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 426, in __del__ self.Release() File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 559, in Release return self.__com_Release() # type: ignore ValueError: COM method call without VTable Exception ignored in: <function _compointer_base.__del__ at 0x000002B538E96B90> Traceback (most recent call last): File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 426, in __del__ self.Release() File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 559, in Release return self.__com_Release() # type: ignore ValueError: COM method call without VTable Exception ignored in: <function _compointer_base.__del__ at 0x000002B538E96B90> Traceback (most recent call last): File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 426, in __del__ self.Release() File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 559, in Release return self.__com_Release() # type: ignore ValueError: COM method call without VTable Exception ignored in: <function _compointer_base.__del__ at 0x000002B538E96B90> Traceback (most recent call last): File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 426, in __del__ self.Release() File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 559, in Release return self.__com_Release() # type: ignore ValueError: COM method call without VTable Exception ignored in: <function _compointer_base.__del__ at 0x000002B538E96B90> Traceback (most recent call last): File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 426, in __del__ self.Release() File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 559, in Release return self.__com_Release() # type: ignore ValueError: COM method call without VTable Exception ignored in: <function _compointer_base.__del__ at 0x000002B538E96B90> Traceback (most recent call last): File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 426, in __del__ self.Release() File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 559, in Release return self.__com_Release() # type: ignore ValueError: COM method call without VTable Exception ignored in: <function _compointer_base.__del__ at 0x000002B538E96B90> Traceback (most recent call last): File 
"C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 426, in __del__ self.Release() File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 559, in Release return self.__com_Release() # type: ignore ValueError: COM method call without VTable Exception ignored in: <function _compointer_base.__del__ at 0x000002B538E96B90> Traceback (most recent call last): File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 426, in __del__ self.Release() File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 559, in Release return self.__com_Release() # type: ignore ValueError: COM method call without VTable Exception ignored in: <function _compointer_base.__del__ at 0x000002B538E96B90> Traceback (most recent call last): File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 426, in __del__ self.Release() File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 559, in Release return self.__com_Release() # type: ignore ValueError: COM method call without VTable Exception ignored in: <function _compointer_base.__del__ at 0x000002B538E96B90> Traceback (most recent call last): File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 426, in __del__ self.Release() File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 559, in Release return self.__com_Release() # type: ignore ValueError: COM method call without VTable ======== Running on http://192.168.1.188:8080 ======== Exception ignored in: <function _compointer_base.__del__ at 0x000002B538E96B90> Traceback (most recent call last): File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 426, in __del__ self.Release() File "C:\Python\lib\site-packages\comtypes\_post_coinit\unknwn.py", line 559, in Release return self.__com_Release() # type: ignore ValueError: COM method call without VTable Basically i use this script in a multiprocessing.Process with CoInitialize and CoUninitialize and safe release, but the error is still there. Any help or alternatives? | I have downloaded some tools from https://www.nirsoft.net/ SoundVolumeView.exe svcl.exe nircmd.exe python script: import csv import subprocess import os def find_active_audio_device(): try: subprocess.run("SoundVolumeView.exe /scomma devices_list.csv", shell=True, check=True) if not os.path.exists("devices_list.csv"): raise FileNotFoundError("devices_list.csv not found. Ensure SoundVolumeView.exe is working correctly.") with open("devices_list.csv", "r", encoding="utf-8") as csvfile: reader = csv.DictReader(csvfile) for row in reader: if row["Device State"] == "Active" and row["Type"] == "Device" and row["Direction"] == "Render": return row["Command-Line Friendly ID"] except subprocess.CalledProcessError as e: return f"Error running SoundVolumeView: {e}" except FileNotFoundError: return "CSV file not generated." except KeyError: return "CSV format is invalid or missing required fields." return None def get_audio_volume(): device_id = find_active_audio_device() if not device_id: return "No active audio device found." 
try: command = f'svcl.exe /Stdout /GetPercent "{device_id}"' result = subprocess.run(command, capture_output=True, text=True, shell=True, check=True) return result.stdout.strip() except subprocess.CalledProcessError as e: return f"Error getting volume: {e}" def set_audio_volume(percent=50): if not 0 <= percent <= 100: raise ValueError("Volume percentage must be between 0 and 100.") try: max_volume = 65535 volume = int(max_volume * percent / 100) subprocess.run(["nircmd.exe", "setsysvolume", str(volume)], check=True) except subprocess.CalledProcessError as e: return f"Error setting volume: {e}" except FileNotFoundError: return "nircmd.exe not found. Ensure it is in the same directory or added to PATH." # Example usage set_audio_volume(34) volume = get_audio_volume() print(f"Current Volume: {volume}") I am not the maker of this exe tools, so use at your own risk. | 2 | 0 |
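The accepted answer above sidesteps pycaw entirely. Since the question mentions running the calls inside a multiprocessing.Process with CoInitialize/CoUninitialize, a hedged sketch of that per-process COM lifecycle pattern may still be useful: initialize COM in the worker, then drop the COM references before uninitializing so their Release() still has a VTable. This is an assumption about the source of the error, not a confirmed fix.

```python
from ctypes import cast, POINTER
from comtypes import CLSCTX_ALL, CoInitialize, CoUninitialize
from pycaw.pycaw import AudioUtilities, IAudioEndpointVolume

def set_volume_in_worker(value_max_100):
    CoInitialize()  # initialize COM inside the worker process, before any pycaw call
    devices = interface = volume = None
    try:
        devices = AudioUtilities.GetSpeakers()
        interface = devices.Activate(IAudioEndpointVolume._iid_, CLSCTX_ALL, None)
        volume = cast(interface, POINTER(IAudioEndpointVolume))
        volume.SetMasterVolumeLevelScalar(int(value_max_100) / 100, None)
    finally:
        # Release the COM pointers while COM is still initialized
        volume = interface = devices = None
        CoUninitialize()
```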
79,249,385 | 2024-12-3 | https://stackoverflow.com/questions/79249385/xlsxwriter-creating-table-formulas-using-structural-references | I am trying to create a table in Excel using XLSX writer where a lot of the data is precomputed, but a few columns need running formulas. I am trying to use structural references (headers as reference) to improve readability of the formulas in the table. However, upon opening the generated file, I get a warning that the file has to be repaired and the formula is zero'd out. Here's code that has the same idea as my actual code and recreates my problem: import xlsxwriter workbook = xlsxwriter.Workbook('example.xlsx') worksheet = workbook.add_worksheet() data = [ ['SOH', 'SOO', 'Actual Order', 'Total'], # Headers [10, 5, 3, None], [20, 10, 7, None], [15, 8, 6, None] ] for row, row_data in enumerate(data): worksheet.write_row(row, 0, row_data) worksheet.add_table('A1:D4', { 'name': 'orderTable', 'columns': [ {'header': 'SOH'}, {'header': 'SOO'}, {'header': 'Actual Order'}, {'header': 'Total'} ] }) for row in range(1, 4): formula = '=orderTable[@[SOH]] + orderTable[@[SOO]] + orderTable[@[Actual Order]]' worksheet.write_formula(row, 3, formula) workbook.close() What I have tried so far: Confirmed that the table is created and named correctly in Excel Pasting the formula as is in the cell it would be placed -> in this case the formula works as intended Using direct cell references (E.g. B1, B2 etc) -> This actually works, but I prefer using structural references instead for readability I tried both with and without table name in front (e.g. [@[SOH]] and orderTable[@[SOH]] I tried removing/changes inner brackets and using quotation marks (' ') around the variable names I confirmed that the headers do not contain any odd characters that could cause issues in creating the references I tried using write_array_formula and some other formats without any luck Thanks in advance! | The structured formula that Excel displays in a Table isn't the formula that it stores. In addition to this Excel has changed the syntax of referring to table elements from [#This Row],[Column] to @[Column] over time but it still uses the former in the stored formula. XlsxWriter tries to account for this and in most cases it will modify and store the required formula correctly. For example the program would generate the required output if the formula was added to the table column parameters like this: import xlsxwriter workbook = xlsxwriter.Workbook('example.xlsx') worksheet = workbook.add_worksheet() data = [ ['SOH', 'SOO', 'Actual Order', 'Total'], # Headers [10, 5, 3, None], [20, 10, 7, None], [15, 8, 6, None] ] for row, row_data in enumerate(data): worksheet.write_row(row, 0, row_data) worksheet.add_table('A1:D4', { 'name': 'orderTable', 'columns': [ {'header': 'SOH'}, {'header': 'SOO'}, {'header': 'Actual Order'}, {'header': 'Total', 'formula': 'orderTable[@[SOH]] + orderTable[@[SOO]] + orderTable[@[Actual Order]]'} ] }) workbook.close() Output: This is explained, to some extent, in the XlsxWriter docs on table columns. If you wish to explicitly set the formula, like in the for() loop of your example you will need to explicitly expand the formula yourself to the following: for row in range(1, 4): formula = 'orderTable[[#This Row],[SOH]] + orderTable[[#This Row],[SOO]] + orderTable[[#This Row],[Actual Order]]' worksheet.write_formula(row, 3, formula) | 1 | 2 |
79,246,369 | 2024-12-3 | https://stackoverflow.com/questions/79246369/detect-filled-in-black-rectangles-on-patterned-background-with-python-opencv | I'm trying to detect the location of these filled-in black rectangles using OpenCV. Black rectangles on paper I have tried to find the contours of these, but I think the background lines are also detected as objects. Also the rectangles aren't fully seperated (sometimes they touch a corner), and then they are detected as one, but I want the location of each of them seperately. Here are the results I got, from the following code. import numpy as np import cv2 image = cv2.imread("page.jpg") result = image.copy() gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY) thresh = cv2.adaptiveThreshold(gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV,51,9) cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: cv2.drawContours(thresh, [c], -1, (255,255,255), -1) kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3)) opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=4) cnts = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: x,y,w,h = cv2.boundingRect(c) cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 3) cv2.imshow('thresh', thresh) cv2.imshow('opening', opening) cv2.imshow('image', image) cv2.waitKey() Threshold Opening Result As you can see, in the Opening image, the white rectangles are joined with the black ones, but I want them seperately. Then in the Result, it just detects a contour around the entire page. | As I said in the comments, no need for adaptive thresholding. Simple problems are best solved with simple solutions. A simple threshold would suffice in your case, but I guess you did not want to do that because of the lines? Is there any reason why you went with adaptive thresholding? Here's my appraoch: Read image Define threshold Generate an erosion rectangle with a width which you will need later Erode the binary mask Get contours For every contour, correct the properties of the rectangle, to account for the erosion step (more on this in the next two pictures) Print out, draw, save, go crazy... Step number 6 is important, here's why: the erosion takes a chunk of your area, width and height. This can be clearly seen in the already accepted answer. Before the correction, the rectangles will look like this: Notice what I mentioned earlier, bad dimensions. 
By taking into account what has been eroded, we can closely estimate the actual rectangle: I hope this helps you further, here's the code as a dump: import cv2 %matplotlib qt import matplotlib.pyplot as plt import numpy as np im = cv2.imread("stack.png") # read as BGR imGray = cv2.imread("stack.png", cv2.IMREAD_GRAYSCALE) # read as gray kernelSize = 20 # define the size of the erosion rectangle smallRectangle = np.ones((kernelSize, kernelSize), dtype=np.uint8) # define the small erosion rectangle mask = (imGray<130).astype("uint8") # get the mask based on a threshold, I think OTSU might work here as well eroded = cv2.erode(mask, smallRectangle) # erode the image contours, _ = cv2.findContours(eroded, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE) # find contours for i, cnt in enumerate(contours): # for every cnt in cnts # parse the rectangle parameters x,y,w,h = cv2.boundingRect(cnt) # correct the identified boxes to account for erosion x -= kernelSize//2 y -= kernelSize//2 w += kernelSize h += kernelSize box = (x,y,w,h) # assemble box back # draw rectangle im = cv2.rectangle(im, box, (0,0,255), 3) # print out results print(f"Rectangle {i}:\n center at ({x+w//2}, {y+h//2})\n width {w} px\n height {h} px") Results of the printing: Rectangle 0: center at (87, 471) width 42 px height 74 px Rectangle 1: center at (158, 403) width 38 px height 75 px Rectangle 2: center at (229, 403) width 45 px height 78 px Rectangle 3: center at (121, 333) width 41 px height 77 px Rectangle 4: center at (47, 259) width 43 px height 75 px Rectangle 5: center at (82, 191) width 46 px height 77 px | 2 | 2 |
79,248,409 | 2024-12-3 | https://stackoverflow.com/questions/79248409/how-to-conditionally-fill-between-two-line-charts-with-different-colours-using-p | I'm trying to fill with colour the space between two line charts (between col1 and col2) as follows: Desired output: when col1 is above col2, fill with color green. if col1 is under col2, fill with color red. I have tried this: def DisplayPlot(df): month = df['month'].tolist() col1 = df['col1'].tolist() col2 = df['col2'].tolist() col3 = df['col3'].tolist() fig.add_trace(go.Scatter(x=month, y=col1, name='col1', line=dict(color='blue', width=4))) fig.add_trace(go.Scatter(x=month, y=col2, name = 'col2',line= dict(color='red', width=2))) fig.add_trace(go.Scatter(x=month, y=col3, name='col3',mode='lines',line = dict(color='#CDCDCD', width=2, dash='dot'))) # Edit the layout fig.update_layout( title=dict( text='Title' ), xaxis=dict( title=dict( text='Month' ) ), yaxis=dict( title=dict( text='Taux' ) ), ) st.plotly_chart(fig) But I have no idea how to add the colour based on the condition above. Here's the dataset: MONTH COL1 COL2 COL3 Jan 0.1555 0.1256 0.1863 Feb 0.1097 0.119 0.1863 Apr 0.1459 0.175 0.1863 Mar 0.2804 0.2634 0.1863 May 0.267 0.1855 0.1863 | You'll have to understand that the line plot contains segments and you'll have to handle those segments separately to get the desired result. By segments I mean the regions between the lines of col1 and col2. The segments are separated from each other by the intersections of the lines. The following conditions are needed to be handled for each segment: col1 is above col2 throughout the segment col2 is above col1 throughout the segment Handling the intersections during which the above conditions get reversed (We need to find these intersections using linear interpolation) Handling parallel lines to avoid division by zero errors (refer to the code below for more clarity) Here's the DisplayPlot function to handle the above conditions: def DisplayPlot(df): if df["month"].dtype == "object": df["month"] = ( pd.Categorical( df["month"], categories=["Jan", "Feb", "Mar", "Apr", "May", "Jun"], ordered=True, ).codes + 1 ) month = df["month"] col1 = df["col1"] col2 = df["col2"] col3 = df["col3"] fig = go.Figure() fig.add_trace( go.Scatter(x=month, y=col1, name="col1", line=dict(color="blue", width=4)) ) fig.add_trace( go.Scatter(x=month, y=col2, name="col2", line=dict(color="red", width=2)) ) fig.add_trace( go.Scatter( x=month, y=col3, name="col3", mode="lines", line=dict(color="#CDCDCD", width=2, dash="dot"), ) ) # Iterate through segments and fill regions conditionally for i in range(1, len(month)): x1, x2 = month[i - 1], month[i] y1_col1, y2_col1 = col1[i - 1], col1[i] y1_col2, y2_col2 = col2[i - 1], col2[i] # Check if col1 is above col2 throughout the segment if y1_col1 > y1_col2 and y2_col1 > y2_col2: fig.add_trace( go.Scatter( x=[x1, x2, x2, x1], y=[y1_col1, y2_col1, y2_col2, y1_col2], fill="toself", fillcolor="rgba(0, 255, 0, 0.3)", # Green fill mode="none", showlegend=False, ) ) # Check if col2 is above col1 throughout the segment elif y1_col1 < y1_col2 and y2_col1 < y2_col2: fig.add_trace( go.Scatter( x=[x1, x2, x2, x1], y=[y1_col2, y2_col2, y2_col1, y1_col1], fill="toself", fillcolor="rgba(255, 0, 0, 0.3)", # Red fill mode="none", showlegend=False, ) ) else: # Handle crossing lines within the segment # Avoid division by zero by checking if the difference is non-zero denominator = (y2_col1 - y1_col1) - (y2_col2 - y1_col2) if denominator != 0: # Find the intersection point using linear 
interpolation intersect_x = x1 + (x2 - x1) * ((y1_col2 - y1_col1) / denominator) intersect_y = y1_col1 + (intersect_x - x1) * ( (y2_col1 - y1_col1) / (x2 - x1) ) # Fill green where col1 is above if y1_col1 > y1_col2: fig.add_trace( go.Scatter( x=[x1, intersect_x, intersect_x, x1], y=[y1_col1, intersect_y, intersect_y, y1_col2], fill="toself", fillcolor="rgba(0, 255, 0, 0.3)", mode="none", showlegend=False, ) ) # Fill red where col2 is above fig.add_trace( go.Scatter( x=[intersect_x, x2, x2, intersect_x], y=[intersect_y, y2_col1, y2_col2, intersect_y], fill="toself", fillcolor="rgba(255, 0, 0, 0.3)", mode="none", showlegend=False, ) ) else: # Fill red where col2 is above fig.add_trace( go.Scatter( x=[x1, intersect_x, intersect_x, x1], y=[y1_col2, intersect_y, intersect_y, y1_col1], fill="toself", fillcolor="rgba(255, 0, 0, 0.3)", mode="none", showlegend=False, ) ) # Fill green where col1 is above fig.add_trace( go.Scatter( x=[intersect_x, x2, x2, intersect_x], y=[intersect_y, y2_col2, y2_col1, intersect_y], fill="toself", fillcolor="rgba(0, 255, 0, 0.3)", mode="none", showlegend=False, ) ) else: # If lines are parallel, fill based on initial conditions if y1_col1 > y1_col2: fig.add_trace( go.Scatter( x=[x1, x2, x2, x1], y=[y1_col1, y2_col1, y2_col2, y1_col2], fill="toself", fillcolor="rgba(0, 255, 0, 0.3)", mode="none", showlegend=False, ) ) else: fig.add_trace( go.Scatter( x=[x1, x2, x2, x1], y=[y1_col2, y2_col2, y2_col1, y1_col1], fill="toself", fillcolor="rgba(255, 0, 0, 0.3)", mode="none", showlegend=False, ) ) month_mapping = {1: "Jan", 2: "Feb", 3: "Mar", 4: "Apr", 5: "May", 6: "Jun"} fig.update_layout( title=dict(text="Comparison of col1 and col2"), xaxis=dict( title=dict(text="Month"), tickvals=list(month_mapping.keys()), ticktext=list(month_mapping.values()), ), yaxis=dict(title=dict(text="Values")), showlegend=True, ) st.plotly_chart(fig) You can find a full working example here with the corresponding Streamlit application deployed here. Update1: As you have shared the months are strings and not numbers - you can just add this at the start of the function and it should work (updated the code above as well): def DisplayPlot(df): if df["month"].dtype == "object": df["month"] = ( pd.Categorical( df["month"], categories=["Jan", "Feb", "Mar", "Apr", "May", "Jun"], ordered=True, ).codes + 1 ) . . . Also, you can have more months in the categories as per your requirement. I have kept it to the minimum for brevity. Update2: We can use a mapping dictionary to replace the numbers with month names during the final layout update as follows (updated the original code as well): . . . month_mapping = {1: "Jan", 2: "Feb", 3: "Mar", 4: "Apr", 5: "May", 6: "Jun"} fig.update_layout( title=dict(text="Comparison of col1 and col2"), xaxis=dict( title=dict(text="Month"), tickvals=list(month_mapping.keys()), ticktext=list(month_mapping.values()), ), yaxis=dict(title=dict(text="Values")), showlegend=True, ) st.plotly_chart(fig) . . . | 2 | 2 |
79,248,880 | 2024-12-3 | https://stackoverflow.com/questions/79248880/networkx-find-largest-fully-connected-subset-of-points | I am new to graph theory and am trying to find the largest subset of x,y coordinate points in which all points in the subset are within a specified distance of one another. I am currently using the nx.k_core(G) function in the following code: import networkx as nx import geopandas as gpd import numpy as np # Initialize graph G = nx.Graph() # Create nodes for idx, point in enumerate(gdf.geometry): G.add_node(idx, pos=(point.x, point.y)) # Make connections based on 3-mile threshold threshold = 3 * 5280 for i in range(len(gdf)): for j in range(i + 1, len(gdf)): point1 = gdf.geometry.iloc[i] point2 = gdf.geometry.iloc[j] distance = point1.distance(point2) if distance < threshold: G.add_edge(i, j, weight=distance) # Find subset of fully connected points core_g = nx.k_core(G) gdf_subset = gdf.iloc[np.array(core_g.nodes())] # Check distances distances = gdf_subset .geometry.apply(lambda geom: gdf_subset .distance(geom)) distances = [x for x in distances.to_numpy().flatten() if x >threshold] print(distances) Edit: The following is an example of the gdf where the function does work to find the subset: x = [7014374.9196544 , 7004008.53036228, 7004042.49746631, 7019567.77240716, 7018905.07998047, 7000518.63143169, 7001300.93303323, 6998409.97803232, 6998506.74716872, 6998447.48736545] y = [1153370.50279499, 1165484.07580641, 1163334.84053259, 1165224.27028039, 1165504.92948242, 1159125.54067565, 1158846.3225193 , 1160039.91516453, 1157673.01842574, 1157635.65641456] geometry = gpd.points_from_xy(x=x, y=y) gdf = gpd.GeoDataFrame(geometry=geometry,crs="EPSG:2227") But using a much larger dataset of x,y points, it does not properly filter the points: x=[6976758.463773819, 6977093.184559382, 6976797.36470676, 6972610.464597752, 6974361.340182712, 6982474.642210263, 6986062.801006353, 6970849.624019324, 6968166.810391851, 6966362.750706404, 6969184.980095293, 6972262.814811845, 6966844.204922337, 6966888.478964512, 6967244.422170041, 6969614.686502094, 6990356.158245409, 6985244.4455984095, 6987939.80134037, 6981684.035262955, 6981620.091957918, 6977194.833151057, 6973647.5580193, 6972401.219414424, 6972181.1010827515, 6977545.751431958, 6977276.477270048, 6977279.180730501, 6982665.23935931, 6983265.274714659, 6987455.12297157, 6988951.103273708, 6988651.907310768, 6992239.454862614, 6988124.517805978, 6988280.244704331, 6987825.826193168, 6987825.826193168, 6977950.128964132, 6972212.611515398, 6983125.573658108, 6983462.728727658, 6975674.048946035, 6977768.29780711] y=[1167471.315568185, 1165071.4983622595, 1164848.4903694615, 1164786.6823113142, 1167763.77605829, 1165261.2273441725, 1165351.8601757414, 1162465.3963376773, 1161806.8777149902, 1162436.5792886976, 1153514.1650730975, 1147692.7897634686, 1149836.5013905086, 1146776.4322807193, 1147000.2090415196, 1144556.914447512, 1158457.9077438777, 1160129.0363665363, 1159951.3434887768, 1158144.228039553, 1162406.3500516259, 1158222.9603871382, 1155292.0210851224, 1152540.9439513667, 1153266.4518821521, 1152726.215971334, 1152722.2165656704, 1152540.0718662178, 1152584.123681064, 1152483.8434239987, 1152474.3879562814, 1152497.1834639476, 1152492.6179683174, 1152729.7582206137, 1147820.598499059, 1147422.158551845, 1147779.6127970135, 1147779.6127970135, 1147667.4658725595, 1151117.1373719363, 1141878.4505136542, 1139369.3168056472, 1139653.9060171435, 1139757.820359667] geometry = gpd.points_from_xy(x=x, y=y) gdf = 
gpd.GeoDataFrame(geometry=geometry,crs="EPSG:2227") What I need is the largest subset of points in which each point is connected to all others in the subset (via the 3-mile threshold). Is there a better approach to achieve this? Thanks! | The K-core is not "the largest subset of x,y coordinate points in which all points in the subset are within a specified distance of one another". It is the maximal subgraph that contains nodes of degree k or more. In networkx's implementation of k_core, the default k=None will yield the subgraph with the largest core number. You rather want to find a clique, specifically, the max_clique: nodes = nx.approximation.max_clique(G) # {0, 1, 2, 3, 4, 5, 6, 7, 17, 19, 20, 21, 25, 26, 27} Checking the distances: nodes = nx.approximation.max_clique(G) gdf_subset = gdf.iloc[list(nodes)] # Check distances distances = gdf_subset .geometry.apply(lambda geom: gdf_subset .distance(geom)) distances = [x for x in distances.to_numpy().flatten() if x >threshold] print(distances) # [] | 1 | 1 |
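One caveat to add to the answer above: nx.approximation.max_clique is a heuristic and is not guaranteed to return the true maximum clique. If an exact result matters and the graph is small enough, the maximal cliques can be enumerated instead, reusing the same G and gdf built earlier:

```python
import networkx as nx

# nx.find_cliques yields every maximal clique; the longest one is the maximum clique
best_clique = max(nx.find_cliques(G), key=len)
gdf_subset = gdf.iloc[list(best_clique)]
```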
79,247,091 | 2024-12-3 | https://stackoverflow.com/questions/79247091/how-to-efficiently-compute-and-process-3x3x3-voxel-neighborhoods-in-a-3d-numpy-a | I am working on a function to process 3D images voxel-by-voxel. For each voxel, I compute the difference between the voxel value and its 3x3x3 neighborhood, apply distance-based scaling, and determine a label based on the maximum scaled difference. However, my current implementation is slow, and I'm looking for ways to optimize it. import numpy as np from tqdm import tqdm import itertools def create_label(image_input): # Define neighborhood offsets and distances locations = np.array([tuple([dx, dy, dz]) for dx, dy, dz in itertools.product(range(-1, 2), repeat=3)]) euclidian_distances = np.linalg.norm(locations, axis=1) euclidian_distances_inverse = np.divide(1, euclidian_distances, out=np.zeros_like(euclidian_distances), where=euclidian_distances != 0) image_input = image_input.astype(np.float32) label = np.zeros_like(image_input) image_shape = image_input.shape # Process each voxel (excluding the edges) for i in tqdm(range(1, image_shape[0] - 1)): for j in range(1, image_shape[1] - 1): for k in range(1, image_shape[2] - 1): # Extract 3x3x3 neighborhood values all_values_neighborhood = [image_input[loc[0] + i, loc[1] + j, loc[2] + k] for loc in locations] # Compute differences and scale by distances centre_minus_rest = image_input[i, j, k, None] - np.array(all_values_neighborhood) if np.all(centre_minus_rest < 0): label[i, j, k] = 13 else: centre_minus_rest_divided = centre_minus_rest * euclidian_distances_inverse centre_minus_rest_divided[13] = -100 # Ignore the center value class_label = np.argmax(centre_minus_rest_divided) label[i, j, k] = class_label return label # example 3d array image_input = np.random.rand(10, 10, 10).astype(np.float32) # run the function label = create_label(image_input) print(label) | You could use a sliding_window_view, and vectorize the for loops. One attempt def create_label2(image_input): # Define neighborhood offsets and distances locations = np.array([tuple([dx, dy, dz]) for dx, dy, dz in itertools.product(range(-1, 2), repeat=3)]) euclidian_distances = np.linalg.norm(locations, axis=1) euclidian_distances_inverse = np.divide(1, euclidian_distances, out=np.zeros_like(euclidian_distances), where=euclidian_distances != 0) image_input = image_input.astype(np.float32) label = np.zeros_like(image_input) image_shape = image_input.shape neighborhood = np.lib.stride_tricks.sliding_window_view(image_input, (3,3,3)) # Process each voxel (excluding the edges) centre_minus_rest = image_input[1:-1, 1:-1, 1:-1, None,None,None] - neighborhood centre_minus_rest_divided = centre_minus_rest * euclidian_distances_inverse.reshape(1,1,1,3,3,3) centre_minus_rest_divided[:,:,:,1,1,1] = -100 class_label = np.argmax(centre_minus_rest_divided.reshape(image_shape[0]-2, image_shape[1]-2, image_shape[2]-2,27), axis=3) label[1:-1,1:-1,1:-1] = np.where((centre_minus_rest<0).all(axis=(3,4,5)), 13, class_label) return label On my machine, that is a Γ70 gain What it does is building a (P-2)Γ(H-2)Γ(W-2)Γ3Γ3Γ3 array of all 3x3x3 cubes around each z,y,x points in 1..P-1, 1..H-1, 1..W-1 From there, it is then possible to computes in single vectorized operations what you did in for loops I said "attempt", which may not seem very confident, because that is quite memory consuming. Not sliding_window_view itself, that costs nothing: it is just a view, with no own data in memory (the memory is the one of image_input. 
So neighborhood[12,13,14,1,1,1] is the same as neighborhood[13,14,15,0,0,0]. Not just the same value: the same data, the same memory place. If you neighborhood[12,13,14,1,1,1]=999, and then print(neighborhood[13,14,15,0,0,0]) or print(neighborhood[13,12,13,0,2,2]), it prints your 999 back. So neighborhood cost nothing. And its shape (that amounts to, roughly, 27 times the volume of initial image) is "virtual". But, still, even if it is not using actual memory, that is an array of almost 27 times more values than in the image (each being repeated 27 times β "almost" part being because of edges). So, as soon as you do some computation on it, you get such an array, with this times all values in its own place in memory. In other words, if you start from a 100x100x100 image, that uses 1000000 memory places (4000000 bytes, since those are np.float32), neighborhood cost nothing: it is the same 4000000 bytes that makes its almost 27000000 values. But centre_minus_rest, and then the subsequent computation do occupy "almost" 108000000 bytes in memory. So, depending on the size of your image (and the memory available on your machine), we may have replaced a cpu-time problem by a memory problem by doing that. Compromises are possible tho. you could keep your first first loop (for i in range...), and then vectorize the computation for all j and k. Anyway, inner loops are the one that cost the more, and need to be vectorized. Not vectorizing outer loop is not that serious. In my 100x100x100 example, that means that I have 100 non-vectorized iterations and 1010000 vectorized ones... it is not a huge regression for the previous 1010100 vectorized for iterations for i in range(1, image_shape[0]-1): centre_minus_rest = image_input[i, 1:-1, 1:-1, None,None,None] - neighborhood[i-1] centre_minus_rest_divided = centre_minus_rest * euclidian_distances_inverse.reshape(1,1,3,3,3) centre_minus_rest_divided[:,:,1,1,1] = -100 class_label = np.argmax(centre_minus_rest_divided.reshape(image_shape[1]-2, image_shape[2]-2,27), axis=2) label[i,1:-1,1:-1] = np.where((centre_minus_rest<0).all(axis=(2,3,4)), 13, class_label) That is as fast on my machine. Still have the same Γ70 gain, with no measurable decrease. And yet, intermediates values (centre_minus_rest, ...) are 3 times smaller in memory. Note that this is still not ideal code. I should pick size once for all between having 3x3x3 subarrays of values, euclideans distances, ..., or having a 1D array of size 27. (But sliding_window_view needs a 3x3x3 array, otherwise it can do its magic => reshaping into (P-2,H-2,W-2,27) would necessitate a full copy ; and on the contrary, .argmax can't work along more than 1 axis, and needs length-27 arrays) Also, the itertools trick is very pythonic, but not at all numpythonic. Something like np.arange(3)[:,None,None]**2+np.arange(3)[None,:,None]**2+np.arange(3)[None,None,:]... But that is anyway outside the costly loop It must also be noted that one inconvenient of vectorization is that it sometimes forces you to compute more things that you need (but if you compute twice too much things, but 100 times faster, that is a x50 time gain). Here, my np.where((centre_minus_rest<0).all(axis=(2,3,4)), 13, class_label) means that for each value for which answer to that is 13, I have computed class_label in vain. Which your for for for if wasn't. So take it more as a first lead on how to do it, more than as a definitive solution. But, still, Γ70 is a good start. | 1 | 2 |
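As a small follow-up to the aside at the end of the answer above, the 27-entry inverse-distance table can be built without itertools in a purely NumPy way (this runs once, outside the hot loop, so it is cosmetic rather than a speed-up):

```python
import numpy as np

# Vectorized construction of the 3x3x3 inverse-distance table
offsets = np.arange(-1, 2)
dist = np.sqrt(offsets[:, None, None] ** 2 + offsets[None, :, None] ** 2 + offsets[None, None, :] ** 2)
euclidian_distances_inverse = np.divide(1.0, dist, out=np.zeros_like(dist), where=dist != 0)
# dist.reshape(27) follows the same ordering as itertools.product(range(-1, 2), repeat=3)
```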
79,249,406 | 2024-12-3 | https://stackoverflow.com/questions/79249406/is-it-possible-to-find-the-average-of-a-group-of-numbers-and-return-the-index-of | Specifically: Write a function (in python, java or pseudocode) average(nums) that prints the mean of the numbers in list nums, and returns a tuple: The tuple's first element is the index of the number furthest from the mean. The tuple's second element is the number furthest from the mean. Can this be done with one loop? I know how to write the function with two loops, and using one or two loops are both O(n) time, so the code doesn't technically become more efficient with one loop. But I'm just curious if it can be done. A party trick, if you will. Some of my thought process: So computing the average as you iterate through the list is as simple as total += nums[i], then avg = total/(i+1). And I guess you could calculate which number is furthest from your current average, but by the time you get to the end of the list an extreme number could make your code incorrect , I think? | Is it possible to find the average of a group of numbers and return the index of the number furthest from the average with one loop? Yes. The number furthest from the mean will be either the largest or the smallest in the sample. With a single pass through the data, you can compute the average and track the indices of the largest and smallest elements seen so far. At the end, you'll know the indices of the overall largest and smallest. You can check which of these is further from the mean, and return the information for that one. | 1 | 4 |
79,249,078 | 2024-12-3 | https://stackoverflow.com/questions/79249078/efficient-algorithm-that-returns-a-list-of-unique-lists-given-a-list-of-of-lists | Given a python list of lists containing numbers i.e lists = [ [1, 2], [2, 1], [3, 4] ], the problem is to return a list of all unique lists from the input list. A list is considered a duplicate if it can be generated from another list by reordering the items in the list. i.e [2, 1] is a duplicate of [1, 2]. Given the input [ [1, 2], [2, 1], [3, 4] ], the output should be [ [1, 2], [3, 4]] . Any reordering of [ [1, 2], [3, 4]] is also correct i.e [1, 2], [4, 3]], My approach is to first sort all lists in the input list, convert the lists to tuples, use a set data structure to filter out duplicate tuples, and finally convert the unique tuples back to lists. The time complexity for sorting all lists is O(m*nlogn) where m is the number of lists and n is the size of each list(assuming same size lists). Converting the lists to tuples takes O(mn) time, creating a set from the tuples takes O(mn), converting the set of unique tuples back to lists also takes O(mn) making the total time complexity O(mnlogn + mn + mn + mn) = O(mnlogn). Can we do any better than O(mnlogn)? Code: def find_unique(lists): sorted_lists = [ sorted(lst) for lst in lists] tuples = [tuple(lst) for lst in sorted_lists] unique_tuples = set(tuples) return list(unique_tuples) | You can get an O(m*n) solution as long as the "key" you are using is O(m*n). This can be accomplished in two ways. If the inner lists can't contain duplicates, then a set of frozen sets is an elegant solution. Note, frozenset(mylist) is O(n): def unique(lists): seen = set() result = [] for sub in lists: key = frozenset(sub) if key not in seen: result.append(sub) seen.add(key) return result The above returns the first seen "unique" list that is actually in the input. If any order of a unique list is valid, even an order not seen in the original input (I presume this is the case because that is what your solution does) then perhaps more tersely: def unique(lists): return list(map(list, set(map(frozenset, lists)))) If the inner lists can contain duplicates, then the above won't work, but you can use a collections.Counter which can act like a multiset, then use a frozent-set of the items in the counter: from collections import Counter def unique(lists): seen = set() result = [] for sub in lists: key = frozenset(Counter(sub).items()) if key not in seen: result.append(sub) seen.add(key) return result Note, if n is smallish, I bet the sorted solution is faster. | 4 | 3 |
79,249,060 | 2024-12-3 | https://stackoverflow.com/questions/79249060/is-there-a-portable-way-to-deduce-the-current-python-interpreter | I have a Python script that attempts to run a subprocess that runs the same interpreter as the currently-running interpreter. The subprocess's interpreter needs to be the same executable as the currently running interpreter's. Specifically, it needs to be the correct version even when multiple versions of Python are installed. Here are some solutions that almost work and the reasons they aren't quite complete: Using sys.executable: The expected result of sys.executable differs from it's actual implementation. It relies on potentially-incorrect paths from sys.argv and the PATH environment variable. For example, if sys.argv == [''] and PATH contains a bin directory which has python3 in it, sys.executable will return the path to that python3 interpreter even if it is NOT the same as the current interpreter (IE it refers to a newer version than the current interpreter). Deducing the path from os.__file__: This approach works fine for Linux systems where the path to the interpreter can (typically) be safely deduced from this path (os.__file__ might be something like /usr/lib/python3.8/os.py so we may deduce that the correct interpreter is /usr/bin/python3.8). However, this deduction is only safe when the installation process for the current Python interpreter was "normal" (IE this does not work for embedded installations, custom Python builds installed to nonstandard directories, etc). In addition, I am unsure that it is possible to implement this deduction in a way that is OS-agnostic. In theory, an ideal implementation would leverage OS-dependent utilities to deduce the path of the currently-running executable. This would likely mean reading /proc/self/exe or /proc/{pid}/exe on Linux or GetModuleFileName(GetCurrentModule()) on Windows. Is there an easier way to do this? | psutil package claims to be portable across several platforms Can be tested with a little recursive script import psutil import subprocess,os proc = psutil.Process().cmdline() print(f"{os.getpid()} {psutil.Process().exe()} {proc}") # set next command interpreter to the one found proc[0]=psutil.Process().exe() # run that command, forever :-p subprocess.run(proc) Running it as python3.9 proc.py 2>/dev/null 13148 /usr/bin/python3.9 ['python3.9', 'proc.py'] 13149 /usr/bin/python3.9 ['/usr/bin/python3.9', 'proc.py'] 13150 /usr/bin/python3.9 ['/usr/bin/python3.9', 'proc.py'] 13151 /usr/bin/python3.9 ['/usr/bin/python3.9', 'proc.py'] 13152 /usr/bin/python3.9 ['/usr/bin/python3.9', 'proc.py'] ^C | 1 | 1 |
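A more direct, non-recursive sketch of the same psutil idea: resolve the running interpreter's real executable and hand it to the subprocess (the -c payload is just a placeholder command):

```python
import subprocess
import psutil

# psutil.Process() with no pid refers to the current process; .exe() asks the OS for
# the executable path instead of trusting sys.argv/PATH the way sys.executable does
interpreter = psutil.Process().exe()
subprocess.run([interpreter, "-c", "import sys; print(sys.version)"])
```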
79,248,789 | 2024-12-3 | https://stackoverflow.com/questions/79248789/polars-python-how-to-change-the-number-of-conditions-inputted-when-making-a-ne | I have large datasets (ranging from 100k - 4 million rows) where I am looking for different relevant codes across multiple columns. For example, if I wanted to identify each row which has some start to a string '302' I would do: import polars as pl df = pl.DataFrame({ 'Codes_1': ['302E513', '301E513', '302E512'], 'Codes_2': ['303E513', '306E510', '302E512']}).lazy() conditions = ['302E513', '306E510'] column_names = ['Codes_1', 'Codes_2'] #create new column df = df.with_columns( pl.when(pl.any_horizontal( pl.col(column_names).str.starts_with(conditions[0]), pl.col(column_names).str.starts_with(conditions[1]))) .then(1.0) .otherwise(0.0) .alias('Column_name') ) It is really annoying when I am looking for say 4 codes instead of 2 to have to type in each of the codes to form my new column: import polars as pl df = pl.DataFrame({ 'Codes_1': ['302E513', '301E513', '302E512'], 'Codes_2': ['303E513', '306E510', '302E512']}).lazy() conditions = ['302E513', '306E510', '5164E23', '302E514'] column_names = ['Codes_1', 'Codes_2'] #create new column df = df.with_columns( pl.when(pl.any_horizontal( #Tedious part pl.col(column_names).str.starts_with(conditions[0]), pl.col(column_names).str.starts_with(conditions[1]), pl.col(column_names).str.starts_with(conditions[2]), pl.col(column_names).str.starts_with(conditions[3]) )) .then(1.0) .otherwise(0.0) .alias('Column_name') ) I know that this can be done with pandas by updating a mask with a for loop import pandas as pd df = pd.DataFrame({ 'Codes_1': ['302E513', '301E513', '302E512'], 'Codes_2': ['303E513', '306E510', '302E512']}) conditions = ['302E513', '306E510'] column_names = ['Codes_1', 'Codes_2'] #loop to create new column mask = False for code in conditions: mask |= df[column_names].eq(code).any(axis=1) df['Column_name'] = 0.0 df.loc[mask, 'Column_name'] = 1.0 print(df['Column_name']) And I could change the number of conditions to any number and this code would execute. However, I would much rather use polars as it is faster and does not overflow the RAM on my machine for larger datasets. Any help is appreciated. | You could replace the multiple str.starts_with with a single regex and str.contains: df.with_columns( pl.when(pl.any_horizontal( pl.col(column_names).str.contains(f"^({'|'.join(conditions)})"), )) .then(1.0) .otherwise(0.0) .alias('Column_name') ) Or use a loop: df.with_columns( pl.when(pl.any_horizontal( pl.col(column_names).str.starts_with(c) for c in conditions )) .then(1.0) .otherwise(0.0) .alias('Column_name') ) Intermediate: # f"^({'|'.join(conditions)})" '^(302E513|306E510|5164E23|302E514)' Output (non-lazy): βββββββββββ¬ββββββββββ¬ββββββββββββββ β Codes_1 β Codes_2 β Column_name β β --- β --- β --- β β str β str β f64 β βββββββββββͺββββββββββͺββββββββββββββ‘ β 302E513 β 303E513 β 1.0 β β 301E513 β 306E510 β 1.0 β β 302E512 β 302E512 β 0.0 β βββββββββββ΄ββββββββββ΄ββββββββββββββ | 3 | 2 |
79,248,296 | 2024-12-3 | https://stackoverflow.com/questions/79248296/pandas-dataframe-combine-cell-values-as-strings | I have a dataframe: Email | Col1 | Col2 | Col3 | Name -------------------------------------------------------------------- [email protected] | CellStr11 | 1.4 | CellStr13 | John Cena [email protected] | CellStr11 | 1.2 | CellStr13 | Matt Smith [email protected] | CellStr21 | 1.2 | CellStr23 | John Cena I need to aggregate the values of cells of Col1, Col2 and Col3 by Name. If row[Name] == row[Name] then: Email | Col1 | Col2 | Col3 | Name ------------------------------------------------------------------------------------------ [email protected] | CellStr11 / CellStr21 | 1.4 / 1.2 | CellStr13 / CellStr23 | John Cena [email protected] | CellStr11 | 1.2 | CellStr13 | Matt Smith I don't have to care about any other columns, their data can be lost. First I though to about this apporach: Split the dataframe in such a way that I get the other Name matching row (learnt how to do that here Append the columns to the end of the original dataframe The issue here was that not all rows had the corresponding row with matching Name, which would result in some rows having the additional columns, some not (correct me if I'm wrong, if this apporach is ok that would also be fine, even better). Is there a simple way to do it with df.apply and lambda function? Or any other more complex way? I could use: g2[['B', 'C']].apply(lambda x: x / x.sum()) but instead of summing just join as strings, but how can I differentiate between x.B and x.C? | If your dataframe is df: df['Col2'] = df['Col2'].astype('str') # all the columns must be strings gp_col = df.groupby(["Name"])[['Col1', 'Col2', 'Col3']] \ .agg(lambda x: " / ".join(x)) \ .reset_index() display(gp_col) gives: | 1 | 1 |
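The output of display(gp_col) was shown as an image in the original answer; for the sample frame in the question it should come out roughly as follows (exact column order and spacing may differ):

```
         Name                   Col1       Col2                   Col3
0   John Cena  CellStr11 / CellStr21  1.4 / 1.2  CellStr13 / CellStr23
1  Matt Smith              CellStr11        1.2              CellStr13
```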
79,247,499 | 2024-12-3 | https://stackoverflow.com/questions/79247499/pandas-dataframe-finding-row-comparing-two-cell-values | I have a dataframe: Email | ... | Name -------------------------------------- [email protected] | ... | John Cena [email protected] | ... | John Cena I need to find a row that matches the Name column with the Email column when email_cell.split("@")[0] == name_cell.lower().replace(" ", ".") I tried dataframe.loc[dataframe["Email"].str.contains(dataframe["Name"].replace(" ", "."))] and other ways, but I cannot use a Series as if it were a string. Is there a way to do this? | Take the email usernames by splitting on '@', and convert the names to lowercase with spaces replaced by dots: df[df['Email'].str.split('@').str[0] == df['Name'].str.lower().str.replace(' ', '.')] Output Matched rows: Email Name 0 [email protected] John Cena 2 [email protected] Randy Orton You can also try apply with a lambda: df[df.apply(lambda x: x['Email'].split('@')[0] == x['Name'].lower().replace(' ', '.'), axis=1)] | 1 | 2 |
79,246,676 | 2024-12-3 | https://stackoverflow.com/questions/79246676/plot-contours-from-discrete-data-in-matplotlib | How do I make a contourf plot where the areas are supposed to be discrete (integer array instead of float)? The values should be discretely mapped to color indices. Instead, matplotlib just scales the result across the whole set of colors. Example: import numpy as np from matplotlib import pyplot as plt axes = (np.linspace(-2, 2, 100), np.linspace(-2, 2, 100)) xx, yy = np.meshgrid(*axes, indexing="xy") fig, ax = plt.subplots() z = np.abs(xx * yy).astype(int) # values 0, 1, 2, 3, 4 z[z==0] = 4 ax.contourf(xx, yy, z, cmap="Set1") | Now I got it :) Thanks @jared, pcolormesh was the right function, but I have to map the colors explicitly and pass the color array as the plotted variable: import numpy as np from matplotlib import pyplot as plt axes = (np.linspace(-2, 2, 100), np.linspace(-2, 2, 100)) xx, yy = np.meshgrid(*axes, indexing="xy") fig, ax = plt.subplots() z = np.abs(xx * yy).astype(int) # values 0, 1, 2, 3, 4 z[z==0] = 4 cmap = plt.get_cmap("Set1") z_color = cmap(z) # shape (100, 100, 4) with `z` as index ax.pcolormesh(xx, yy, z_color) | 1 | 1 |
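An alternative worth noting alongside the accepted answer (not part of it): keep z as the plotted variable and let a BoundaryNorm pin each integer to its own color, which also makes a colorbar straightforward. This maps the values 1 to 4 onto the first four Set1 colors, which differs slightly from the index-by-value mapping above:

```python
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.colors import BoundaryNorm, ListedColormap

axes = (np.linspace(-2, 2, 100), np.linspace(-2, 2, 100))
xx, yy = np.meshgrid(*axes, indexing="xy")
z = np.abs(xx * yy).astype(int)  # values 0, 1, 2, 3, 4
z[z == 0] = 4

cmap = ListedColormap(plt.get_cmap("Set1").colors[:4])  # one color per value 1..4
norm = BoundaryNorm([1, 2, 3, 4, 5], cmap.N)            # bin edges around the integers

fig, ax = plt.subplots()
mesh = ax.pcolormesh(xx, yy, z, cmap=cmap, norm=norm)
fig.colorbar(mesh, ax=ax, ticks=[1, 2, 3, 4])
```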
79,245,922 | 2024-12-3 | https://stackoverflow.com/questions/79245922/how-to-get-parameter-name-type-and-default-value-of-oracle-plsql-function-body | I have the following PLSQL code which I am processing with Antlr4 in Python. I ma having trouble extracting the function parameter name and related details. CREATE OR REPLACE FUNCTION getcost ( p_prod_id IN VARCHAR2 , p_date IN DATE) RETURN number AS The ParseTree output for this: ββ sql_script β β unit_statement β ββ create_function_body β β β "CREATE" (CREATE) β β β "OR" (OR) β β β "REPLACE" (REPLACE) β β β "FUNCTION" (FUNCTION) β β β function_name β β ββ identifier β β ββ id_expression β β ββ regular_id β β ββ "getcost" (REGULAR_ID) β β β "(" (LEFT_PAREN) β β β parameter β β β β parameter_name β β β ββ identifier β β β ββ id_expression β β β ββ regular_id β β β ββ "p_prod_id" (REGULAR_ID) β β β β "IN" (IN) β β ββ type_spec β β ββ datatype β β ββ native_datatype_element β β ββ "VARCHAR2" (VARCHAR2) β β β "," (COMMA) β β β parameter β β β β parameter_name β β β ββ identifier β β β ββ id_expression β β β ββ regular_id β β β ββ "p_date" (REGULAR_ID) β β β β "IN" (IN) β β ββ type_spec β β ββ datatype β β ββ native_datatype_element β β ββ "DATE" (DATE) β β β ")" (RIGHT_PAREN) β β β "RETURN" (RETURN) β β β type_spec β β ββ datatype β β ββ native_datatype_element β β ββ "number" (NUMBER) β The python code is: def enterParameter(self, ctx:PlSqlParser.ParameterContext): print(ctx.toStringTree(recog=parser)) param_name = None if ctx.parameter_name: if ctx.parameter_name.identifier() and ctx.parameter_name.identifier().id_expression(): param_name = ctx.parameter_name.identifier().id_expression().regular_id().getText() param_type = None if ctx.type_spec and ctx.type_spec.datatype: if ctx.type_spec.datatype.native_datatype_element(): param_type = ctx.type_spec.datatype.native_datatype_element().getText() default_value = None print(f"Parameter: {param_name}, Type: {param_type}, Def: {default_value}") The output from the print statement is: (parameter (parameter_name (identifier (id_expression (regular_id p_prod_id)))) IN (type_spec (datatype (native_datatype_element VARCHAR2)))) But I get the following error: if ctx.parameter_name.identifier() and ctx.parameter_name.identifier().id_expression(): AttributeError: 'function' object has no attribute 'identifier' If I use ctx.getChild(0).getText() to get the parameter name it works, but I don't want to rely on hardcoded indices, and I also do not understand why this isn't working. | The parameter_name in the context is a fuction, beacuse if the context can contain more than one parameter the integer parameter defines the index of it. so please change you code to this: ctx.parameter_name().identifier() And I wonder why your function has no return value, so for multiple parameters the param_name variable will always change. Of course, this could be intentional. | 1 | 1 |
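To make the answer above concrete, a hedged sketch of the corrected listener: the sub-rule accessors such as parameter_name and type_spec are methods on the context object, so they need call parentheses. The rule names follow the parse tree shown in the question; adjust them if your grammar differs.

```python
def enterParameter(self, ctx: PlSqlParser.ParameterContext):
    param_name = None
    if ctx.parameter_name() is not None:
        # parameter_name(), identifier(), id_expression(), regular_id() are accessor methods
        param_name = ctx.parameter_name().identifier().id_expression().regular_id().getText()

    param_type = None
    if ctx.type_spec() is not None and ctx.type_spec().datatype() is not None:
        native = ctx.type_spec().datatype().native_datatype_element()
        if native is not None:
            param_type = native.getText()

    default_value = None  # the sample function declares no parameter defaults
    print(f"Parameter: {param_name}, Type: {param_type}, Def: {default_value}")
```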
79,245,886 | 2024-12-2 | https://stackoverflow.com/questions/79245886/how-can-i-efficiently-scan-multiple-remote-parquet-files-in-parallel | Suppose I have urls, a list of Parquet URLs on S3. I observe that this collect_all runs in time proportional to the number of urls, i.e. O(len(urls)). Is there a better way to parallelize this task? import polars as pl pl.collect_all((pl.scan_parquet(url).filter(expr) for url in urls)) | Depending on what your expr is and what you're doing next, you might be better off with pl.concat([ pl.scan_parquet(url) for url in urls ]).filter(expr).collect() One difference is that instead of getting a list of distinct dfs, this one assumes you want them all combined into one and that they have the same schema. Another approach is to use asyncio import asyncio await asyncio.gather(*[pl.scan_parquet(url).filter(expr).collect_async() for url in urls]) I've seen cases where the asyncio.gather approach is slightly faster than the alternative. | 1 | 1 |
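One practical note on the asyncio variant in the answer above: the bare top-level await only works inside a notebook or IPython. In a plain script the same gather can be wrapped in asyncio.run, for example (urls and expr as in the question, collect_parallel is just an illustrative name):

```python
import asyncio
import polars as pl

async def collect_parallel(urls, expr):
    return await asyncio.gather(
        *[pl.scan_parquet(url).filter(expr).collect_async() for url in urls]
    )

dfs = asyncio.run(collect_parallel(urls, expr))
```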
79,245,770 | 2024-12-2 | https://stackoverflow.com/questions/79245770/where-is-scipy-stats-dirichlet-multinomial-rvs | I wanted to draw samples from a Dirichlet-multinomial distribution using SciPy. Unfortunately it seems that scipy.stats.dirichlet_multinomial does not define the rvs method that other distributions use to generate random samples. I think this would be equivalent to the following for a single sample: import scipy.stats as sps def dirichlet_multinomial_sample(alpha, n, **kwargs): kwargs['size'] = 1 # force size to 1 for simplicity p = sps.dirichlet.rvs( alpha=alpha, **kwargs ) return sps.multinomial.rvs( n=n, p=p.ravel(), **kwargs ) Multiple samples (i.e. size > 1) could be drawn similarly with a little bit more work to make it efficient. This seems easy enough to implement. My two questions are: Is the above implementation correct? If it is, how can I suggest this enhancement to SciPy developers? | Is the above implementation correct? This looks correct to me. Based on the discussion in the PR implementing multinomial, SciPy did implement a bit of code to generate samples from a multinomial Dirichlet, but the code is only part of a test, not a public API. One of the reviewers briefly touches on what you mention: optional, probably follow-up PR add RVS method (as demonstrated in test_moments) Looking up the code from the test-case they're referencing, here's what it's doing. https://github.com/scipy/scipy/pull/17211/files#diff-a998a313f078eba79aeb6347a65117e0a0c4542a4d778a4cfd398f1737380a71R3030 rng = np.random.default_rng(28469824356873456) n = rng.integers(1, 100) alpha = rng.random(size=5) * 10 dist = dirichlet_multinomial(alpha, n) # Generate a random sample from the distribution using NumPy m = 100000 p = rng.dirichlet(alpha, size=m) x = rng.multinomial(n, p, size=m) That would appear to be essentially the same thing you're doing, only using the equivalent NumPy API rather than the SciPy API. See 1 2. Another sticking point that was discussed was whether this would require SciPy to bump the minimum version of NumPy. The problem is that it will be slow and cumbersome without NumPy 1.22 (vectorization over multinomial shape parameters), but that should be the minimum supported version before the next release of SciPy, so it should be OK. Since this message, SciPy has changed to a minimum version of 1.23.5, which means this is no longer a problem. If it is, how can I suggest this enhancement to SciPy developers? You can open an issue, and ask for them to fix this. You could also try fixing it yourself. If you do this, I would still recommend that you open an issue first. This will ensure that you and the maintainer are on the same page. If you decide to fix it yourself, I would recommend that you read the following things: The contributor quickstart guide Running SciPy Tests Locally The implementation of another multivariate rvs(), such as multivariate_normal.rvs(). The comments in this thread which specifically mention rvs(). | 2 | 2 |
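Building on the test snippet quoted in the answer above, the multi-sample helper the question alludes to can be sketched as follows (dirichlet_multinomial_rvs is a hypothetical name, not a SciPy API):

```python
import numpy as np

def dirichlet_multinomial_rvs(alpha, n, size=1, rng=None):
    rng = np.random.default_rng(rng)
    # Compound sampling: one p ~ Dirichlet(alpha) per draw, then x ~ Multinomial(n, p)
    p = rng.dirichlet(alpha, size=size)
    return rng.multinomial(n, p, size=size)  # broadcasting over p needs NumPy >= 1.22

samples = dirichlet_multinomial_rvs(alpha=[1.0, 2.0, 3.0], n=10, size=5)
```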
79,244,847 | 2024-12-2 | https://stackoverflow.com/questions/79244847/scipy-spatial-how-to-use-convexhull-with-points-containing-an-attribute | Having a list of points (x, y), the function ConvexHull() of SciPy.spatial is great to calculate the points that form the hull. In my case, every point (x, y) also has a string as attribute. Is it possible to calculate the hull and returning its points including their respective attribute (x, y, str)? I checked both https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.ConvexHull.html as well as http://www.qhull.org/html/qh-optq.htm, but could not find a solution. Here, an example where I would like to run the commented lines with points_with_attribute instead of points_no_attribute: from scipy.spatial import ConvexHull points_no_attribute = [(0, 0), (2, 6), (6, 1), (2, 4), (3, 2)] points_with_attribute = [(0, 0, "point1"), (2, 6, "point2"), (6, 1, "point3"), (2, 4, "point4"), (3, 2, "point5")] hull = ConvexHull(points_no_attribute) hull_points = [points_no_attribute[i] for i in hull.vertices] # hull = ConvexHull(points_with_attribute) # hull_points = [points_with_attribute[i] for i in hull.vertices] print(hull_points) Thanks for a hint! | I would suggest: hull = ConvexHull([(p[0], p[1]) for p in points_with_attribute]) hull_points = [points_with_attribute[i] for i in hull.vertices] This has the advantage of avoiding an O(N^2) loop, by using the index of the points that has already been found. | 1 | 2 |
79,245,015 | 2024-12-2 | https://stackoverflow.com/questions/79245015/pydantic-objects-as-elements-in-a-polars-dataframe-get-automatically-converted-t | What puzzles me is that when running class Cat(pydantic.BaseModel): name: str age: int cats = [Cat(name="a", age=1), Cat(name="b", age=2)] df = pl.DataFrame({"cats": cats}) df = df.with_columns(pl.lit(0).alias("acq_num")) def wrap(batch): return Cat(name="c", age=3) df = df.group_by("acq_num").agg(pl.col("cats").map_batches(wrap, return_dtype=pl.Struct).alias("cats")) type(df["cats"][0][0]) # dict the resulting entries are dicts, even though the function "wrap" returns a Cat. So polars automatically converts it to a dict, calling model_dump of pydantic? Changing to df = df.group_by("acq_num").agg(pl.col("cats").map_batches(wrap, return_dtype=pl.Object).alias("cats")) results in the error: SchemaError: expected output type 'Object("object", None)', got 'Struct([Field { name: "name", dtype: String }, Field { name: "age", dtype: Int64 }])'; set `return_dtype` to the proper datatype I am confused by this conversion happening. How can I prevent it? | Polars already infers the dtype at pl.DataFrame: cats = [Cat(name="a", age=1), Cat(name="b", age=2)] df = pl.DataFrame({"cats": cats}) df shape: (2, 1) βββββββββββββ β cats β β --- β β struct[2] β βββββββββββββ‘ β {"a",1} β β {"b",2} β βββββββββββββ df.schema Schema([('cats', Struct({'name': String, 'age': Int64}))]) Rather than accessing model_dump, it looks at the dict storing the object's attributes and converts that into a struct: Cat(name="a", age=1).__dict__ {'name': 'a', 'age': 1} To avoid this behaviour, set pl.Object for schema: df = pl.DataFrame({"cats": cats}, schema={'cats': pl.Object}) df shape: (2, 1) ββββββββββββββββββ β cats β β --- β β object β ββββββββββββββββββ‘ β name='a' age=1 β β name='b' age=2 β ββββββββββββββββββ type(df.item(0, 'cats')) __main__.Cat Now, with map_batches, the expected output is a pl.Series (or a np.array, which it converts). So, here too you need to specify the dtype at construction: # import numpy as np def wrap_batch(batch): return pl.Series([Cat(name="c", age=3)], dtype=pl.Object) # return np.array([Cat(name="c", age=3)], dtype=object) will also work df = df.group_by("acq_num").agg( pl.col("cats").map_batches(wrap_batch, return_dtype=pl.Object).alias("cats") ) (Note that the above also works without return_dtype=pl.Object, "but [this] is considered a bug in the user's query".) type(df.explode('cats').item(0, 'cats')) __main__.Cat # you could add `returns_scalar=True` in this trivial example to avoid `explode` With map_elements, you would only need to worry about the dtype in this case via return_dtype: def wrap_element(element): return Cat(name="c", age=3) df = df.group_by("acq_num").agg( pl.col("cats").map_elements(wrap_element, return_dtype=pl.Object) .alias("cats") ) type(df.item(0, 'cats')) __main__.Cat | 1 | 1 |
79,241,319 | 2024-12-1 | https://stackoverflow.com/questions/79241319/autoflake-prints-unused-imports-variables-but-doesnt-remove-them | I'm using the autoflake tool to remove unused imports and variables in a Python file, but while it prints that unused imports/variables are detected, it doesn't actually remove them from the file. Here's the command I'm running: autoflake --in-place --remove-unused-variables portal/reports.py Printed Output: portal/reports.py: Unused imports/variables detected Despite the message indicating unused imports and variables, the file remains unchanged. I've confirmed that the file has unused imports and variables that should be removed. Versions: Django: 5.0.7 autoflake: 2.3.1 Python: 3.11 Ubuntu: 24.04 I also noticed this issue: when I try to install autoflake after turning off .venv in my Django app folder or anywhere in the OS, I get the following error. error: externally-managed-environment × This environment is externally managed ╰─> To install Python packages system-wide, try apt install python3-xyz, where xyz is the package you are trying to install. If you wish to install a non-Debian-packaged Python package, create a virtual environment using python3 -m venv path/to/venv. Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make sure you have python3-full installed. If you wish to install a non-Debian packaged Python application, it may be easiest to use pipx install xyz, which will manage a virtual environment for you. Make sure you have pipx installed. See /usr/share/doc/python3.12/README.venv for more information. note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages. hint: See PEP 668 for the detailed specification. Has anyone encountered this issue? What could be causing it, and how can I fix it to properly remove unused imports and variables? | A similar issue was raised here and the OP was able to solve the issue by removing [tool.autoflake] check = true in the pyproject.toml. See Remove unused imports not working in place. | 1 | 1 |
79,244,459 | 2024-12-2 | https://stackoverflow.com/questions/79244459/how-to-use-aggregation-functions-as-an-index-in-a-polars-dataframe | I have a Polars DataFrame, and I want to create a summarized view where aggregated values (e.g., unique IDs, total sends) are displayed in a format that makes comparison across months easier. Here's an example of my dataset: My example dataframe: import polars as pl df = pl.DataFrame({ "Channel": ["X", "X", "Y", "Y", "X", "X", "Y", "Y", "X", "X", "Y", "Y", "X", "X", "Y", "Y"], "ID": ["a", "b", "b", "a", "e", "b", "g", "h", "a", "a", "k", "a", "b", "n", "o", "p"], "Month": ["1", "2", "1", "2", "1", "2", "1", "2", "1", "2", "1", "2", "1", "2", "1", "2"] }) Currently, I use the following group_by() approach to calculate the number of unique IDs and the total number of sends for each Month and Channel: ( df .group_by( pl.col("Month"), pl.col("Channel") ) .agg( pl.col("ID").n_unique().alias("Uniques ID"), pl.col("ID").len().alias("Total sends") ) ) shape: (4, 4) βββββββββ¬ββββββββββ¬βββββββββββββ¬ββββββββββββββ β Month β Channel β Uniques ID β Total sends β β --- β --- β --- β --- β β str β str β u32 β u32 β βββββββββͺββββββββββͺβββββββββββββͺββββββββββββββ‘ β 1 β X β 3 β 4 β β 1 β Y β 4 β 4 β β 2 β X β 3 β 4 β β 2 β Y β 3 β 4 β βββββββββ΄ββββββββββ΄βββββββββββββ΄ββββββββββββββ However, my actual dataset is much larger, and have more agg_functions, so I want a format that better highlights comparisons across months. Ideally, I want the output to look like this: | Channels | agg_func | months | months | |----------|--------------|--------|--------| | | | 1 | 2 | | X | Uniques ID | 3 | 3 | | X | Total sends | 4 | 4 | | Y | Uniques ID | 4 | 3 | | Y | Total sends | 4 | 4 | I believe I could use .pivot() and pass the aggregation functions as part of the index. But, I'm not sure how to implement this directly without creating an auxiliary DataFrame. Any suggestions? 
| You can aggregate multiple aggregates while pivoting and then explode the lists: ( df.pivot( on="Month", values="ID", aggregate_function= pl.concat_list( pl.element().n_unique().alias("value"), pl.element().len().alias("value") ) ) .with_columns(agg_func=["Uniques ID","Total sends"]) .explode(pl.exclude("Channel")) ) shape: (4, 4) βββββββββββ¬ββββββ¬ββββββ¬ββββββββββββββ β Channel β 1 β 2 β agg_func β β --- β --- β --- β --- β β str β u32 β u32 β str β βββββββββββͺββββββͺββββββͺββββββββββββββ‘ β X β 3 β 3 β Uniques ID β β X β 4 β 4 β Total sends β β Y β 4 β 3 β Uniques ID β β Y β 4 β 4 β Total sends β βββββββββββ΄ββββββ΄ββββββ΄ββββββββββββββ Or, you can do it with multiple pivots (one per aggregate function): pl.concat([ df.pivot( on="Month", values="ID", aggregate_function=agg_func ).with_columns( pl.lit(agg_func_name).alias("agg_func") ) for agg_func, agg_func_name in [ (pl.element().n_unique(), "Uniques ID"), (pl.element().len(), "Total sends") ] ]) # alternatively group_by first and then pivot # pl.concat([ # df.group_by("Month","Channel") # .agg(agg_func) # .with_columns(agg_func=pl.lit(agg_func_name)) # for agg_func, agg_func_name in [ # (pl.col.ID.n_unique(), "Uniques ID"), # (pl.col.ID.len(), "Total sends") # ] # ]).pivot(on="Month", values="ID") shape: (4, 4) βββββββββββ¬ββββββ¬ββββββ¬ββββββββββββββ β Channel β 1 β 2 β agg_func β β --- β --- β --- β --- β β str β u32 β u32 β str β βββββββββββͺββββββͺββββββͺββββββββββββββ‘ β X β 3 β 3 β Uniques ID β β Y β 4 β 3 β Uniques ID β β X β 4 β 4 β Total sends β β Y β 4 β 4 β Total sends β βββββββββββ΄ββββββ΄ββββββ΄ββββββββββββββ Of course, you can also extend your solution with unpivot and pivot ( df .group_by("Month","Channel") .agg( pl.col("ID").n_unique().alias("Uniques ID"), pl.col("ID").len().alias("Total sends") ) .unpivot(index=["Month","Channel"], variable_name="agg_func") .pivot(on="Month", values="value") ) shape: (4, 4) βββββββββββ¬ββββββββββββββ¬ββββββ¬ββββββ β Channel β agg_func β 2 β 1 β β --- β --- β --- β --- β β str β str β u32 β u32 β βββββββββββͺββββββββββββββͺββββββͺββββββ‘ β Y β Uniques ID β 3 β 4 β β X β Uniques ID β 3 β 3 β β Y β Total sends β 4 β 4 β β X β Total sends β 4 β 4 β βββββββββββ΄ββββββββββββββ΄ββββββ΄ββββββ | 1 | 1 |
79,243,808 | 2024-12-2 | https://stackoverflow.com/questions/79243808/python-tkinter-grid-column-width-not-expanding-need-first-row-not-scorallable | I have three column with scrollbar and it needs to be expand and it should be stretched the window size but i am unable to increase the column width according to the screen size and i need top first row should need to stable that is like heading. the code is below import tkinter as tk from tkinter import * from tkinter import ttk root = tk.Tk() root.title("TEST") root.geometry("800x600") frame=ttk.Frame(root) frame.pack(expand=1, fill=BOTH) canvas = tk.Canvas(frame) scrollbar = ttk.Scrollbar(frame, orient="vertical", command=canvas.yview) canvas.configure(yscrollcommand=scrollbar.set) frame3s = ttk.Frame(canvas) frame3s.bind("<Configure>", lambda e: canvas.configure(scrollregion=canvas.bbox("all"))) RCnt = 0 DataRow = {} LabCnt = 0; for gh in range(40): DataRow[LabCnt] = ttk.Label(frame3s, text=LabCnt, font=("Arial", 16, "bold"),cursor="hand2", justify=tk.CENTER,relief="solid") DataRow[LabCnt].grid(row=RCnt, column=0, sticky='ew') LabCnt += 1 DataRow[LabCnt] = ttk.Label(frame3s, text=LabCnt, font=("Arial", 16, "bold"),cursor="hand2", justify=tk.CENTER,relief="solid") DataRow[LabCnt].grid(row=RCnt, column=1, sticky='ew') LabCnt += 1 DataRow[LabCnt] = ttk.Label(frame3s, text=LabCnt, font=("Arial", 16, "bold"),cursor="hand2", justify=tk.CENTER,relief="solid") DataRow[LabCnt].grid(row=RCnt, column=2, sticky='ew') LabCnt += 1 RCnt += 1 frame3s.columnconfigure(gh, weight=1) frame3s.rowconfigure(gh, weight=1) frame.columnconfigure(0, weight=1) frame.rowconfigure(0, weight=1) canvas.create_window((0, 0), window=frame3s, anchor="nw") canvas.grid(row=0, column=0, sticky="nsew") scrollbar.grid(row=0, column=1, sticky="ns") def _on_mousewheel(event): canvas.yview_scroll(int(-1 * (event.delta / 120)), "units") canvas.bind_all("<MouseWheel>", _on_mousewheel) while True: root.update() please guide me to achieve expanded column width according to the screen size | If you want to expand the columns to fit the canvas width, you need to: save the item ID of canvas.create_window((0, 0), window=frame3s, ...) 
set the width of frame3s to the same as that of canvas inside the callback of event <Configure> on canvas using canvas.itemconfig() set weight=1 on column 0 to 2 using frame3s.columnconfigure() Below is the updated code: import tkinter as tk from tkinter import ttk root = tk.Tk() root.title("TEST") root.geometry("800x600") frame = ttk.Frame(root) frame.pack(expand=1, fill=tk.BOTH) canvas = tk.Canvas(frame) scrollbar = ttk.Scrollbar(frame, orient="vertical", command=canvas.yview) canvas.configure(yscrollcommand=scrollbar.set) frame3s = ttk.Frame(canvas) frame3s.bind("<Configure>", lambda e: canvas.configure(scrollregion=canvas.bbox("all"))) RCnt = 0 DataRow = {} LabCnt = 0; for gh in range(40): DataRow[LabCnt] = ttk.Label(frame3s, text=LabCnt, font=("Arial", 16, "bold"), cursor="hand2", justify=tk.CENTER, relief="solid") DataRow[LabCnt].grid(row=RCnt, column=0, sticky='ew') LabCnt += 1 DataRow[LabCnt] = ttk.Label(frame3s, text=LabCnt, font=("Arial", 16, "bold"), cursor="hand2", justify=tk.CENTER, relief="solid") DataRow[LabCnt].grid(row=RCnt, column=1, sticky='ew') LabCnt += 1 DataRow[LabCnt] = ttk.Label(frame3s, text=LabCnt, font=("Arial", 16, "bold"), cursor="hand2", justify=tk.CENTER, relief="solid") DataRow[LabCnt].grid(row=RCnt, column=2, sticky='ew') LabCnt += 1 RCnt += 1 # expand column 0 to 2 to fill the width of the frame frame3s.columnconfigure((0,1,2), weight=1) frame.columnconfigure(0, weight=1) frame.rowconfigure(0, weight=1) # save the item ID of frame3s frame_id = canvas.create_window((0, 0), window=frame3s, anchor="nw") canvas.grid(row=0, column=0, sticky="nsew") scrollbar.grid(row=0, column=1, sticky="ns") def _on_mousewheel(event): canvas.yview_scroll(int(-1 * (event.delta / 120)), "units") canvas.bind_all("<MouseWheel>", _on_mousewheel) # set width of frame3s to the same as that of the canvas canvas.bind('<Configure>', lambda e: canvas.itemconfig(frame_id, width=e.width)) root.mainloop() # using .mainloop() instead of while loop | 1 | 1 |
79,243,356 | 2024-12-2 | https://stackoverflow.com/questions/79243356/how-do-i-convert-a-csv-file-to-an-apache-arrow-ipc-file-with-dictionary-encoding | I am trying to use pyarrow to convert a csv to an apache arrow ipc with dictionary encoding turned on. The following appears to convert the csv to an arrow ipc file: file = "./in.csv" arrowFile = "./out.arrow" with pa.OSFile(arrowFile, 'wb') as arrow: with pa.csv.open_csv(file) as reader: with pa.RecordBatchFileWriter(arrow, reader.schema) as writer: for batch in reader: writer.write_batch(batch) I tried the following to use dictionary encoding: convert_options = pa.csv.ConvertOptions(auto_dict_encode = True) with pa.OSFile(arrowFile, 'wb') as arrow: with pa.csv.open_csv(file, convert_options=convert_options) as reader: with pa.RecordBatchFileWriter(arrow, reader.schema) as writer: for batch in reader: writer.write_batch(batch) But I get the following error: File "pyarrow/ipc.pxi", line 507, in pyarrow.lib._CRecordBatchWriter.write_batch File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Dictionary replacement detected when writing IPC file format. Arrow IPC files only support a single non-delta dictionary for a given field across all batches. How do I fix the code to use dictionary encoding? | It looks like the IPC protocol only supports unified dictionary encoding. In your example each batch has got different dictionary encoding, which IPC doesn't support. You'd have to load the whole table and call unify_dicionaries. import pyarrow as pa schema = pa.schema([pa.field("col1", pa.dictionary(pa.int32(), pa.string()))]) table = pa.Table.from_batches( [ pa.record_batch({"col1": ["a", "b", "b"]}, schema), pa.record_batch({"col1": ["b", "b", "c"]}, schema), ], ) with pa.OSFile("data.arrow", "wb") as arrow: with pa.RecordBatchFileWriter(arrow, schema) as writer: for batch in table.unify_dictionaries().to_batches(): writer.write_batch(batch) | 1 | 2 |
79,231,097 | 2024-11-27 | https://stackoverflow.com/questions/79231097/incorrect-syntax-near-cast-when-using-pandas-to-sql | I'm trying to write some code to update an sql table from the values in a pandas dataframe. The code I'm using to do this is: df.to_sql(name='streamlit_test', con=engine, schema='dbo', if_exists='replace', index=False) Using an sqlalchemy engine. However, I'm getting the error: ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near 'CAST'. (102) (SQLExecDirectW); [42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Statement(s) could not be prepared. (8180)") [SQL: SELECT cast(com.value as nvarchar(max)) FROM fn_listextendedproperty('MS_Description', 'schema', CAST(? AS NVARCHAR(max)), 'table', CAST(? AS NVARCHAR(max)), NULL, NULL ) as com; ] [parameters: ('dbo', 'streamlit_test')] I thought it might be something to do with data types, so I've tried changing the dtypes of my dataframe to match the corresponding data types in the table I'm trying to write to (using this lookup: https://learn.microsoft.com/en-us/sql/machine-learning/python/python-libraries-and-data-types?view=sql-server-ver16) but I'm still getting the same error. I haven't managed to find anyone else with a similar problem through googling, so any help/pointers is greatly appreciated! Edit I've tried using: from sqlalchemy.dialects.mssql import BIGINT, FLOAT, TEXT, BIT df.to_sql(name='streamlit_test', con=engine, schema='dbo', if_exists='replace', index=False, dtype={'col1':BIGINT, 'col2':FLOAT, 'col3':TEXT, 'Tickbox':BIT, 'Comment':TEXT}) so that the dtypes match what they are in the created table, but I still get the same error. For more context, to_sql works fine and saves the dataframe to a table if the table doesn't already exist, and if I change if_exists='replace' to if_exists='append', that also works, but when trying to replace the data in the existing table, I get the 42000 error. My MSSMS version is: Microsoft SQL Server 2008 R2 (SP3-GDR) (KB4057113) - 10.50.6560.0 (X64) Dec 28 2017 15:03:48 Copyright (c) Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (Hypervisor) | Microsoft SQL Server offers limited support for using databases that are compatible with earlier versions. For example, SQL Server 2008 creates databases with compatibility level 100 (SQL Server 2008) by default, but it also allows us to use databases that are compatible with SQL Server 2005 - database compatibility level 90 SQL Server 2000 - database compatibility level 80 SQLAlchemy 2.0 added a feature to reflect table comments. That feature uses the query SELECT cast(com.value as nvarchar(max)) FROM fn_listextendedproperty('MS_Description', 'schema', CAST(? AS NVARCHAR(max)), 'table', CAST(? AS NVARCHAR(max)), NULL, NULL ) as com; and that syntax is only supported for database compatibility level ≥ 90. Executing that same query on a database with compatibility level ≤ 80 results in sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near 'CAST'. (102) (SQLExecDirectW); [42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Statement(s) could not be prepared. (8180)") SQLAlchemy 1.4 is required to work with such old databases, e.g., pip install sqlalchemy==1.4.54 pandas==2.1.4 | 2 | 1 |
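Editor's note — a minimal sketch of the workaround from the answer above (pin SQLAlchemy 1.4 so the comment-reflection query is never emitted); the connection URL and credentials are placeholders:

```python
# requirements: sqlalchemy==1.4.54 pandas==2.1.4 pyodbc
import pandas as pd
import sqlalchemy as sa

engine = sa.create_engine("mssql+pyodbc://user:password@my_dsn")  # placeholder DSN

df = pd.DataFrame({"col1": [1, 2], "col2": [0.5, 1.5]})
df.to_sql(name="streamlit_test", con=engine, schema="dbo",
          if_exists="replace", index=False)
```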
79,244,132 | 2024-12-2 | https://stackoverflow.com/questions/79244132/placing-label-next-to-a-slider-handle | I want a label near the slider's handle, that displays the current value. I used the code provided on the forum and tried to adapt it to Python from PyQt5 import QtCore, QtGui, QtWidgets class test(QtWidgets.QWidget): def __init__(self, parent = None): super().__init__(parent) self.main_layout = QtWidgets.QHBoxLayout(self) self.main_layout.setContentsMargins(0, 0, 0, 0) self.main_layout.setSpacing(0) self.slider = QtWidgets.QSlider(QtCore.Qt.Vertical,self) self.slider.setMaximum(100) self.slider.setMinimum(0) self.slider.setTracking(True) self.label= QtWidgets.QLabel(self) self.label.setText("label") self.main_layout.addWidget(self.slider,0,QtCore.Qt.AlignLeft) self.main_layout.addWidget(self.label,0,QtCore.Qt.AlignBottom) self.updateLabel(0) self.setLayout(self.main_layout) self.slider.valueChanged.connect(self.updateLabel) def updateLabel(self,value): height= QtWidgets.QStyle.sliderPositionFromValue(0,100,self.slider.value(),self.slider.height()-self.label.height(), True) self.label.move(self.slider.width(), height) self.label.setText(str(value)) def resizeEvent(self,ev): self.updateLabel(self.slider.sliderPosition()) if __name__ == "__main__": import sys app = QtWidgets.QApplication(sys.argv) ui = test() ui.show() sys.exit(app.exec_()) As a result, I get a label with value, but it is always shown at the bottom of the window. Once I resize my window, the label moves to the correct place. Do I miss something? | With the help of @ekhumoro and @simon, I fixed the problem from PyQt5 import QtCore, QtGui, QtWidgets class test(QtWidgets.QWidget): def __init__(self, parent = None): super().__init__(parent) self.main_layout = QtWidgets.QHBoxLayout(self) self.main_layout.setContentsMargins(0, 0, 0, 0) self.main_layout.setSpacing(0) self.slider = QtWidgets.QSlider(QtCore.Qt.Vertical,self) self.slider.setMaximum(100) self.slider.setMinimum(0) self.slider.setTracking(True) self.label= QtWidgets.QLabel(self) self.label.setText("label") self.main_layout.addWidget(self.slider,0,QtCore.Qt.AlignLeft) self.updateLabel(0) self.setLayout(self.main_layout) self.slider.valueChanged.connect(self.updateLabel) def updateLabel(self,value): height= QtWidgets.QStyle.sliderPositionFromValue(0,100,self.slider.value(),self.slider.height()-self.label.height(), True) self.label.move(self.slider.width(), height) self.label.setText(str(value)) def resizeEvent(self,ev): self.updateLabel(self.slider.sliderPosition()) if __name__ == "__main__": import sys app = QtWidgets.QApplication(sys.argv) ui = test() ui.show() sys.exit(app.exec_()) The main difference is not adding the Qlabel inside of the layout, hence having more direct control over its location. One would need to account for label size while using the widget, since by default it's size shrinks to QSlider size | 2 | 1 |
79,242,541 | 2024-12-1 | https://stackoverflow.com/questions/79242541/in-polars-how-do-you-create-a-group-counter-group-id | How do you get a group_id column like this, grouping by columns col1 and col2 ? col1 col2 group_id A Z 1 A Y 2 A Z 1 B Z 3 based on such a DataFrame : df = pl.DataFrame({ 'col1': ['A', 'A', 'A', 'B'], 'col2': ['Z', 'Y', 'Z', 'Z']} ) In other words, I'm looking for polars equivalent to R data.table .GRP (df[, group_id:=.GRP, by = .(col1, col2)]). Thanks ! Context : I want to build an event ID because in my data, I have many detailed rows for one event. Once the event ID is created, I will use it to perform various window operations. I prefer to have this event ID rather than keeping a list of grouping variables. | If you don't care about "first occurrence has lower rank" then you can just rank combination of col1 and col2 columns. pl.struct() to create one column out of col1, col2. pl.Expr.rank() to assign rank to rows. df.with_columns(group_id = pl.struct("col1","col2").rank("dense")) shape: (4, 3) ββββββββ¬βββββββ¬βββββββββββ β col1 β col2 β group_id β β --- β --- β --- β β str β str β u32 β ββββββββͺβββββββͺβββββββββββ‘ β A β Z β 2 β β A β Y β 1 β β A β Z β 2 β β B β Z β 3 β ββββββββ΄βββββββ΄βββββββββββ Otherwise, it will be a bit more complicated - you can create index to track current row order and then assign rank based on this index. pl.DataFrame.with_row_index() to create an index / row number. pl.Expr.min() and pl.Expr.over() to get minimum index within col, col2. pl.Expr.rank() to "compress" this minimum to dense rank. ( df.with_row_index("group_id") .with_columns( pl.col.group_id.min().over("col1","col2").rank("dense") ) ) shape: (4, 3) ββββββββββββ¬βββββββ¬βββββββ β group_id β col1 β col2 β β --- β --- β --- β β u32 β str β str β ββββββββββββͺβββββββͺβββββββ‘ β 1 β A β Z β β 2 β A β Y β β 1 β A β Z β β 3 β B β Z β ββββββββββββ΄βββββββ΄βββββββ | 3 | 2 |
79,244,168 | 2024-12-2 | https://stackoverflow.com/questions/79244168/order-pandas-dataframe-rows-with-custom-order-defined-by-list | I'm trying to order this dataframe in quarterly order using the list sortTo as reference to put it into a table. import pandas as pd # Sample DataFrame data = {'QuarterYear': ["Q1 2024", "Q2 2024", "Q3 2023", 'Q3 2024', "Q4 2023", "Q4 2024"], 'data1': [5, 6, 2, 1, 10, 3], 'data2': [12, 4, 2, 7, 2, 9]} sortTo = ["Q3 2023", "Q4 2023", "Q1 2024", 'Q2 2024', "Q3 2024", "Q4 2024"] df = pd.DataFrame(data) df.reindex(sortTo) I've tried re-index, sort_values to no avail. I cannot use np.sort as the quarters are not numerical. Current output: QuarterYear data1 data2 0 Q1 2024 5 12 1 Q2 2024 6 4 2 Q3 2023 2 2 3 Q3 2024 1 7 4 Q4 2023 10 2 5 Q4 2024 3 9 | Code use key parameter for custom sorting. out = df.sort_values( 'QuarterYear', key=lambda x: x.map({k: n for n, k in enumerate(sortTo)}) ) out: QuarterYear data1 data2 2 Q3 2023 2 2 4 Q4 2023 10 2 0 Q1 2024 5 12 1 Q2 2024 6 4 3 Q3 2024 1 7 5 Q4 2024 3 9 If your data consists of years and quarters, the following code should do the sorting you need without sortTo. out = df.sort_values( 'QuarterYear', key=lambda x: x.str.replace(r'(Q\d) (\d+)', r'\2 \1', regex=True) ) | 1 | 1 |
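Editor's note — an alternative (not from the accepted answer) that expresses the same custom order with an ordered categorical, which also preserves the ordering for later groupby or plotting:

```python
import pandas as pd

data = {'QuarterYear': ["Q1 2024", "Q2 2024", "Q3 2023", "Q3 2024", "Q4 2023", "Q4 2024"],
        'data1': [5, 6, 2, 1, 10, 3],
        'data2': [12, 4, 2, 7, 2, 9]}
sortTo = ["Q3 2023", "Q4 2023", "Q1 2024", "Q2 2024", "Q3 2024", "Q4 2024"]

df = pd.DataFrame(data)
df['QuarterYear'] = pd.Categorical(df['QuarterYear'], categories=sortTo, ordered=True)
print(df.sort_values('QuarterYear'))
```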
79,243,345 | 2024-12-2 | https://stackoverflow.com/questions/79243345/linking-python-to-c-something-not-shown-on-the-console-of-c | I have the following in Python: def ask_ollama(messages): """ Function to send the conversation to the Ollama API and get the response. """ payload = { 'model': 'llama3.2:1b', # Model ID 'messages': messages, # Pass the entire conversation history 'stream': False } response = requests.post( "http://localhost:11434/api/chat", # Ollama API URL json=payload ) raw_data = response.text data = json.loads(raw_data) message_content = data['message']['content'] return message_content def main(): parser = argparse.ArgumentParser() parser.add_argument('file_path', type=str) parser.add_argument('pre_question', type=str) args = parser.parse_args() # Initialize context from file context = "" with open(args.file_path, 'r', encoding="utf-8") as file: context = file.read() # Construct the initial question with context question = ( f"{context} would typically consist of plenty of paragraphs. " f"Scrutinize each of the paragraphs, and try to answer {args.pre_question}. " "Be patient, do not overlook any of those paragraphs. " "In some occasion, you need to group certain amounts of the paragraphs to answer the question, beware of it. " "If you have tried your best but still do not find any useful information from those paragraphs for the answer, " "then begin your response with 'it seems that there is no information that I am able to gain from the context' " "at the very first sentence and then elaborate what you know about the question according to your own knowledge." ) # Initialize the conversation history with the user's initial question conversation_history = [{'role': 'user', 'content': question}] # Get the initial response from the model response = ask_ollama(conversation_history) print(response) # Print the first response # Now enter a loop to handle follow-up questions while True: # Read the next follow-up question pre_follow_up_question = input() if pre_follow_up_question.lower() == 'exit': break follow_up_question = ( f"{context} would typically consist of plenty of paragraphs. " f"Scrutinize each of the paragraphs, and try to answer {pre_follow_up_question}. " "Be patient, do not overlook any of those paragraphs. " "In some occasion, you need to group certain amounts of the paragraphs to answer the question, beware of it. " "If you have tried your best but still do not find any useful information from those paragraphs for the answer, " "then begin your response with 'it seems that there is no information that I am able to gain from the context' " "at the very first sentence and then elaborate what you know about the question according to your own knowledge." 
) # Append the follow-up question to the conversation history conversation_history.append({'role': 'user', 'content': follow_up_question}) # Get the follow-up response from Ollama using the updated conversation history response = ask_ollama(conversation_history) # Print the follow-up response print(response) main() I am using C# to run this file, the code is like string filePath = @".\Uploads\Philosophy.txt"; string inputText = "What is philosophy of language?"; ProcessStartInfo startInfo = new ProcessStartInfo { FileName = "python", Arguments = $"Python_Ollama.py \"{filePath}\" \"{inputText}\"", RedirectStandardInput = true, RedirectStandardOutput = true, RedirectStandardError = true, UseShellExecute = false, CreateNoWindow = true }; using (Process process = Process.Start(startInfo)) { StreamReader reader = process.StandardOutput; StreamWriter writer = process.StandardInput; Console.WriteLine("Python Output:"); string line; while ((line = reader.ReadLine()) != null) { Console.WriteLine(line); } while (true) { Console.WriteLine("\nEnter a follow-up question (or type 'exit' to quit): "); string followUpQuestion = Console.ReadLine(); if (followUpQuestion.ToLower() == "exit") { break; } writer.WriteLine(followUpQuestion); writer.Flush(); while ((line = reader.ReadLine()) != null) { Console.WriteLine(line); } } } the console of C# only showed the first response, it doesn't show the "Enter a follow..." and of course no way to let me to insert the follow-up question. First of all, I have tried ReadToEnd() (or ReadToEndAsync()) and it doesn't show up anything. And I used while ((line = reader.ReadLineAsync()) != null) { Console.WriteLine(line); } the problem persists. If I used string output = await reader.ReadToEndAsync(); Console.WriteLine(output); for the first part, then the sentence "Enter a follow..." is inserted in between of the response (the response is used to have plenty of paragraphs). | To put comments into an answer: First of all: you should consider calling the REST API directly from C#. That would probably solve a lot of problems revolving around this issue. But let's say you couldn't for a moment. Then you still have control over both parts: C# and Python. Which means you can do the following: In the Python, append the api response with a specific string that cannot possibly be part of the api response. Let's say "=== End of API response ===". In the C# code, instead of while ((line = reader.ReadLine()) != null) you do while ((line = reader.ReadLine()) != "=== End of API response ===") That way you can detect the end of the response output without the need for the stream to be closed. | 2 | 1 |
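Editor's note — a sketch of the Python side of the sentinel approach the answer describes: print an end-marker after every response and flush stdout, since the parent C# process reads from a pipe rather than an interactive console. The marker string is arbitrary:

```python
END_MARKER = "=== End of API response ==="

def print_response(text: str) -> None:
    # flush so the C# parent sees the output immediately through the pipe
    print(text, flush=True)
    print(END_MARKER, flush=True)

# inside the original loop:
#   response = ask_ollama(conversation_history)
#   print_response(response)
print_response("demo response")
```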
79,233,046 | 2024-11-28 | https://stackoverflow.com/questions/79233046/python-ssl-issue-with-azure-cosmos-db-emulator-in-github-actions | I am trying to make unit tests for my azure functions, written in Python. I have a python file that does some setup (making the cosmos db databases and containers) and I do have a github actions yaml file to pull a docker container and then run the scripts. The error: For some reason, I do get an error when running the Python script: azure.core.exceptions.ServiceRequestError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate (_ssl.c:1006) I have already tried to install the CA certificate, provided by the docker container. I think this worked correctly but the error still persists. The yaml file: jobs: test: runs-on: ubuntu-latest steps: - name: Checkout repository uses: actions/checkout@v3 - name: Start Cosmos DB Emulator run: docker run --detach --publish 8081:8081 --publish 1234:1234 mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator:latest - name: pause run : sleep 120 - name : emulator certificate run : | retry_count=0 max_retry_count=10 until sudo curl --insecure --silent --fail --show-error "https://localhost:8081/_explorer/emulator.pem" --output "/usr/local/share/ca-certificates/cosmos-db-emulator.crt"; do if [ $retry_count -eq $max_retry_count ]; then echo "Failed to download certificate after $retry_count attempts." exit 1 fi echo "Failed to download certificate. Retrying in 5 seconds..." sleep 5 retry_count=$((retry_count+1)) done sudo update-ca-certificates sudo ls /etc/ssl/certs | grep emulator - name: Set up Python uses: actions/setup-python@v4 with: python-version: '3.11' - name: Cache dependencies uses: actions/cache@v3 with: path: ~/.cache/pip key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }} restore-keys: | ${{ runner.os }}-pip- - name: Install dependencies run: | python -m pip install --upgrade pip pip install -r requirements.txt - name: Set up Azure Functions Core Tools run: | wget -q https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb sudo dpkg -i packages-microsoft-prod.deb sudo apt-get update sudo apt-get install azure-functions-core-tools-4 - name: Log in with Azure uses: azure/login@v1 with: creds: '${{ secrets.AZURE_CREDENTIALS }}' - name: Start Azurite run: | docker run -d -p 10000:10000 -p 10001:10001 -p 10002:10002 mcr.microsoft.com/azure-storage/azurite - name: Wait for Azurite to start run: sleep 5 - name: Get Emulator Connection String id: get-connection-string run: | AZURE_STORAGE_CONNECTION_STRING="AccountEndpoint=https://localhost:8081/;AccountKey=C2y6yDjf5/R+ob0N8A7Cgv30VR2Vo3Fl+QUFOzQYzRPgAzF1jAd+pQ==;" echo "AZURE_STORAGE_CONNECTION_STRING=${AZURE_STORAGE_CONNECTION_STRING}" >> $GITHUB_ENV - name: Setup test environment in Python run : python Tests/setup.py - name: Run tests run: | python -m unittest discover Tests The Python script urllib3.disable_warnings() print(DEFAULT_CA_BUNDLE_PATH) connection_string : str = os.getenv("COSMOS_DB_CONNECTION_STRING") database_client_string : str = os.getenv("COSMOS_DB_CLIENT") container_client_string : str = os.getenv("COSMOS_DB_CONTAINER_MEASUREMENTS") cosmos_client : CosmosClient = CosmosClient.from_connection_string( conn_str=connection_string ) cosmos_client.create_database( id=database_client_string, offer_throughput=400 ) database_client : DatabaseProxy = cosmos_client.get_database_client(database_client_string) database_client.create_container( id=container_client_string, 
partition_key=PartitionKey(path="/path") ) Output of the certificate installation step Updating certificates in /etc/ssl/certs... rehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL 1 added, 0 removed; done. Running hooks in /etc/ca-certificates/update.d... /etc/ssl/certs/adoptium/cacerts successfully populated. Updating Mono key store Mono Certificate Store Sync - version 6.12.0.200 Populate Mono certificate store from a concatenated list of certificates. Copyright 2002, 2003 Motus Technologies. Copyright 2004-2008 Novell. BSD licensed. Importing into legacy system store: I already trust 146, your new list has 147 Certificate added: CN=localhost 1 new root certificates were added to your trust store. Import process completed. Importing into BTLS system store: I already trust 146, your new list has 147 Certificate added: CN=localhost 1 new root certificates were added to your trust store. Import process completed. Done done. cosmos-db-emulator.pem My thoughts I think that the issue arrises at the part where I create the database in Python script. Once I comment those lines, the error will not show. But I do need it :) Question Why might my solution not have worked, and what can I do to solve the issue? | After bits of puzzling around for a few days I got it to work: jobs: test: runs-on: ubuntu-latest steps: - name: Checkout repository uses: actions/checkout@v3 - name: Start Cosmos DB Emulator run: docker run --detach --publish 8081:8081 --publish 1234:1234 mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator:vnext-preview --protocol https - name: pause run : sleep 120 - name: Set environment variables run: | echo "EMULATOR_HOST=localhost" >> $GITHUB_ENV echo "EMULATOR_PORT=8081" >> $GITHUB_ENV echo "EMULATOR_CERT_PATH=/tmp/cosmos_emulator.cert" >> $GITHUB_ENV - name: Fetch Emulator Certificate run: | openssl s_client -connect ${EMULATOR_HOST}:${EMULATOR_PORT} </dev/null | \ sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > $EMULATOR_CERT_PATH - name: Install Certificate as a Trusted CA run: | sudo cp $EMULATOR_CERT_PATH /usr/local/share/ca-certificates/emulator_cert.crt sudo update-ca-certificates - name: Verify CA Installation run: | openssl s_client -connect ${EMULATOR_HOST}:${EMULATOR_PORT} -CAfile /etc/ssl/certs/ca-certificates.crt - name: Set up Python uses: actions/setup-python@v4 with: python-version: '3.11' # Adjust to your required Python version - name: Cache dependencies uses: actions/cache@v3 with: path: ~/.cache/pip key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }} restore-keys: | ${{ runner.os }}-pip- - name: Install dependencies run: | python -m pip install --upgrade pip pip install -r requirements.txt - name: Set up Azure Functions Core Tools run: | wget -q https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb sudo dpkg -i packages-microsoft-prod.deb sudo apt-get update sudo apt-get install azure-functions-core-tools-4 - name: Log in with Azure uses: azure/login@v1 with: creds: '${{ secrets.AZURE_CREDENTIALS }}' - name: Start Azurite run: | docker run -d -p 10000:10000 -p 10001:10001 -p 10002:10002 mcr.microsoft.com/azure-storage/azurite - name: Wait for Azurite to start run: sleep 5 - name: Setup test environment in Python run : python Tests/setup.py - name: Run tests run: | python -m unittest discover Tests | 2 | 3 |
79,241,291 | 2024-12-1 | https://stackoverflow.com/questions/79241291/how-to-save-a-matplotlib-figure-with-automatic-height-to-pdf | I have the following problem: I want to save a figure with a specific width, but auto-determine its height. Let's look at an example: import matplotlib.pyplot as plt import numpy as np fig,ax=plt.subplots(figsize=(5,5),layout='constrained') x=np.linspace(0,2*np.pi) y=np.sin(x) ax.set_aspect('equal') ax.plot(x,y) plt.show() fig.savefig('test.pdf',format='pdf') Here, I want the figure to be 5 inches wide, I want the axis to use up all the horizontal space, but I don't really care about the exact vertical size. Basically exactly, what plt.show() gives me: fig.savefig() gives me a lot of whitespace on top and below the figure (obviously, because I have defined figsize=(5,5)). Using the option bbox_inches='tight' almost does what I want to, but it re-sizes the x-direction of the figure (in this case to roughly 5.1 inches). I have thus not found any way of saving this figure with a width of exactly 5 inches but an auto-determined height. The only way I can achieve what I want is to manually decrease the figsize until I see that the figure starts shrinking in x-direction. | Thanks to RuthC for providing the answer in a comment, the following seems to solve my problem: fig.savefig('test.pdf', format='pdf',bbox_inches='tight', pad_inches='layout') https://matplotlib.org/stable/users/prev_whats_new/whats_new_3.8.0.html#pad-inches-layout-for-savefig | 2 | 2 |
79,238,475 | 2024-11-29 | https://stackoverflow.com/questions/79238475/logging-inheritance-in-python | I am currently developing a core utils package where I want to set some logging properties (I know that this is not best practice, but itΒ΄s for interal purposes and intended to generate logs). When I now import the package nothing gets logged: # core.__main__.py class BaseLoggerConfig(BaseModel): LOG_FORMAT: str = "%(levelprefix)s %(asctime)s %(name)s:%(lineno)d: %(message)s" DATEFMT: str = "%Y-%m-%d %H:%M:%S" LOG_LEVEL: int = logging.INFO version: int = 1 disable_existing_loggers: bool = False formatters: dict = { "default": { # "()": "uvicorn.logging.DefaultFormatter", "fmt": LOG_FORMAT, "datefmt": DATEFMT, }, } filters: dict = {} handlers: dict = { "default": { "formatter": "default", "class": "logging.StreamHandler", "stream": "ext://sys.stderr", } } loggers: dict = {} def __init__(self, name: str, **data): super().__init__(**data) self.loggers[name] = { "handlers": ["default"], "level": self.LOG_LEVEL, "propagate": False, } LOG_CONFIG = BaseLoggerConfig(__name__) logging.config.dictConfig(LOG_CONFIG) - core.__main__ Level: INFO Handlers: ['StreamHandler'] I now have logging in my other files, like: # core.utils import logging logger = logging.getLogger(__name__) def test(): logger.info(f"I am a log from {__name__}") # test.py import logging from core.utils import test logger = logging.getLogger(__name__) test() What am I missing? | There are a couple of tweaks that need to be made here. First, the file in which you configure your loggers should not be core/__main__.py, it should be core/__init__.py. __main__.py is used when you want to python -m core, which would run __main__.py. Second, the fmt key in your formatter config should be called format. You can see the official docs for logging.config. Third, in your LOG_FORMAT, levelprefix is not a valid field name. You probably want to use levelname which is "INFO", "DEBUG", etc. Putting it all together: # core/__init__.py import logging import logging.config from pydantic import BaseModel class BaseLoggerConfig(BaseModel): LOG_FORMAT: str = "%(levelname)s %(asctime)s %(name)s:%(lineno)d: %(message)s" DATEFMT: str = "%Y-%m-%d %H:%M:%S" LOG_LEVEL: int = logging.INFO version: int = 1 disable_existing_loggers: bool = False formatters: dict = { "default": { # "()": "uvicorn.logging.DefaultFormatter", "format": LOG_FORMAT, "datefmt": DATEFMT, }, } filters: dict = {} handlers: dict = { "default": { "formatter": "default", "class": "logging.StreamHandler", "stream": "ext://sys.stderr", } } loggers: dict = {} def __init__(self, name: str, **data): super().__init__(**data) self.loggers[name] = { "handlers": ["default"], "level": self.LOG_LEVEL, "propagate": False, } LOG_CONFIG = BaseLoggerConfig(__name__) logging.config.dictConfig(LOG_CONFIG) # core/utils.py import logging logger = logging.getLogger(__name__) def test(): logger.info(f"I am a log from {__name__}") # test.py import logging from core.utils import test logger = logging.getLogger(__name__) test() Output: INFO 2024-11-30 11:06:28 core.utils:8: I am a log from core.utils EDIT: I realized that your %(levelprefix)s comes from uvicorn's formatter. In that case, you can uncomment your line # "()": "uvicorn.logging.DefaultFormatter", and undo changing the key to format. Which means: # core/__init__.py # ... class BaseLoggerConfig(BaseModel): LOG_FORMAT: str = "%(levelprefix)s %(asctime)s %(name)s:%(lineno)d: %(message)s" DATEFMT: str = "%Y-%m-%d %H:%M:%S" # ... 
formatters: dict = { "default": { # "()": "uvicorn.logging.DefaultFormatter", "format": LOG_FORMAT, "datefmt": DATEFMT, }, } # ... Output: ("INFO" text is green) INFO: 2024-12-01 14:13:14 core.utils:6: I am a log from core.utils | 1 | 2 |
79,237,327 | 2024-11-29 | https://stackoverflow.com/questions/79237327/how-to-bound-typevar-correctly-to-protocol | I want to type annotate a simple sorting function that receives a list of any values that have either __lt__, __gt__, or both, but not mixed methods (i.e., all values should have the same comparison method) and returns a list containing the same elements it received in sorted order. What I've done so far: from typing import Any, Protocol class SupportsLT(Protocol): def __lt__(self, other: Any, /) -> bool: ... class SupportsGT(Protocol): def __gt__(self, other: Any, /) -> bool: ... def quick_sort[T: SupportsLT | SupportsGT](seq: list[T]) -> list[T]: if len(seq) <= 1: return list(seq) pivot = seq[0] smaller: list[T] = [] larger_or_equal: list[T] = [] for i in range(1, len(seq)): item = seq[i] if item < pivot: smaller.append(item) else: larger_or_equal.append(item) return [*quick_sort(smaller), pivot, *quick_sort(larger_or_equal)] Running mypy --strict gives the error: error: Unsupported left operand type for < (some union) [operator]. I think this is because there is no guarantee that the values do not have mixed protocols. Another attempt is using constraints instead of bound: def quick_sort[T: (SupportsLT, SupportsGT)](seq: list[T]) -> list[T]: if len(seq) <= 1: return list(seq) pivot = seq[0] smaller: list[T] = [] larger_or_equal: list[T] = [] for i in range(1, len(seq)): item = seq[i] if item < pivot: smaller.append(item) else: larger_or_equal.append(item) return [*quick_sort(smaller), pivot, *quick_sort(larger_or_equal)] When I run mypy --strict, I get complaints about the type of the list returned by the return statement: error: List item 0 has incompatible type "list[T]"; expected "SupportsLT" [list-item] error: List item 0 has incompatible type "list[T]"; expected "SupportsGT" [list-item] error: List item 2 has incompatible type "list[T]"; expected "SupportsLT" [list-item] error: List item 2 has incompatible type "list[T]"; expected "SupportsGT" [list-item] However, the second attempt has a significant issue. Running the following gives Revealed type is "builtins.list[SupportsLT]", which should be "builtins.list[builtins.int]" instead: from typing import reveal_type x = quick_sort([12, 3]) reveal_type(x) python version: 3.13.0, mypy version: 1.13.0 UPDATE: The reason I'm using both SupportsLT and SupportsGT as bound while my function only utilizes the less-than operator (i.e., <) is that when the left-hand operand lacks the __lt__ method, Python calls the __gt__ method of the right-hand operand and passes the left operand as an argument. Thus, values with only __gt__ should be considered valid as input to my function. Consider the following simple example: from typing import Self class HasGT: def __init__(self, value: int) -> None: self.value = value def __gt__(self, other: Self) -> bool: return self.value > other.value print(HasGT(5) < HasGT(6)) # Prints True | Union bound Your first attempt is indeed unsafe. 
Let's see that: class HasGT: def __init__(self, value: int) -> None: self.value = value def __gt__(self, other: Self) -> bool: return self.value > other.value class HasLT: def __init__(self, value: int) -> None: self.value = value def __lt__(self, other: Self) -> bool: return self.value < other.value foo: list[HasLT | HasGT] = [HasLT(2), HasGT(3)] sorted_foo = quick_sort(foo) mypy accepts this part (the error points at your definition), but it fails at runtime: $ mypy s.py --strict s.py:17: error: Unsupported left operand type for < (some union) [operator] Found 1 error in 1 file (checked 1 source file) $ python s.py Traceback (most recent call last): File "/tmp/temp/s.py", line 56, in <module> sorted_foo = quick_sort(foo) ^^^^^^^^^^^^^^^ File "/tmp/temp/s.py", line 17, in quick_sort if item < pivot: ^^^^^^^^^^^^ TypeError: '<' not supported between instances of 'HasGT' and 'HasLT' TypeVar bound to some type T can be substituted with any T1 <= T. Where T is a union type, there's nothing wrong with T1 being the same union type, that's explicitly allowed. So such implementation is unsafe. Constrained typevar Your second snippet is actually safe. There's a mypy bug making it reject your function as-is (something shady happens during unpacking, I'll have a look later), but replacing the last line with return quick_sort(smaller) + [pivot] + quick_sort(larger_or_equal) fixes things. Such implementation passes mypy --strict, but is barely useful as you noticed: it's return type will be just that, a protocol with a single comparison method. Reasonable compromise I think that it's reasonable to say "my implementation is fine" and provide the best possible signature for callers. It's reasonable to assume that the input collection is homogeneous, so let's just make two overloads (and also avoid restricting the input to lists: it works for any sequence): from collections.abc import Sequence from typing import Any, Protocol, Self, overload # [snip] Supports{L,G}T and Has{L,G}T definitions here @overload def quick_sort_overloaded[T: SupportsLT](seq: Sequence[T]) -> list[T]: ... @overload def quick_sort_overloaded[T: SupportsGT](seq: Sequence[T]) -> list[T]: ... def quick_sort_overloaded[T: SupportsLT | SupportsGT](seq: Sequence[T]) -> list[T]: if len(seq) <= 1: return list(seq) pivot = seq[0] smaller: list[T] = [] larger_or_equal: list[T] = [] for i in range(1, len(seq)): item = seq[i] if item < pivot: # type: ignore[operator] smaller.append(item) else: larger_or_equal.append(item) return [ *quick_sort_overloaded(smaller), # type: ignore[type-var] pivot, *quick_sort_overloaded(larger_or_equal) # type: ignore[type-var] ] And now reveal_type(quick_sort_overloaded([HasLT(2), HasLT(3)])) # N: Revealed type is "builtins.list[__main__.HasLT]" reveal_type(quick_sort_overloaded([HasGT(2), HasGT(3)])) # N: Revealed type is "builtins.list[__main__.HasGT]" foo: list[HasLT | HasGT] = [HasLT(2), HasGT(3)] try: quick_sort_overloaded(foo) # E: Value of type variable "T" of "quick_sort_overloaded" cannot be "HasLT | HasGT" [type-var] except TypeError: print("`quick_sort_overloaded` failed as warned") Here's a playground to compare all those solutions. | 3 | 1 |
79,239,871 | 2024-11-30 | https://stackoverflow.com/questions/79239871/cant-access-tracks-audio-features-using-spotipy-fetching-from-spotify-api | Doing a project in university data science course and I'm working with API-s for the first time. I need to fetch different data about tracks using Spotify API, but I have encountered a problem early on. I can get access to some basic data about tracks popularity, duration, etc but I get 403 error when trying to fetch audio-features aswell. As far as I have searched for help on the web, apparently I have insufficient permissions in the scope of my Spotify access token. But I don't know how to get this problem fixed. I attempted to fetch audio features (e.g., tempo, danceability, energy) for the same tracks using sp.audio_features(track_ids). This is where the issue arises. When I run the code, I receive an error, and the audio features are not retrieved. Error: "HTTP Error for GET to https://api.spotify.com... with Params: {} returned 403 due to None" from spotipy.oauth2 import SpotifyOAuth import spotipy from dotenv import load_dotenv import os import pandas as pd load_dotenv() client_id = os.getenv("SPOTIPY_CLIENT_ID") client_secret = os.getenv("SPOTIPY_CLIENT_SECRET") redirect_uri = os.getenv("SPOTIPY_REDIRECT_URI") auth_manager = SpotifyOAuth( client_id=client_id, client_secret=client_secret, redirect_uri=redirect_uri, scope="user-top-read" ) sp = spotipy.Spotify(auth_manager=auth_manager) # Fetch data top_tracks = sp.current_user_top_tracks(limit=10) track_data = [] track_ids = [] for item in top_tracks['items']: track_ids.append(item['id']) track_data.append({ 'track_id': item['id'], 'track_name': item['name'], 'artists': ", ".join(artist['name'] for artist in item['artists']), 'popularity': item['popularity'], 'explicit': item['explicit'], 'duration_ms': item['duration_ms'] }) # Fetch audio features try: audio_features = sp.audio_features(track_ids) # Causing errors audio_features_df = pd.DataFrame(audio_features) df = pd.DataFrame(track_data) df = pd.merge(df, audio_features_df, left_on='track_id', right_on='id') except: print("Can not access audio features") df = pd.DataFrame(track_data) df.head(10) | Spotify deprecated several API endpoints on November 27th 2024. The get-audio-features was one of those endpoints. This in turn affected spotipy as well. | 3 | 6 |
79,239,590 | 2024-11-30 | https://stackoverflow.com/questions/79239590/how-to-declare-the-type-of-the-attribute-of-an-undeclared-variable | I want to declare the type of a variable which I do not declare myself, but which I know exists. Motivation I am currently working with the kivy library, which does a poor job indicating to static type checkers what types its fields have. I would like to indicate the types myself so I can get autocompletions. In the example below, self.ids will be a DictProperty (I believe this is a subclass of a dict) and I want to declare the type of self.ids.task_list_area. Problem statement How do I declare the type of self.ids.task_list_area in the following code? class MyScreenManager(ScreenManager): task_list = ["Do Homework", "Take out Trash"] class DetailsScreen(Screen): manager: MyScreenManager # this works # ids: dict # ids.task_list_area: BoxLayout # this does not work because ids does not exist def display_tasks(self): print(self.manager.task_list) # this is correctly infered as list[str] # this works at runtime, but the type checker has no clue what is happening print(self.ids.task_list_area) def display_tasks_with_annotation(self): self.ids.task_list_area: BoxLayout # this does not error, but does not work either print(self.ids.task_list_area) # type checker still knows nothing | You should annotate ids with a type for which it is known that it has an attribute task_list_area of type BoxLayout. If kivy doesn't provide such a type, you can create one using typing.Protocol: from typing import Protocol class HasTaskListArea(Protocol): task_list_area: BoxLayout class DetailsScreen(Screen): ids: HasTaskListArea # ... | 1 | 3 |
79,239,332 | 2024-11-30 | https://stackoverflow.com/questions/79239332/getting-the-uuid-of-an-entries-list-of-a-group-using-pykeepass | I'm looking to get the UUID of the entries of a group with pykeepass. #!/usr/bin/env python3 from pykeepass import PyKeePass kp = PyKeePass('dbtest.kdbx', password='password') kpGroup = kp.find_groups(name='GroupTest', first=True) print(kpGroup.entries) The code returns this: [Entry: "GroupTest/test ([email protected])", Entry: "GroupTest/test ([email protected])"] The problem is that when two entries of the group share the same Title (test) and username ([email protected]), I am unable to use kp.find_entries() on either the Title or the Username; having the UUID displayed by kpGroup.entries for each of those entries would solve the problem. I would like to have something like this displayed: [Entry: "GroupTest/test ([email protected]) UUID=uuid", Entry: "GroupTest/test ([email protected]) UUID=uuid"] | Note the difference between the real content of kpGroup.entries and the brief summary you get from print(kpGroup.entries): each element is a full Entry object, so you can pull any other attribute from this list of group entries. What you want is this: [(e.group, e.title, e.uuid) for e in kpGroup.entries] which will give you something like this: [(Group: "GroupTest", 'test', UUID('dfc7622c-1cc1-f045-84fd-6e6acab78539')), (Group: "GroupTest", 'test', UUID('d14a2296-703e-2a40-aa78-f559f6ec090f'))] | 2 | 1 |
79,239,232 | 2024-11-30 | https://stackoverflow.com/questions/79239232/how-to-implement-softmax-in-python-whereby-the-input-are-signed-8-integers | I am trying to implement a softmax function that takes in signed int8 input and returns a signed int8 output array. The current implementation I have going is this, import numpy as np def softmax_int8(inputs): inputs = np.array(inputs, dtype=np.int8) x = inputs.astype(np.int32) x_max = np.max(x) x_shifted = x - x_max scale_factor = 2 ** 14 exp_limit = 16 exp_x = np.clip(x_shifted + exp_limit, 0, None) exp_x = (1 << exp_x) sum_exp_x = np.sum(exp_x) if sum_exp_x == 0: sum_exp_x = 1 softmax_probs = (exp_x * scale_factor) // sum_exp_x max_prob = np.max(softmax_probs) min_prob = np.min(softmax_probs) range_prob = max_prob - min_prob if max_prob != min_prob else 1 scaled_probs = ((softmax_probs - min_prob) * 255) // range_prob - 128 outputs = scaled_probs.astype(np.int8) return outputs I test it using this input, Input = [101, 49, 6, -34, -75, -79, -38, 120, -55, 115] but I get this output array([-128, -128, -128, -128, -128, -128, -128, 127, -128, -121],dtype=int8). My expected output is array([-57, -70, -79, -86, -92, -94, -88, -54, -91, -56], dtype=int8). What am I doing wrong here and how can I fix it? | I think there are different mathematical definitions of softmax in different contexts. Wikipedia definition (on real numbers): exp(z) / sum(exp(z)) What I inferred from your code: (1<<(z-z_max + 16)) / sum((1 << (z-z_max + 16))) or something similar. 1<< === 2** obviously. The major difference is the base number of the exponential. With base too high you are highly likely to get underflow and get a lot of -128. Besides there are also a biase that maps the result to [-128, 127] range, which is trival and less important It's highly likely that the library that you takes test cases from use a different definition than both of above. I did some testing with your test case and floating point definition of softmax with matplotlib, and the following expression gives a good fit: softmax_naive = (np.exp(inarr / 128) / np.sum(np.exp(inarr / 128)) * 256) - 100 You can imagine that you probably need to do a >>7 to input bytes before doing 1<< 2-based exponential. To give completely identical result, surely you should dig into that library code, which I didn't have time to do. Below are validation codes: import numpy as np import matplotlib.pyplot as plt inarr = np.array([101, 49, 6, -34, -75, -79, -38, 120, -55, 115], dtype=np.int8).astype(np.double) expected_arr = np.array([-57, -70, -79, -86, -92, -94, -88, -54, -91, -56], dtype=np.int8).astype(np.double) print(expected_arr) softmax_naive = (np.exp(inarr / 128) / np.sum(np.exp(inarr / 128)) * 256) - 100 print(softmax_naive - expected_arr) plt.plot(inarr) plt.plot(expected_arr) plt.plot(softmax_naive) plt.show() | 4 | 2 |
79,235,770 | 2024-11-29 | https://stackoverflow.com/questions/79235770/python-how-to-set-dict-key-as-01-02-03-when-using-dpath | Let's say we want to create/maintain a dict with below key structure { "a": {"bb": {"01": "some value 01", "02": "some value 02", } }, } We use dpath .new() as below to do this import dpath d=dict() ; print(d) # d={} dpath.new(d,'a/bb/00/c', '00val') ; print(d) # d={'a': {'bb': [{'c': '00val'}]}} # d # NOTE this will set :bb as a list, and set at bb[00] ie idx=0 of list bb # d # what we want instead is d={'a': {'bb': {'00': {'c': '00val'} }}} # d # what we want is NOT d={'a': {'bb': [ {'c': '00val'} ]}} So when using 00 in the path, dpath translate it into list index at 0 instead of dict key 00 The question is how to set key as 00? Currently I have to dodge it by setting prefix s ie s00 s01 s02 | I traced into the source, and noticed that it calls a Creator, documented in the function as: creator allows you to pass in a creator method that is responsible for creating missing keys at arbitrary levels of the path (see the help for dpath.path.set) I copied _default_creator from dpath.segments, and just eliminated the code that treats int values as a list index. Here's the quick and dirty version: import dpath from dpath.segments import extend from typing import Sequence, MutableSequence def my_creator(current, segments, i, hints): segment = segments[i] length = len(segments) if isinstance(current, Sequence): segment = int(segment) if isinstance(current, MutableSequence): extend(current, segment) # Infer the type from the hints provided. if i < len(hints): current[segment] = hints[i][1]() else: # deleted the array stuff here current[segment] = {} d = dict() dpath.new(d, r'a/bb/00/c', '00val', creator=my_creator) print(d) output {'a': {'bb': {'00': {'c': '00val'}}}} | 1 | 1 |
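A small extension of the accepted answer's example (this assumes my_creator from the answer above is already defined in scope, and that dpath behaves the same way for the later keys; the output shown in the comment is what I would anticipate, not verified against every dpath version):

```python
import dpath  # my_creator is assumed to be defined as in the answer above

d = dict()
for key in ("00", "01", "02"):
    dpath.new(d, f"a/bb/{key}/c", f"{key}val", creator=my_creator)

print(d)
# anticipated: {'a': {'bb': {'00': {'c': '00val'}, '01': {'c': '01val'}, '02': {'c': '02val'}}}}
```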
79,237,559 | 2024-11-29 | https://stackoverflow.com/questions/79237559/azure-function-not-able-to-index-functions | Hi I have three azure function written in python, last day I did a minor change in one of them and deployed it. However as we don't have separated deployments for the threee of them the other two got updated as well. After deployi9ng what I thought would be a small change all the functions stopped working. when checking the logs I see this error. Error: /lib/x86_64-linux-gnu/libc.so.6: version GLIBC_2.33' not found (required by /home/site/wwwroot/.python_packages/lib/site-packages/cryptography/hazmat/bindings/_rust.abi3.so), Cannot find module. Please check the requirements.txt file for the missing module. For more info, please refer the troubleshooting guide: https://aka.ms/functions-modulenotfound. Current sys.path: ['/home/site/wwwroot', '/home/site/wwwroot/.python_packages/lib/site-packages', '/azure-functions-host/workers/python/3.11/LINUX/X64', '/usr/local/lib/python311.zip', '/usr/local/lib/python3.11', '/usr/local/lib/python3.11/lib-dynload', '/usr/local/lib/python3.11/site-packages'] Traceback (most recent call last): File '/azure-functions-host/workers/python/3.11/LINUX/X64/azure_functions_worker/utils/wrappers.py', line 44, in call return func(\*args, \*\*kwargs) ^^^^^^^^^^^^^^^^^^^^^ File '/azure-functions-host/workers/python/3.11/LINUX/X64/azure_functions_worker/loader.py', line 244, in index_function_app imported_module = importlib.import_module(module_name) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File '/usr/local/lib/python3.11/importlib/__init__.py', line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File '<frozen importlib._bootstrap>', line 1204, in _gcd_import File '<frozen importlib._bootstrap>', line 1176, in _find_and_load File '<frozen importlib._bootstrap>', line 1147, in _find_and_load_unlocked File '<frozen importlib._bootstrap>', line 690, in _load_unlocked File '<frozen importlib._bootstrap_external>', line 940, in exec_module File '<frozen importlib._bootstrap>', line 241, in _call_with_frames_removed File '/home/site/wwwroot/function_app.py', line 6, in <module> from app.view import view_source_file File '/home/site/wwwroot/app/view.py', line 6, in <module> import storage.blob as blob File '/home/site/wwwroot/storage/blob.py', line 4, in <module> from azure.storage.blob import BlobServiceClient, BlobSasPermissions, generate_blob_sas File '/home/site/wwwroot/.python_packages/lib/site-packages/azure/storage/blob/__init__.py', line 12, in <module> from ._blob_client import BlobClient File '/home/site/wwwroot/.python_packages/lib/site-packages/azure/storage/blob/_blob_client.py', line 21, in <module> from ._blob_client_helpers import ( File '/home/site/wwwroot/.python_packages/lib/site-packages/azure/storage/blob/_blob_client_helpers.py', line 17, in <module> from ._encryption import modify_user_agent_for_encryption, _ERROR_UNSUPPORTED_METHOD_FOR_ENCRYPTION File '/home/site/wwwroot/.python_packages/lib/site-packages/azure/storage/blob/_encryption.py', line 23, in <module> from cryptography.hazmat.primitives.ciphers import Cipher File '/home/site/wwwroot/.python_packages/lib/site-packages/cryptography/hazmat/primitives/ciphers/__init__.py', line 11, in <module> from cryptography.hazmat.primitives.ciphers.base import ( File '/home/site/wwwroot/.python_packages/lib/site-packages/cryptography/hazmat/primitives/ciphers/base.py', line 10, in <module> from 
cryptography.hazmat.bindings._rust import openssl as rust_openssl ImportError: /lib/x86_64-linux-gnu/libc.so.6: version GLIBC_2.33' not found (required by /home/site/wwwroot/.python_packages/lib/site-packages/cryptography/hazmat/bindings/_rust.abi3.so) I didnΒ΄t add any packages just did a small code change, and it was working normal until before the deployment. I tried to recreate the infrastructure, redeploy code, upgrade Debian version on host. I saw in other post that I maybe should downgrade the python version but this was working before and didn't change anything. Also mention that downgrading will cause other problems with dependencies. I am using python 3.11. Can somebody help me here_ | Please check your Health History in Service Health -> Health History. A global Azure Functions service issue has been occurring for the last 23 hours. Most probably you are affected as well. | 1 | 2 |
79,237,895 | 2024-11-29 | https://stackoverflow.com/questions/79237895/python-polars-create-pairwise-elements-of-two-lists | In Python Polars I am trying to "group" two lists together, for example in a Pl.Struct, removing the group when one of the elements is Null. For example, given this DataFrame: df = pl.DataFrame({ "Movie": [[None, "IT",]], "Count": [[30, 27]], }) shape: (1, 2) ββββββββββββββββ¬ββββββββββββ β Movie β Count β β --- β --- β β list[str] β list[i64] β ββββββββββββββββͺββββββββββββ‘ β [null, "IT"] β [30, 27] β ββββββββββββββββ΄ββββββββββββ I would like to get the result below: shape: (1, 1) βββββββββββββ β data β β --- β β struct[2] β βββββββββββββ‘ β {"IT",27} β βββββββββββββ However, the challenge I am finding in this case is that I need to do it in a single context (i.e. in a single with_columns). Is it possible? | df.with_columns( pl.struct( pl.col("Movie", "Count").list.gather( pl.col("Movie").list.eval(pl.element().is_not_null().arg_true()) ) .list.first() ) .alias("data") ) shape: (1, 3) ββββββββββββββββ¬ββββββββββββ¬ββββββββββββ β Movie β Count β data β β --- β --- β --- β β list[str] β list[i64] β struct[2] β ββββββββββββββββͺββββββββββββͺββββββββββββ‘ β [null, "IT"] β [30, 27] β {"IT",27} β ββββββββββββββββ΄ββββββββββββ΄ββββββββββββ | 1 | 1 |
79,229,633 | 2024-11-27 | https://stackoverflow.com/questions/79229633/an-iterative-plot-is-not-drawn-in-jupyter-notebook-while-python-works-fine-why | I have written an iterative plot code for python using plt.draw(). While it works fine for python interpreter and ipython also, but it does not work in jupyter notebook. I am running python in virtualenv in Debian. The codes are import matplotlib.pyplot as plt import numpy as np N=5 x = np.arange(-3, 3, 0.01) fig = plt.figure() ax = fig.add_subplot(111) for n in range(N): y = np.sin(np.pi*x*n) line, = ax.plot(x, y) plt.draw() plt.pause(0.5) line.remove() This works fine in command line such as $ python a.py But, in jupyter notebook, it plots first figure and then, repeat <Figure size ... 0 Axes> for N-1 times, not drawing the new figure inside axis. It looks the problem of jupyter, because no problem in running python code. How can I fix this? Or plotly would work in jupyter? (N.B. I happen to find a jupyter community site and posted there also.) | Jupyter Notebook handles plot display differently. If youβre able to modify the code, you can try the following approach: %matplotlib widget import matplotlib.pyplot as plt import numpy as np from matplotlib.animation import FuncAnimation # Parameters for the plot N = 5 x = np.arange(-3, 3, 0.01) # Create a figure and axes fig, ax = plt.subplots() line, = ax.plot([], [], lw=2) # Line object # Set the limits for the axes ax.set_xlim(-3, 3) ax.set_ylim(-1.5, 1.5) # Initialization function: called at the start def init(): line.set_data([], []) return line, # Update function: called for each frame def update(n): y = np.sin(np.pi * x * n) # Calculate sine wave line.set_data(x, y) # Update line data return line, # Create the animation using FuncAnimation ani = FuncAnimation(fig, update, frames=N, init_func=init, blit=True, interval=500) This method works well for me in VSCode. If you want more control over the animation, you can display it directly in the Jupyter Notebook using: # Display the animation in the Jupyter notebook from IPython.display import HTML HTML(ani.to_jshtml()) # Render the animation as HTML/JS The frames parameter specifies the data passed to the update(n) function during each interaction step. Setting frames=N is equivalent to using frames=range(N). This operates similarly to the following loop: for n in range(N): update(n) For more details about FuncAnimation, please refer to the documentation at: https://matplotlib.org/stable/api/_as_gen/matplotlib.animation.FuncAnimation.html. | 2 | 3 |
79,235,819 | 2024-11-29 | https://stackoverflow.com/questions/79235819/how-to-access-the-last-10-rows-and-first-two-columns-of-a-dataframe | I was given a file to practice pandas on, and was asked this question: Q: Access the last 10 rows and the first two columns of the index dataframe. So, I tried this code: df = index[(index.tail(10)) & (index.iloc[:, :2])] df but it gave me an error. How do I access the last 10 rows and first two columns of a dataframe? I expected to have the last 10 rows of the first two columns in a variable. | One way is to try with .iloc: df = index.iloc[-10: , :2] -10: takes the last 10 rows. Note the negative index, and look into how iloc works (see the "Indexing both axes" section of the pandas docs). :2 after the comma takes the first two columns, i.e. 0 and 1. | 1 | 3
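A tiny runnable check of the accepted answer, using a throwaway DataFrame (the column names here are made up for illustration):

```python
import pandas as pd

# 15 rows, 4 columns of dummy data
index_df = pd.DataFrame({f"col{i}": range(i, i + 15) for i in range(4)})

subset = index_df.iloc[-10:, :2]   # last 10 rows, first two columns
print(subset.shape)                # (10, 2)
print(subset)
```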
79,231,593 | 2024-11-27 | https://stackoverflow.com/questions/79231593/return-two-closest-rows-above-and-below-a-target-value-in-polars | I'm trying to figure out the most elegant way in Polars to find the two bracketing rows (first above and first below) a specific target. Essentially the Min > 0 & the Max < 0. data = { "strike": [5,10,15,20,25,30], "target": [16] * 6, } df = (pl.DataFrame(data) .with_columns( diff = pl.col('strike') - pl.col('target'))) shape: (6, 3) ββββββββββ¬βββββββββ¬βββββββ β strike β target β diff β β --- β --- β --- β β i64 β i64 β i64 β ββββββββββͺβββββββββͺβββββββ‘ β 5 β 16 β -11 β β 10 β 16 β -6 β β 15 β 16 β -1 β β 20 β 16 β 4 β β 25 β 16 β 9 β β 30 β 16 β 14 β ββββββββββ΄βββββββββ΄βββββββ This is what I'm trying to arrive at. shape: (2, 3) ββββββββββ¬βββββββββ¬βββββββ β strike β target β diff β β --- β --- β --- β β i64 β i64 β i64 β ββββββββββͺβββββββββͺβββββββ‘ β 15 β 16 β -1 β β 20 β 16 β 4 β ββββββββββ΄βββββββββ΄βββββββ I am able to do it through two separate filter operations but I can't seem to string these together and would like to avoid having to vstack the two individual results back together into a single data frame if possible. df1 = (pl.DataFrame(data) .with_columns( diff = pl.col('strike') - pl.col('target')) .filter((pl.col('diff') > 0)).min()) shape: (1, 3) ββββββββββ¬βββββββββ¬βββββββ β strike β target β diff β β --- β --- β --- β β i64 β i64 β i64 β ββββββββββͺβββββββββͺβββββββ‘ β 20 β 16 β 4 β ββββββββββ΄βββββββββ΄βββββββ df2 = (pl.DataFrame(data) .with_columns( diff = pl.col('strike') - pl.col('target')) .filter((pl.col('diff') < 0)).max()) shape: (1, 3) ββββββββββ¬βββββββββ¬βββββββ β strike β target β diff β β --- β --- β --- β β i64 β i64 β i64 β ββββββββββͺβββββββββͺβββββββ‘ β 15 β 16 β -1 β ββββββββββ΄βββββββββ΄βββββββ | data = {"strike": [5,10,15,20,25,30]} target = 16 diff = pl.col("strike") - target ( pl.DataFrame(data) # diff and target can be added as columns, but this is not actually needed # .with_columns(target=target, diff=diff) # filter for where the diff equals the min above 0, or the max below 0 .filter( (diff == diff.filter(diff > 0).min()) | (diff == diff.filter(diff < 0).max()) ) ) # shape: (2, 1) # ββββββββββ # β strike β # β --- β # β i64 β # ββββββββββ‘ # β 15 β # β 20 β # ββββββββββ | 1 | 2 |
79,235,198 | 2024-11-28 | https://stackoverflow.com/questions/79235198/pandas-merge-elegant-way-to-deal-with-filling-and-dropping-columns | Assume we have two data frames with columns as follows: df1[['name', 'year', 'col1', 'col2', 'col3']] df2[['name', 'year', 'col2', 'col3', 'col4']] I want to do the merge of df1 and df2 by name and year with the condition to keep all value of col2 col3 on df1, if it is None then use value in df2 I know how to do this in traditional way by merging df1 and df2 then using ffill(). Since my process of cleaning data involve many steps of merging different df with same columns, it make the code not so clean when I keep have to using ffill() and drop columns. I don't know if pd.merge has any built-in option like that? Sample code: df1 = pd.DataFrame({'name': ['a', 'a', 'b', 'b', 'c', 'c'], 'year': [2000, 2001, 2002, 2003, 2004, 2005], 'col1': [1,2,3,4,5,6], 'col2': [0,2,4,6,8,None], 'col3': [1,3,5,7,None,9]}) df2 = pd.DataFrame({'name': ['b', 'b', 'c', 'c', 'd', 'd'], 'year': [2003, 2004, 2004, 2005, 2006, 2007], 'col2': [10,20,30,None,50,60], 'col3': [100,300,500,700,None,900], 'col4': [5,6,7,8,9,10]}) Input: df1 name year col1 col2 col3 0 a 2000 1 0.00 1.00 1 a 2001 2 2.00 3.00 2 b 2002 3 4.00 5.00 3 b 2003 4 6.00 7.00 4 c 2004 5 8.00 NaN 5 c 2005 6 NaN 9.00 df2 name year col2 col3 col4 0 b 2003 10.00 100.00 5 1 b 2004 20.00 300.00 6 2 c 2004 30.00 500.00 7 3 c 2005 NaN 700.00 8 4 d 2006 50.00 NaN 9 5 d 2007 60.00 900.00 10 Output desired name year col1 col2 col3 col4 0 a 2000 1.00 0.00 1.00 NaN 1 a 2001 2.00 2.00 3.00 NaN 2 b 2002 3.00 4.00 5.00 NaN 3 b 2003 4.00 6.00 7.00 5.00 4 b 2004 NaN 20.00 300.00 6.00 5 c 2004 5.00 8.00 500.00 7.00 6 c 2005 6.00 NaN 9.00 8.00 7 d 2006 NaN 50.00 NaN 9.00 8 d 2007 NaN 60.00 900.00 10.00 | Assuming unique combinations of name/year, you could concat and groupby.first: out = pd.concat([df1, df2]).groupby(['name', 'year'], as_index=False).first() For a more generic merge, you could perform two merges, excluding the common, non-key, columns, then combine_first: cols = ['name', 'year'] common = df1.columns.intersection(df2.columns).difference(cols) out = (df1.merge(df2.drop(columns=common), on=cols, how='outer') .combine_first(df1.drop(columns=common).merge(df2, on=cols, how='outer')) ) Another option with a single merge: cols = ['name', 'year'] common = df1.columns.intersection(df2.columns).difference(cols) out = df1.merge(df2, on=cols, suffixes=(None, '_right'), how='outer') tmp = out.filter(regex='_right$') out[common] = out[common].fillna(tmp.set_axis(common, axis=1)) out.drop(columns=tmp.columns, inplace=True) And finally with a groupby.first: out = (df1.merge(df2, on=cols, suffixes=(None, '_right'), how='outer') .rename(columns=lambda x: x.removesuffix('_right')) .groupby(level=0, axis=1, sort=False).first() ) # variant for recent pandas versions: out = (df1.merge(df2, on=cols, suffixes=(None, '_right'), how='outer') .rename(columns=lambda x: x.removesuffix('_right')) .T.groupby(level=0, sort=False).first().T ) Output: name year col1 col2 col3 col4 0 a 2000 1.0 0.0 1.0 NaN 1 a 2001 2.0 2.0 3.0 NaN 2 b 2002 3.0 4.0 5.0 NaN 3 b 2003 4.0 6.0 7.0 5.0 4 b 2004 NaN 20.0 300.0 6.0 5 c 2004 5.0 8.0 500.0 7.0 6 c 2005 6.0 NaN 9.0 8.0 7 d 2006 NaN 50.0 NaN 9.0 8 d 2007 NaN 60.0 900.0 10.0 | 1 | 1 |
79,234,492 | 2024-11-28 | https://stackoverflow.com/questions/79234492/flattening-pandas-columns-in-a-non-trivial-way | I have a pandas dataframe which looks like the following: site pay delta over under phase a a b ID D01 London 12.3 10.3 -2.0 0.0 -2.0 D02 Bristol 7.3 13.2 5.9 5.9 0.0 D03 Bristol 17.3 19.2 1.9 1.9 0.0 I'd like to flatten the column multiindex so that the columns are ID site a b delta over under D01 London 12.3 10.3 -2.0 0.0 -2.0 D02 Bristol 7.3 13.2 5.9 5.9 0.0 D03 Bristol 17.3 19.2 1.9 1.9 0.0 I'm struggling with the online documentation and tutorials to work out how to do this. I'd welcome advice, ideally to do this in a robust way which doesn't hardcode column positions. UPDATE: the to_dict is {'index': ['D01', 'D02', 'D03'], 'columns': [('site', 'a'), ('pay', 'a'), ('pay', 'b'), ('delta', ''), ('over', ''), ('under', '')], 'data': [['London', 12.3, 10.3, -2.0, 0.0, -2.0], ['Bristol', 7.3, 13.2, 5.8999999999999995, 5.8999999999999995, 0.0], ['Bristol', 17.3, 19.2, 1.8999999999999986, 1.8999999999999986, 0.0]], 'index_names': ['ID'], 'column_names': [None, 'phase']} | As the above answer mentioned, the requirement is not entirely clear, so I have tried out just the flattening of the data; let us know if this is what you need. import pandas as pd data = { 'index': ['D01', 'D02', 'D03'], 'columns': [('site', 'a'), ('pay', 'a'), ('pay', 'b'), ('delta', ''), ('over', ''), ('under', '')], 'data': [['London', 12.3, 10.3, -2.0, 0.0, -2.0], ['Bristol', 7.3, 13.2, 5.9, 5.9, 0.0], ['Bristol', 17.3, 19.2, 1.9, 1.9, 0.0]], 'index_names': ['ID'], 'column_names': [None, 'phase'] } # Construct the MultiIndex columns and DataFrame multiindex_columns = pd.MultiIndex.from_tuples(data['columns'], names=data['column_names']) df = pd.DataFrame(data['data'], index=data['index'], columns=multiindex_columns) # Flattening the MultiIndex columns df.columns = [' '.join(filter(None, col)) for col in df.columns] print(df) Flattened DataFrame: site a pay a pay b delta over under D01 London 12.30 10.30 -2.00 0.00 -2.00 D02 Bristol 7.30 13.20 5.90 5.90 0.00 D03 Bristol 17.30 19.20 1.90 1.90 0.00 | 1 | 2
79,233,300 | 2024-11-28 | https://stackoverflow.com/questions/79233300/generate-multiple-disjunct-samples-from-a-dataframe | I am doing some statistic of a very large dataframe that takes sums of multiple random samples. I would like the samples to be disjuct (no number should be present in two different samples). Minimal example that might use some numbers multiple times: import polars as pl import numpy as np df = pl.DataFrame( {"a": np.random.random(1000)} ) N_samples = 50 N_logs = 20 sums = [ df.sample(N_logs).select(pl.col("a").sum()).item() for _ in range(N_samples) ] How to avoid multiple usage of same numbers? | You can sample them all at once using with_replacement = False (which is default) and then aggregate into N_samples sums: ( df .sample(N_samples * N_logs) .group_by(pl.int_range(pl.len()) // N_logs) .sum() .get_column("a") ) shape: (50,) Series: 'a' [f64] [ 9.993712 10.667377 9.983055 7.092786 10.780031 β¦ 9.384218 8.57084 10.085927 12.77378 10.23612 ] | 2 | 1 |
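If it helps, a quick sanity check on the accepted approach (re-creating the same df, N_samples and N_logs as above): sampling all rows in one call without replacement guarantees no row is reused across the per-sample sums, and a seed makes the draw reproducible.

```python
import numpy as np
import polars as pl

df = pl.DataFrame({"a": np.random.random(1000)})
N_samples, N_logs = 50, 20

# Tag rows with their original position, then draw everything in one go
sampled = df.with_row_index().sample(N_samples * N_logs, seed=42)

# No row index appears twice, so the per-sample sums use disjoint rows
assert sampled.get_column("index").n_unique() == N_samples * N_logs

sums = (
    sampled
    .group_by(pl.int_range(pl.len()) // N_logs)
    .agg(pl.col("a").sum())
    .get_column("a")
)
print(sums)
```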
79,232,831 | 2024-11-28 | https://stackoverflow.com/questions/79232831/how-to-use-django-querystring-with-get-form | In Django 5.1 {% querystring %} was added. Is there some way to use it with a GET form? For example, let's say we have a template with: <span>Paginate by:</span> <a href="{% querystring paginate_by=50 %}">50</a> {# ... #} <form method="GET"> <input name="query" value="{{ request.GET.query }}"> <button type="submit">Search</button> </form> Assuming that we are currently on localhost:8000/?paginate_by=50, how do I change the form so clicking Search won't delete the paginate_by query parameter - so what I want is for example localhost:8000/?paginate_by=50&query=abc and not localhost:8000/?query=abc? Before 5.1 I handled that by providing the form with hidden fields based on GET parameters, but I am hoping that a more elegant solution is now possible. | Is there some way to use it with a GET form? No. The {% querystring … %} template tag [Django-doc] is used to generate querystrings. This is not what a form element is supposed to do. If you add a querystring at the end of the action=".." [mdn-doc] for a form that makes a GET request, then the browser will normally strip away the querystring. You can probably make use of the template tag together with some JavaScript that, when the form is submitted, obtains the data from the form and from the {% querystring %} output (or, probably better, from JavaScript's window.location.search), and mixes these together. You then prevent the default form flow, but it requires some JavaScript to handle it. I would however advise not to manually render HTML forms if you are searching. You can use django-filter [GitHub] to automatically generate a filter form that will preserve the other options. | 4 | 3
79,233,242 | 2024-11-28 | https://stackoverflow.com/questions/79233242/how-to-get-relative-frequencies-from-pandas-groupby-with-two-grouping-variables | Suppose my data look as follows: import datetime import pandas as pd df = pd.DataFrame({'datetime': [datetime.datetime(2024, 11, 27, 0), datetime.datetime(2024, 11, 27, 1), datetime.datetime(2024, 11, 28, 0), datetime.datetime(2024, 11, 28, 1), datetime.datetime(2024, 11, 28, 2)], 'product': ['Apple', 'Banana', 'Banana', 'Apple', 'Banana']}) datetime product 0 2024-11-27 00:00:00 Apple 1 2024-11-27 01:00:00 Banana 2 2024-11-28 00:00:00 Banana 3 2024-11-28 01:00:00 Apple 4 2024-11-28 02:00:00 Banana All I want is to plot the relative frequencies of the products sold at each day. In this example 1/2 (50%) of apples and 1/2 of bananas on day 2024-11-27. And 1/3 apples and 2/3 bananas on day 2024-11-28 What I managed to do: absolute_frequencies = df.groupby([pd.Grouper(key='datetime', freq='D'), 'product']).size().reset_index(name='count') total_counts = absolute_frequencies.groupby('datetime')['count'].transform('sum') absolute_frequencies['relative_frequency'] = absolute_frequencies['count'] / total_counts absolute_frequencies.pivot(index='datetime', columns='product', values='relative_frequency').plot() I am pretty confident, there is a much less complicated way, since for the absolute frequencies I simply can use: df.groupby([pd.Grouper(key='datetime', freq='D'), 'product']).size().unstack('product').plot(kind='line') | You can use a crosstab with normalize: ct = pd.crosstab(df['datetime'].dt.normalize(), df['product'], normalize='index') Output: product Apple Banana datetime 2024-11-27 0.500000 0.500000 2024-11-28 0.333333 0.666667 As a graph: ct.plot.bar() Output: | 2 | 1 |
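As an aside, a roughly equivalent single-chain alternative to the crosstab (re-creating the question's df; normalize='index' in the crosstab corresponds to normalize=True per group here):

```python
import datetime
import pandas as pd

df = pd.DataFrame({
    'datetime': [datetime.datetime(2024, 11, 27, 0), datetime.datetime(2024, 11, 27, 1),
                 datetime.datetime(2024, 11, 28, 0), datetime.datetime(2024, 11, 28, 1),
                 datetime.datetime(2024, 11, 28, 2)],
    'product': ['Apple', 'Banana', 'Banana', 'Apple', 'Banana'],
})

ct2 = (
    df.groupby(df['datetime'].dt.normalize())['product']
      .value_counts(normalize=True)
      .unstack(fill_value=0)
)
print(ct2)
ct2.plot.bar()
```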
79,233,050 | 2024-11-28 | https://stackoverflow.com/questions/79233050/django-model-has-manytomany-field-how-to-get-all-ids-without-fetching-the-objec | I have a data structure like this: class Pizza(models.Model): name = models.CharField(max_length=100) toppings = models.ManyToManyField(Topping, related_name="pizzas") class Topping(models.Model): name = models.CharField(max_length=100) And to get all topping IDs related to a pizza I can do this: list(map(lambda t: t.id, pizza.toppings.all())) But this fetches all toppings completely from the database, even though I only need the IDs. Is there a way to get the IDs without fetching the complete objects (for performance reasons)? | Use the .values_list() method to get only the IDs: topping_ids = pizza.toppings.all().values_list("id") # Result is a QuerySet of 1-tuples, e.g. [(4,), (5,), (12,), (54,)] If you want plain IDs rather than 1-tuples, pass flat=True: topping_ids = pizza.toppings.all().values_list("id", flat=True) # e.g. [4, 5, 12, 54] | 2 | 2
79,232,452 | 2024-11-28 | https://stackoverflow.com/questions/79232452/perform-a-binary-op-on-values-in-a-pandas-dataframe-column-by-a-value-in-that-sa | Sorry for the mouthful title. I think this is best illustrated by an example. Let's say we have an item that has different rarity levels, all of which have dfferent prices in different shops. I want to know how much more expensive a given rarity is than the base "normal" rarity in each shop separately. How could I add a new "premium" column that would give me the result of dividing the price of the item with a given rarity at a given shop, by the price of that item in "normal" quality at that shop in particular? The result can be seen in the table below. item quality price shop premium (todo) base price bread normal 2.0 Lumbridge 1.0 2.0 bread rare 3.0 Lumbridge 1.5 2.0 bread legendary 5.0 Lumbridge 2.5 2.0 bread normal 1.5 Varrock 1.0 1.5 bread rare 4.5 Varrock 3.0 1.5 bread legendary 6.0 Varrock 4.0 1.5 bread normal 3.0 Yanille 1.0 3.0 bread rare 2.0 Yanille 0.66 3.0 bread legendary 4.0 Yanille 1.33 3.0 I thought about repeating the rows with normal quality as a new column (given as the "base price" column), but I don't see any mechanism that could allow for that. If this is possible could it be done if instead of just one "item" we had many (i.e. extending the filtering to multiple columns)? | Here's one approach, assuming a single 'normal' price per item / shop. Data used # adding another item, and deleting 1 shop import pandas as pd data = {'item': {0: 'bread', 1: 'bread', 2: 'bread', 3: 'bread', 4: 'bread', 5: 'bread'}, 'quality': {0: 'normal', 1: 'rare', 2: 'legendary', 3: 'normal', 4: 'rare', 5: 'legendary'}, 'price': {0: 2.0, 1: 3.0, 2: 5.0, 3: 1.5, 4: 4.5, 5: 6.0}, 'shop': {0: 'Lumbridge', 1: 'Lumbridge', 2: 'Lumbridge', 3: 'Varrock', 4: 'Varrock', 5: 'Varrock', }, 'premium (todo)': {0: 1.0, 1: 1.5, 2: 2.5, 3: 1.0, 4: 3.0, 5: 4.0}, 'base price': {0: 2.0, 1: 2.0, 2: 2.0, 3: 1.5, 4: 1.5, 5: 1.5}} df = pd.DataFrame(data) df = pd.concat([df, df.assign(item=df['item'].replace('bread', 'milk'))], ignore_index=True) Code df = ( df.merge( df.query('quality == "normal"')[['item', 'shop', 'price']], on=['item', 'shop'], suffixes=('', '_base'), how='left' ) .assign(premium=lambda x: x['price'].div(x.pop('price_base'))) ) Output: item quality price shop premium (todo) base price premium 0 bread normal 2.0 Lumbridge 1.0 2.0 1.0 1 bread rare 3.0 Lumbridge 1.5 2.0 1.5 2 bread legendary 5.0 Lumbridge 2.5 2.0 2.5 3 bread normal 1.5 Varrock 1.0 1.5 1.0 4 bread rare 4.5 Varrock 3.0 1.5 3.0 5 bread legendary 6.0 Varrock 4.0 1.5 4.0 6 milk normal 2.0 Lumbridge 1.0 2.0 1.0 7 milk rare 3.0 Lumbridge 1.5 2.0 1.5 8 milk legendary 5.0 Lumbridge 2.5 2.0 2.5 9 milk normal 1.5 Varrock 1.0 1.5 1.0 10 milk rare 4.5 Varrock 3.0 1.5 3.0 11 milk legendary 6.0 Varrock 4.0 1.5 4.0 Explanation / intermediates Use df.query to filter on 'normal' and select only columns ['item', 'shop', 'price']. df.query('quality == "normal"')[['item', 'shop', 'price']] item shop price 0 bread Lumbridge 2.0 3 bread Varrock 1.5 6 milk Lumbridge 2.0 9 milk Varrock 1.5 Pass the filtered df to df.merge as right, left joining on ['item', 'shop'] and adding a suffix for column 'price' coming from right (_base). 
This generates your 'base price' column: # df.merge(...)['price_base'] 0 2.0 1 2.0 2 2.0 3 1.5 4 1.5 5 1.5 6 2.0 7 2.0 8 2.0 9 1.5 10 1.5 11 1.5 Name: price_base, dtype: float64 Chain df.assign and use Series.div with Series.pop for the divisor, assigning the result as new column 'premium'. Easier alternative, if: Your df is properly sorted on 'item', 'quality' (categorical, with 'normal' first), 'shop' (as in your example), and; Each 'normal' price has a non-NaN value. df['premium'] = df['price'].div( df['price'].where(df['quality'] == 'normal').ffill() ) Using Series.div + Series.where + Series.ffill. | 1 | 2 |
79,232,565 | 2024-11-28 | https://stackoverflow.com/questions/79232565/app-registration-error-aadsts500011-show-tenant-is-as-domain-instead-of-long-str | I've tried numerous times to register an app and connect to in in python: app_id = '670...' tenant_id = '065...' client_secret_value = 'YJr...' import requests import msal authority = f'https://login.microsoftonline.com/{tenant_id}' scopes = ['https://analysis.microsoft.net/powerbi/api/.default'] app = msal.ConfidentialClientApplication(app_id, authority=authority, client_credential=client_secret_value) result = None result = app.acquire_token_for_client(scopes=scopes) Overview: I feel like I've followed this video exactly: https://www.youtube.com/watch?v=3Fu8FjvYvyc&t=577s&ab_channel=JJPowerBI I'm up to minute 8:38. I'm getting the following error messsage and googling it shows me the tenatid should be the id and not the domain name. I'm not sure why that's happening and what I need to change to get this to work Edit adding API Permissions I am the owner of the subscription. Edit2: Looks a little different then the comment, but I enabled this and it says it could take 15 minutes to update. | The error occurred as you are using wrong scope value to generate access token for Power BI API. Initially, I too got same error when I tried to generate token with scope as https://analysis.microsoft.net/powerbi/api/.default like this: Make sure to use https://analysis.windows.net/powerbi/api/.default/ as scope value that worked and generated token successfully as below: app_id = 'appId' tenant_id = 'tenantId' client_secret_value = 'secret' import msal authority = f'https://login.microsoftonline.com/{tenant_id}' scopes = ['https://analysis.windows.net/powerbi/api/.default'] app = msal.ConfidentialClientApplication( app_id, authority=authority, client_credential=client_secret_value ) result = app.acquire_token_for_client(scopes=scopes) if 'access_token' in result: print(result['access_token']) else: print(result) Response: | 1 | 2 |
79,231,405 | 2024-11-27 | https://stackoverflow.com/questions/79231405/no-enum-for-numpy-uintp | I am trying to wrap a C pointer array of type size_t with a numpy ndarray via Cython using the following: cimport numpy as cnp from libcpp.vector cimport vector cnp.import_array() cdef size_t num_layers = 10 cdef vector[size_t] steps_taken_vec = vector[size_t]() steps_taken_vec.resize(3 * num_layers) cdef size_t* steps_taken_ptr = steps_taken_vec.data() cdef cnp.npy_intp[2] shape = [3, num_layers] cdef cnp.npy_intp ndim = 2 self.shooting_method_steps_taken_array = cnp.PyArray_SimpleNewFromData( ndim, &shape[0], cnp.NPY_UINTP, # <-- This is the problem steps_taken_ptr) # steps_taken_ptr is a size_t* The above produces the error "cimported module has no attribute 'NPY_UINTP'". According to numpy's documentation there should be an enum that directs numpy to create an array using size_t: https://numpy.org/devdocs/reference/c-api/dtype.html#c.NPY_TYPES.NPY_UINTP The PyArray_SimpleNewFromData API requires a enum that defines the type used to create the ndarray. However, the actual init.pxd does not appear to have that enum. It does set the type correctly, see line 25 here but no enum in this list. Those links, and my code, is using numpy 1.26.4. I looked ahead at 2.0+ and see that there was some definitional changes to this type, but the enum still appears to be missing (see here) As a workaround, I am using cnp.NPY_UINT64 which works but I am not sure if that is guranteed to be the same size as size_t across platforms and into the future. Am I missing something here? | The easiest solution is probably to declare NPY_UINTP yourself. def extern from *: cdef enum: NPY_UINTP This just assumes Cython that some enum-like constant exists called NPY_UINTP. You can pass that to PyArray_SimpleNewFromData. It doesn't matter that it's not from the "official" Numpy pxd file. As I said in a comment, do report the missing constant to Numpy (or better yet, contribute a fix to Numpy). If they don't know then they can't fix it. | 1 | 2 |
79,230,796 | 2024-11-27 | https://stackoverflow.com/questions/79230796/skipping-empty-row-and-fixing-length-mismatch | I am working with an Excel data set that looks like this Col 1 Col 2 Col 3 Col 4 Col 5 Col 6 Col 7 Col 8 1 2 A\n \nB C\n \nD E\n \nF G\n \nH I\n \nJ K\n \nL 3 4 5 6 a\n \nb c e\n \nf g\n \nh i\n \nj k\n \nl I want to use Col 3 to determine how many new rows need to be created, and then split Col 3 through Col 8 based on "\n \n". My 2 issues are when Col 3 is empty and when there is a length mismatch between Col 3 and Col 4. My goal is to have a data set that looks like this Col 1 Col 2 Col 3 Col 4 Col 5 Col 6 Col 7 Col 8 1 2 A C E G I K 1 2 B D F H J L 5 6 a c e g i k 5 6 b c f h j l You can see that in the goal data set Col 1 and Col 2 are replicated for the new row created, the row that contained an empty Col 3 was deleted, and the row that had a length mismatch between Col 3 and Col 4 just copied the value for the new row since there was nothing to split. The following is the script that I have been working with. You can see the commented out section of if/elif/else that I have tried to use to solve my two issues. There are no issues when Col 3 is populated and the index lengths are the same across Col 3 - Col 8. def splitappendix(moduleName): tableHeaders = ["A", "B", "C", "D", "E", "F", "G", "H"] splitfunctionHeaders = ["Col 1", "Col 2", "Col 3", "Col 4", "Col 5", "Col 6", "Col 7", "Col 8"] df = pd.read_excel(f'Output\\{moduleName}') df.columns = splitfunctionHeaders PrintConsole(df) for i, row in df.iterrows(): df.dropna(axis = 0, subset = ['Col 3'], inplace = True) PrintConsole(df) df_rep = pd.DataFrame(columns =[f'Col {i}' for i in range(1,9)]) for i, row in enumerate(df.index): ''' if df.isnull().iloc[i,2]: df.dropna(axis = 0, subset = ['Col 3'], inplace = True) if df['Col 3'][i] == "NaN": df.dropna(axis = 0, subset = ['Col 3'], inplace = True) elif len(df['Col 3'][i]) != len(df['Col 4'][i]): single_rep = pd.DataFrame(index = range(len(df['Col 3'][i].split('\n \n'))), columns =[f'Col {i}' for i in range(1,9)]) single_rep['Col 1'] = df.iat[i,0] single_rep['Col 2'] = df.iat[i,1] single_rep['Col 4'] = df.iat[i,3] PrintConsole(single_rep) for j in range(3,3) and range(5,9): single_rep['Col {}'.format(j)] = df['Col {}'.format(j)][i].split('\n \n') df_rep = pd.concat([df_rep, single_rep]) PrintConsole(df_rep) ''' #else: single_rep = pd.DataFrame(index = range(len(df['Col 3'][i].split('\n \n'))), columns =[f'Col {i}' for i in range(1,9)]) single_rep['Col 1'] = df.iat[i,0] single_rep['Col 2'] = df.iat[i,1] PrintConsole(single_rep) for j in range(3,9): single_rep['Col {}'.format(j)] = df['Col {}'.format(j)][i].split('\n \n') df_rep = pd.concat([df_rep, single_rep]) PrintConsole(df_rep) df_rep.reset_index(drop=True, inplace=True) df_rep.columns = tableHeaders PrintConsole(df_rep) #df_rep.to_excel(f'Output\\{moduleName}') if __name__ == "__main__": moduleName = 'Appendix.xlsx' splitappendix(moduleName) PrintConsole("Split Complete") ExitCode() | Assuming this input: df = pd.DataFrame({'Col 1': [1, 3, 5], 'Col 2': [2, 4, 6], 'Col 3': ['A\\n \\nB', None, 'a\\n \\nb'], 'Col 4': ['C\\n \\nD', None, 'c'], 'Col 5': ['E\\n \\nF', None, 'e\\n \\nf'], 'Col 6': ['G\\n \\nH', None, 'g\\n \\nh'], 'Col 7': ['I\\n \\nJ', None, 'i\\n \\nj'], 'Col 8': ['K\\n \\nL', None, 'k\\n \\nl'], }) You could split the strings with a custom function, then explode with de-duplication as described here: def explode_dedup(s): s = s.explode() return s.set_axis( pd.MultiIndex.from_arrays([s.index, 
s.groupby(level=0).cumcount()]) ) def split(x): if isinstance(x, str): return l if len(l:=x.split('\\n \\n'))>1 else l[0] return x m = df['Col 3'].notna() out = (pd.concat({c: explode_dedup(df.loc[m, c].map(split)) for c in df}, axis=1) .sort_index() .groupby(level=0).ffill() .droplevel(1).convert_dtypes() ) Output: Col 1 Col 2 Col 3 Col 4 Col 5 Col 6 Col 7 Col 8 0 1 2 A C E G I K 0 1 2 B D F H J L 2 5 6 a c e g i k 2 5 6 b c f h j l | 3 | 3 |
79,209,183 | 2024-11-20 | https://stackoverflow.com/questions/79209183/problem-with-mismatched-length-when-using-a-mask | I'm writing a code and I have a function that calculates the values that are not fulfilling a condition with the values that are fulfilling the condition, but I'm having a lot of trouble with managing the shape of the arrays. I have a similar function, but with other logical structure that does this (MWE for the function that works) import numpy as np def f_est(f, fd, mask): """ This function returns the estimated value given a function f and its derivative fd for the points selected by the mask, estimating forward. Parameters: - f: Array of function values. - fd: Array of derivative values of the function. - mask: Boolean mask to select points in f and fd. """ h = 0.000001 # Find the last index that satisfies the mask last_index = np.max(np.where(mask)[0]) # Create shifted masks (inspired by f_cal), but centered on the last index mask_current = mask[:last_index + 1] # Mask for current positions up to the last index mask_prev = mask[:last_index] # Mask for previous positions (for fd_prev_slice) mask_prev2 = mask[:last_index - 1] # Mask for previous positions (for fd_prev2_slice) # Apply masks to f and fd (with shifts), centered on the last index f_slice = f[:last_index + 1][mask_current] # Note: adjusted to align with mask_current fd_slice = fd[:last_index + 1][mask_current] fd_prev_slice = fd[:last_index][mask_prev] fd_prev2_slice = fd[:last_index - 1][mask_prev2] # Perform the calculations with consistent slices, estimating forward # Use the last value of f_slice, fd_slice, fd_prev_slice, and fd_prev2_slice for estimation last_f = f_slice[-1] last_fd = fd_slice[-1] last_fd_prev = fd_prev_slice[-1] if len(fd_prev_slice) > 0 else 0 last_fd_prev2 = fd_prev2_slice[-1] if len(fd_prev2_slice) > 0 else 0 estimated_next_value = ( last_f + h * last_fd + 1 / 2 * (h * last_fd - h * last_fd_prev) + 5 / 12 * ((h * last_fd - h * last_fd_prev) - (h * last_fd_prev - h * last_fd_prev2)) ) return estimated_next_value f = np.array([1, 2, 3, 4, 5, 6, 7]) fd = f mask = np.array([True, True, True, False, False, False, False]) print("Original Array:", f) print("Length of Original Array:", len(f)) print("Masked Array:", f[~mask]) print("Length of Masked Array:", len(f[~mask])) f[~mask] = f_est(f, fd, mask) print("Final Array:", f) print("Length of Final Array:", len(f)) But with this function (MWE that doesn't work): import numpy as np def f_cal(var, dvar, mask): """ Calculates next value using trapezoidal method with masks, estimating forward. 
Parameters: var: array of current values dvar: array of derivatives mask: boolean array indicating which positions to calculate """ h = 0.0000001 # Find the last index that satisfies the mask last_index = np.max(np.where(mask)[0]) # Create masks for the current positions and the next one mask_current = mask[:last_index] mask_next = mask[1:last_index+1] # Mark the next index as True # Adjust the arrays to align with the masks var_current = var[:last_index+1][mask_current] dvar_current = dvar[:last_index+1][mask_current] dvar_next = dvar[:last_index+2][mask_next][:1] # Only the next value # Calculate using trapezoidal method with masks, estimating forward result = var_current + h * dvar_next - 1/2*(h*dvar_next-h*dvar_current[-1]) return result f = np.array([1, 2, 3, 4, 5,6,7]) fd = f mask = np.array([True, True, True, False, False, False, False]) print("Original Array:", f) print("Length of Original Array:", len(f)) print("Masked Array:", f[~mask]) print("Length of Masked Array:", len(f[~mask])) f[~mask] = f_cal(f, fd, mask) print("Final Array:", f) print("Length of Final Array:", len(f)) I'm having a lot of trouble keeping the length of the array to match the number of elements that are not satisfying the condition | I believe you're trying to use the Taylor series to approximate the function at a position beyond the last known value. The function values are stored in the var array, and the first derivatives are in the dvar array. It seems like you're using mask to identify the last known function value, and h represents the argument step. From your code, it looks like you're using terms up to the second derivative, which you approximate as the difference between adjacent first derivatives divided by the argument step. This can be expressed as:
f(x+h) = f(x) + h·f′(x) + ½·h²·f″(x) + … ≈ f(x) + h·f′(x) + ½·h²·(f′(x+h)-f′(x))/h
At least, this resonates with the expression you return as the result (which I tweaked slightly): var_current + h*dvar_next - 1/2*(h*dvar_next-h*dvar_current) We can transform the above line step by step as follows: var_current + h*dvar_next - 1/2*h*dvar_next + 1/2*h*dvar_current == var_current + 1/2*h*dvar_next + 1/2*h*dvar_current == var_current + h*dvar_current + 1/2*h*dvar_next - 1/2*h*dvar_current == var_current + h*dvar_current + 1/2*h*(dvar_next - dvar_current) == var_current + h*dvar_current + 1/2*(h**2)*(dvar_next - dvar_current)/h The last line reminded me of the Taylor series, which is why I suspect that the mention of the Trapezoidal method in your function description might be a mistake. If that's the case, the function could look like this: def f_cal(var, dvar, mask, h=1e-7): index = np.asarray(mask).nonzero()[0] assert index.size > 0, 'Nowhere to start from' assert index[-1] < len(mask)-1, 'Nowhere to extrapolate to' pos = index[-1] f = var[pos] df = dvar[pos] df_next = dvar[pos+1] return f + h*df + h*(df_next - df)/2 However, I might have misunderstood your intent. For example, if your reference to the trapezoidal method is deliberate, or you're using a different convention, you may need to clarify your question.
P.S. Answer to the original question about mismatching lengths when using a mask: you seem to have missed handling array dimensions in some places. Here's an alternative implementation to compare with (note that the changes are made based on the assumption that you approximate the function using Taylor series): def f_cal(var, dvar, mask): h = 0.0000001 last_index = np.max(np.where(mask)[0]) mask_current = mask[:last_index+1] mask_next = mask[:last_index+2] var_current = var[:last_index+1][mask_current][-1] dvar_current = dvar[:last_index+1][mask_current][-1] dvar_next = dvar[:last_index+2][mask_next][-1] return var_current + h*dvar_next - 1/2*h*(dvar_next-dvar_current) | 2 | 0
79,224,765 | 2024-11-25 | https://stackoverflow.com/questions/79224765/ansible-python-interpreter-fallback-is-not-working | I have a playbook with a mix of connection: local tasks and remote tasks that use an AWS dynamic inventory. The Python interpreter has different paths on local and remote systems. Through another question python3 venv - how to sync ansible_python_interpreter for playbooks that mix connection:local and target system, I have determined I should use ansible_python_interpreter_fallback to configure two Python interpreter paths to try. But I cannot get them working. I have tried: Defining it in my playbook: --- - hosts: tag_group_web_servers vars_files: - group_vars/au roles: - autodeploy vars: ansible_python_interpreter_fallback: - /Users/jd/projects/mgr2/ansible/bin/python3 - /usr/bin/python3 , which is ignored And defining it in the dynamic inventory: plugin: aws_ec2 regions: - ap-southeast-2 - us-east-1 hostnames: - ip-address keyed_groups: - prefix: "tag" key: tags - prefix: "group" key: tags - prefix: "security_groups" key: 'security_groups|json_query("[].group_name")' all: hosts: 127.0.0.1: ansible_connection: local ansible_python_interpreter: "/Users/jd/projects/mgr2/ansible/bin/python3" remote: ansible_host: remote.host.ip ansible_python_interpreter: /usr/bin/python3 ansible_python_interpreter_fallback: - /Users/jd/projects/mgr2/ansible/bin/python3 - /usr/bin/python3 , which is also ignored. I'm confused where else this can go or why it doesn't work. Here is my Ansible version: ansible [core 2.17.4] config file = /Users/jd/projects/mgr2/ansible/ansible.cfg configured module search path = ['/Users/jd/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /opt/homebrew/lib/python3.11/site-packages/ansible ansible collection location = /Users/jd/.ansible/collections:/usr/share/ansible/collections executable location = /opt/homebrew/bin/ansible python version = 3.11.10 (main, Sep 7 2024, 01:03:31) [Clang 15.0.0 (clang-1500.3.9.4)] (/opt/homebrew/opt/[email protected]/bin/python3.11) jinja version = 3.1.4 libyaml = True | You cannot mix and match a dynamic inventory and a "regular" (YAML) inventory in the same file like that, you will have to create two different inventory files. The reason being the inventory is inferred by Ansible when it loads the inventory files. You can actually realise this if you run Ansible in verbose mode (using the option -vvv) $ ansible-playbook play.yml -vvv (... some lines not relevant for the issue at hand) Using inventory plugin 'ansible_collections.amazon.aws.plugins.inventory.aws_ec2' to process inventory source '/usr/local/ansible/inventories/inventory.aws_ec2.yml' Parsed /usr/local/ansible/inventories/inventory.yml inventory source with yaml plugin (...) The good thing is that you can configure Ansible inventory(ies) either as a list of files or as a folder of inventories in your configuration file. 
For example, my ansible.cfg reads inventory = /usr/local/ansible/inventories/ But you could also do something like inventory = /usr/local/ansible/inventories/inventory.yml, # YAML inventory /usr/local/ansible/inventories/inventory.aws_ec2.yml # AWS inventory But, in your case, I would probably use a default on all hosts and redefine it only for the local machine, all of this happening in the YAML inventory (i.e.: in /usr/local/ansible/inventories/inventory.yml, in the example above): all: vars: ansible_python_interpreter: /usr/bin/python3 hosts: 127.0.0.1: ansible_connection: local ansible_python_interpreter: >- /Users/jd/projects/mgr2/ansible/bin/python3 Another solution would be to use the compose parameter to set the variable in the dynamic inventory: plugin: aws_ec2 regions: - ap-southeast-2 - us-east-1 hostnames: - ip-address # You can define variable(s) for the dynamic hosts here, # and it can even be applied with Jinja templating, making those dynamic # Mind that the usage of both double and single quote here **is** important # as it will indicate to the Jinja templating that it is a literal string compose: ansible_python_interpreter: "'/usr/bin/python3'" keyed_groups: - prefix: "tag" key: tags - prefix: "group" key: tags - prefix: "security_groups" key: 'security_groups|json_query("[].group_name")' Then in the YAML inventory: all: hosts: 127.0.0.1: ansible_connection: local ansible_python_interpreter: >- /Users/jd/projects/mgr2/ansible/bin/python3 | 2 | 2 |
79,215,387 | 2024-11-22 | https://stackoverflow.com/questions/79215387/extract-the-original-paper-links-from-google-scholar-author-profiles | I put together the following python code to get the links of the papers published by a random author (from google scholar): import requests from bs4 import BeautifulSoup as bs import pandas as pd def fetch_scholar_links_from_url(url: str) -> pd.DataFrame: pd.set_option('display.max_columns', None) pd.set_option('display.max_colwidth', None) headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36' } s = requests.Session() s.headers.update(headers) r = s.post(url, data={'json': '1'}) soup = bs(r.json()['B'], 'html.parser') works = [ ('https://scholar.google.com' + x.get('href')) for x in soup.select('a') if 'javascript:void(0)' not in x.get('href') and len(x.get_text()) > 7 ] df = pd.DataFrame(works, columns=['Link']) return df url = 'https://scholar.google.ca/citations?user=iYN86KEAAAAJ&hl=en' df_links = fetch_scholar_links_from_url(url) print(df_links) So far so good (even though this code returns only the first 20 papers..). However, since the links extracted from the google scholar's author page are not the original links of the journals (but still google scholar links, i.e. "nested links"), I would like to enter each of the extracted google scholar's links, and get the original links of the journals, by re-applying the same function for links extraction. However, if I perform the same function for extracting the links of the papers (this is an example with the first link in the extracted list of links), url2 = df_links.iloc[0]['Link'] df_links_2 = fetch_scholar_links_from_url(url2) print(df_links_2) I get the following error: traceback (most recent call last): File "/opt/anaconda3/lib/python3.12/site-packages/requests/models.py", line 974, in json return complexjson.loads(self.text, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/anaconda3/lib/python3.12/json/__init__.py", line 346, in loads return _default_decoder.decode(s) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/anaconda3/lib/python3.12/json/decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/anaconda3/lib/python3.12/json/decoder.py", line 355, in raw_decode raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/my_username/my_folder/collect_urls_papers_per_author.py", line 73, in <module> df_links_2 = fetch_scholar_links_from_url(url2) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/my_username/my_folder/collect_urls_papers_per_author.py", line 55, in fetch_scholar_links_from_url soup = bs(r.json()['B'], 'html.parser') ^^^^^^^^ File "/opt/anaconda3/lib/python3.12/site-packages/requests/models.py", line 978, in json raise RequestsJSONDecodeError(e.msg, e.doc, e.pos) requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0) I tried several times to fix this error, in order to get the original links of the papers, but I am not able to do so. Do you have suggestions on how to fix these errors? 
Just for clarity, in my example, url2 = df_links.iloc[0]['Link'] corresponds to https://scholar.google.com/citations?view_op=view_citation&hl=en&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:kNdYIx-mwKoC And, with the commands df_links_2 = fetch_scholar_links_from_url(url2) print(df_links_2) I would expect to get the following link... https://proceedings.neurips.cc/paper_files/paper/2014/hash/5ca3e9b122f61f8f06494c97b1afccf3-Abstract.html ...which you would get, manually, simply by clicking on the paper's title in https://scholar.google.com/citations?view_op=view_citation&hl=en&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:kNdYIx-mwKoC . | TL;DR: I know you expect Python code but Google blocked most of my attempts so to get the job done I used a combination of other tools and python. Scroll down to the bottom to view the results. Also, I'm on macOS and you might not have the following tools: - curl - jq - ripgrep - awk - sed The final python script uses the output of the first other-tools-only (later reffered as the #1 script) script to process the citation links and get the paper links. I'm pretty sure your code is getting flagged by Google's anti-bot systems. My guess is that Google uses some sophisticated techniques to detect bots, such as the order of headers, the presence of certain headers, the behavior of the HTTP client or even SSL behavior. I've tried your code and a lot of modifications of it. I tired with or without headers. I even used VPN. None of that worked. I always got picked up by Google's systems and flagged. But... When I used a barebones curl request (no custom headers) I got a valid response every single time. Even when making it right after running the Python script, which got blocked. Same IP, no VPN. curl worked, Python got flagged. So, I thought I could mimic curl. 
import http.client import ssl from urllib.parse import urlencode from bs4 import BeautifulSoup def parse_page(content): soup = BeautifulSoup(content, "lxml") return [a['href'] for a in soup.select('.gsc_a_at')] payload = { "user": "iYN86KEAAAAJ", "hl": "en", } params = urlencode(payload) # Create a connection with SSL verification disabled (curl does this by default) context = ssl.create_default_context() context.check_hostname = False context.verify_mode = ssl.CERT_NONE conn = http.client.HTTPSConnection("scholar.google.ca", context=context) # Define the headers headers = { 'User-Agent': 'curl/8.7.1', 'Accept': '*/*', } conn.request("GET", f"/citations?{params}", headers=headers) response = conn.getresponse() data = response.read() try: data = data.decode("utf-8", errors="ignore") except UnicodeDecodeError: data = data.decode("ISO-8859-1") print("\n".join(parse_page(data))) conn.close() And it worked: [/citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:kNdYIx-mwKoC /citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:ZeXyd9-uunAC /citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:AXPGKjj_ei8C /citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:KxtntwgDAa4C /citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:MXK_kJrjxJIC /citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:_Qo2XoVZTnwC /citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:RHpTSmoSYBkC /citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:4JMBOYKVnBMC /citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:NhqRSupF_l8C /citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:IWHjjKOFINEC /citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:bEWYMUwI8FkC /citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:Mojj43d5GZwC /citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:7PzlFSSx8tAC /citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:fPk4N6BV_jEC /citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:ufrVoPGSRksC /citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:maZDTaKrznsC /citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:vRqMK49ujn8C /citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:LkGwnXOMwfcC /citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:R3hNpaxXUhUC /citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:Y0pCki6q_DkC So far so good but I still needed to get all the other citation links, right? And that's where all the problems really began. It turns out that the first request is a GET (first 20 links) but then it becomes a POST, as you have it in your cdoe. But to get it working you need to figure out the pagination. There's some funky JS code in the HTML source that does just that. 
function Re() { var a = window.location.href.split("#")[0]; Pe = a.replace(/([?&])(cstart|pagesize)=[^&]*/g, "$1"); Qe = Math.max(+a.replace(/.*[?&]cstart=([^&]*).*/, "$1") || 0, 0); Oe = +a.replace(/.*[?&]pagesize=([^&]*).*/, "$1") || 0; Oe = Math.max(Math.min(Oe, 100), 20); Q("#gsc_bpf_more", "click", Ne) } And there's a snippet that handles the POST request. function cc(a, b, c) { var d = new XMLHttpRequest; d.onreadystatechange = function() { if (d.readyState == 4) { var e = d.status , f = d.responseText , h = d.getResponseHeader("Content-Type") , k = d.responseURL , m = window.location , l = m.protocol; m = "//" + m.host + "/"; k && k.indexOf(l + m) && k.indexOf("https:" + m) && (e = 0, h = f = ""); c(e, f, h || "") } } ; d.open(b ? "POST" : "GET", a, !0); d.setRequestHeader("X-Requested-With", "XHR"); b && d.setRequestHeader("Content-Type", "application/x-www-form-urlencoded"); b ? d.send(b) : d.send(); But (again!)... Getting this working in Python turned out to be pretty cumbersome and error-prone. I managed to get the JSON response but failed at decoding issues. So... I thought I'd give curl another go. I cooked up a simple bash script. Here's the breakdown: I use pairs of numbers to mimic the pagination values and start the while loop I use curl with default headers to send POST requests only I add the {"json": 1} payload, although I'm not sure if it's really needed I pipe the response to jq and grab the value of B, which is the HTML that contains the links I use rg or ripgrep to grab the citation links I use sed to replace & with & to get working URL paths Finally, I use awk to add the base_url to the extracted paths #!/bin/bash base_url="https://scholar.google.ca/" user="iYN86KEAAAAJ" echo -e "0 20\n20 80\n100 100\n200 100" | while read cstart pagesize; do curl -s -X POST "${base_url}citations?user=${user}&hl=en&cstart=${cstart}&pagesize=${pagesize}" -d "json=1" \ | jq -r ".B" \ | rg -o -r '$1' 'href="\/(.+?)"' \ | sed 's/&/\&/g' \ | awk '{print "https://scholar.google.ca/" $0}' done Running this should give you all the citation links for that particular scolar. 
Output (first 20 links plus 1 from the next page; I redacted it on purpose): https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:kNdYIx-mwKoC https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:ZeXyd9-uunAC https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:AXPGKjj_ei8C https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:KxtntwgDAa4C https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:MXK_kJrjxJIC https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:_Qo2XoVZTnwC https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:RHpTSmoSYBkC https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:4JMBOYKVnBMC https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:NhqRSupF_l8C https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:IWHjjKOFINEC https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:bEWYMUwI8FkC https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:Mojj43d5GZwC https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:7PzlFSSx8tAC https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:fPk4N6BV_jEC https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:ufrVoPGSRksC https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:maZDTaKrznsC https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:vRqMK49ujn8C https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:LkGwnXOMwfcC https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:R3hNpaxXUhUC https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&citation_for_view=iYN86KEAAAAJ:Y0pCki6q_DkC https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=iYN86KEAAAAJ&cstart=20&pagesize=80&citation_for_view=iYN86KEAAAAJ:5nxA0vEk-isC and much more... Here's a slightly modified script where you can save the results to a file and keep looping for even more results. Let's get all citation links for the Godfather of AI and the Noble prize winner Geoffrey Hinton. :) Working script #1 Note: make sure you chmod this bash file. 
For example: chmod +x no1_script.sh And use it by running: $ ./no1_script.sh #!/bin/bash max_results=1000 # Modify this to get more results for different users (scholars) base_url="https://scholar.google.ca/" user="JicYPdAAAAAJ" # Geoffrey Hinton cstart=0 while [ $cstart -lt $max_results ]; do # Determine pagesize based on cstart if [ $cstart -eq 0 ]; then pagesize=20 elif [ $cstart -eq 20 ]; then pagesize=80 else pagesize=100 fi curl -s -X POST "${base_url}citations?user=${user}&hl=en&cstart=${cstart}&pagesize=${pagesize}" -d "json=1" \ | jq -r ".B" \ | rg -o -r '$1' 'href="\/(.+?)"' \ | sed 's/&/\&/g' \ | awk '{print "https://scholar.google.ca/" $0}' >> "${user}.txt" # Update cstart for the next iteration if [ $cstart -eq 0 ]; then cstart=20 elif [ $cstart -eq 20 ]; then cstart=100 else cstart=$((cstart + 100)) fi done If you cat the output file you'll get 709 links. cat JicYPdAAAAAJ.txt | wc -l 709 Putting it all together: Feed the output of the first bash script that collects citation links to this script: Here I'm using Geoffrey Hinton's citation links that I stored in the JicYPdAAAAAJ.txt file from the #1 script. import random import subprocess import time from bs4 import BeautifulSoup def wait_for(max_wait: int = 10) -> None: wait = random.randint(1, max_wait + 1) print(f"Waiting for {wait} seconds...") time.sleep(wait) # Use the output of the first script to get the citation links citation_links_file = [ line for line in open("JicYPdAAAAAJ.txt").read().split("\n") if line ] # Remove [:3] to visit all citation links for link in citation_links_file[:3]: print(f"Visiting {link}...") curl = subprocess.run( ["curl", link], capture_output=True, ) try: soup = ( BeautifulSoup(curl.stdout.decode("ISO-8859-1"), "html.parser") .select_one("#gsc_oci_title_gg > div > a") ) print(f"Paper link: {soup.get('href')}") wait_for() except AttributeError: print("No paper found.") continue You should get the paper links (not all citations have papers links though). Sample output: Visiting https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=JicYPdAAAAAJ&citation_for_view=JicYPdAAAAAJ:VN7nJs4JPk0C... Paper link: https://proceedings.neurips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf Waiting for 7 seconds... Visiting https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=JicYPdAAAAAJ&citation_for_view=JicYPdAAAAAJ:C-SEpTPhZ1wC... Paper link: https://hal.science/hal-04206682/document Waiting for 5 seconds... Visiting https://scholar.google.ca/citations?view_op=view_citation&hl=en&oe=ASCII&user=JicYPdAAAAAJ&citation_for_view=JicYPdAAAAAJ:GFxP56DSvIMC... No paper found. | 1 | 3 |
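For anyone who wants to keep the whole pagination in a single Python file, the same curl-driven POST loop can be sketched with subprocess. This is an untested sketch: the cstart/pagesize pairs, the json=1 payload and the B field come from the bash scripts above, while the empty-fragment stop condition and the ISO-8859-1 decoding are assumptions that may need the same tweaking discussed earlier:

```python
import json
import subprocess

from bs4 import BeautifulSoup

base_url = "https://scholar.google.ca/"
user = "JicYPdAAAAAJ"  # same scholar ID as script #1

links = []
for cstart, pagesize in [(0, 20), (20, 80), (100, 100), (200, 100)]:
    url = f"{base_url}citations?user={user}&hl=en&cstart={cstart}&pagesize={pagesize}"
    curl = subprocess.run(
        ["curl", "-s", "-X", "POST", url, "-d", "json=1"],
        capture_output=True,
    )
    payload = json.loads(curl.stdout.decode("ISO-8859-1"))
    fragment = payload.get("B") or ""
    if not fragment:
        break  # assumption: an empty B field means there are no more pages
    soup = BeautifulSoup(fragment, "html.parser")
    links += [
        "https://scholar.google.ca" + a["href"]
        for a in soup.find_all("a", href=True)
        if a["href"].startswith("/citations?view_op=view_citation")
    ]

print("\n".join(links))
```

BeautifulSoup already unescapes & in attribute values, so the sed step from the bash version is not needed here.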
79,226,797 | 2024-11-26 | https://stackoverflow.com/questions/79226797/how-to-build-a-cffi-extension-for-inclusion-in-binary-python-wheel-with-uv | I am migrating a library of mine to use uv to manage the package. My package, however, includes a C extension that is wrapped and compiled through CFFI (there's a build script for this). Below is the current version of pyproject.toml that fails to run the build script and, If I build the extension manually, uv build --wheel does not include the _readdbc.so file in the wheel. [project] authors = [ {name = "Flavio Codeco Coelho", email = "[email protected]"}, {name = "Sandro Loch", email = "[email protected]"}, ] license = {text = "AGPL-3.0"} requires-python = "<4,>=3.9" dependencies = [ "dbfread<3,>=2.0.7", "tqdm<5,>=4.64.0", "cffi<2,>=1.15.1", "pyyaml>=6", "setuptools>=75.6.0", ] name = "pyreaddbc" version = "1.1.0" description = "pyreaddbc package" readme = "README.md" classifiers = [ "Development Status :: 3 - Alpha", "Intended Audience :: Developers", "Operating System :: OS Independent", "Topic :: Software Development :: Libraries :: Python Modules", "License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", ] [tool.setuptools] packages = ["pyreaddbc"] [build-system] requires = ["setuptools>=61"] build-backend = "setuptools.build_meta" [build] default = ["python", "-m", "./pyreaddbc/_build_readdbc.py"] # this script never runs [tool.black] line-length = 79 skip-string-normalization = true target-version = ["py39", "py310", "py311", "py312"] exclude = "docs/" [project.optional-dependencies] dev = [ "pytest<9.0.0,>=8.1.1", "pandas<3.0.0,>=2.1.0", ] [project.urls] Repository = "https://github.com/AlertaDengue/pyreaddbc" What am I missing? 
| After a few days, bumping my head against the lack of specific documentation about this, here is the final form of a pyproject.toml that works : [project] authors = [ { name = "Flavio Codeco Coelho", email = "[email protected]" }, { name = "Sandro Loch", email = "[email protected]" }, ] license = { text = "AGPL-3.0" } requires-python = "<4,>=3.9" dependencies = [ "dbfread<3,>=2.0.7", "tqdm<5,>=4.64.0", "cffi<2,>=1.15.1", "pyyaml>=6", "setuptools>=75.6.0", ] name = "pyreaddbc" version = "1.2.0" description = "pyreaddbc package" readme = "README.md" classifiers = [ "Development Status :: 3 - Alpha", "Intended Audience :: Developers", "Operating System :: OS Independent", "Topic :: Software Development :: Libraries :: Python Modules", "License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", ] [tool.setuptools] py-modules = ["pyreaddbc"] ext-modules = [ {name = "pyreaddbc._readdbc", sources = ["pyreaddbc/_readdbc.c", "pyreaddbc/c-src/blast.c"], include-dirs = ["pyreaddbc", "pyreaddbc/c-src"]}#, libraries = ["readdbc"], library-dirs = ["pyreaddbc"]}, ] [build-system] requires = ["setuptools>=61", "cffi"] build-backend = "setuptools.build_meta" [build] default = ["python", "-m", "./pyreaddbc/_build_readdbc.py"] [tool.black] line-length = 79 skip-string-normalization = true target-version = ["py39", "py310", "py311", "py312"] exclude = "docs/" [project.optional-dependencies] dev = [ "pytest<9.0.0,>=8.1.1", "pandas<3.0.0,>=2.1.0", ] [project.urls] Repository = "https://github.com/AlertaDengue/pyreaddbc" I hope this will help other people needing to automate CFFI extension build using uv. | 3 | 1 |
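A quick way to confirm that the compiled extension actually ends up inside the wheel is to list the archive with the standard library. The sketch below assumes the default dist/ output directory of uv build --wheel and the _readdbc module name used in this project:

```python
import glob
import zipfile

# newest wheel produced by `uv build --wheel`
wheel_path = sorted(glob.glob("dist/pyreaddbc-*.whl"))[-1]

with zipfile.ZipFile(wheel_path) as wheel:
    members = wheel.namelist()

compiled = [name for name in members if "_readdbc" in name]
print("\n".join(compiled) if compiled else "compiled extension missing from wheel")
```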
79,219,057 | 2024-11-23 | https://stackoverflow.com/questions/79219057/how-to-efficiently-represent-a-matrix-product-with-repeated-elements | I have a tensor a that is of shape (n/f, c, c) that I want to multiply by another tensor b of shape (n, c, 1). Each row of a represents f rows of b, such that the naive way of implementing this would be to simply repeat each row of a f times before performing the multiplication: n = 100 c = 5 f = 10 a = tf.constant(np.random.rand(n//f, c, c)) b = tf.constant(np.random.rand(n, c, c)) a_prime = tf.repeat(a, f, 0) result = a_prime @ b This works, but for large n and f I'm worried about the memory footprint of the repeat. I could of course loop through each row and perform dot-products manually, but that would have implications on performance. Is there a better way? | By reshaping the tensors and using broadcasting, we can perform the matrix multiplication more efficiently, eliminating the need for explicit repetition. import tensorflow as tf import numpy as np n = 100 c = 5 f = 10 a = tf.constant(np.random.rand(n // f, c, c)) b = tf.constant(np.random.rand(n, c, c)) # Reshape a and b a_reshaped = tf.reshape(a, (1, n // f, c, c)) b_reshaped = tf.reshape(b, (n, 1, c, c)) # perform matrix multiplication result = tf.matmul(a_reshaped, b_reshaped) result = tf.reduce_sum(result, axis=1) print(result.shape) output: (100, 5, 5) | 3 | 2 |
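If the concern is memory, a further sketch (assuming n is divisible by f, as in the example) reshapes b into f-sized blocks so that each a[g] is broadcast only against its own block, which reproduces the tf.repeat(a, f, 0) @ b result without materializing the repeated tensor:

```python
import numpy as np
import tensorflow as tf

n, c, f = 100, 5, 10
a = tf.constant(np.random.rand(n // f, c, c))
b = tf.constant(np.random.rand(n, c, c))

# group b into (n//f, f, c, c) so that block g lines up with a[g]
b_blocks = tf.reshape(b, (n // f, f, c, c))

# (n//f, 1, c, c) @ (n//f, f, c, c) broadcasts over the block dimension
result_blocks = tf.matmul(tf.expand_dims(a, axis=1), b_blocks)

# back to one row per row of b
result = tf.reshape(result_blocks, (n, c, c))
print(result.shape)  # (100, 5, 5)
```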
79,223,097 | 2024-11-25 | https://stackoverflow.com/questions/79223097/settings-py-cannot-find-psycopg2-squlite3-and-postgresql-problem-no-such-t | I'm trying to deploy a simple app (learning log from Python Crash Course) to heroku. The app runs but upon login, I get a 500 error, and in debug=true the error is: no such table: auth_user. I realise this has something to do with squlite3 and Postgresql, but when I try to build the code in settings: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': BASE_DIR / 'db.sqlite3', } } the error message is ImportError: DLL load failed while importing _psycopg: The specified module could not be found. I have attempted to install and import psycopg2, and it appears to be in requirements.txt. But I think the paths are not aligning, since psycopg2 is in a python pathway, and setting.py is looking in my project environment. I'm very confused! Please help! | You have not correctly configured your DATABASES in settings.py. You are attempting to use the default sqlite3 configuration with a PostgreSQL engine which wouldn't work. Also Heroku does not work well with sqlite3 and it wouldn't be a good idea to use it in production. See Why are my file uploads missing/deleted from the Application?. It's great you've installed psycopg2 - the database adapter for PostgreSQL. You'd usually have to configure your PostgreSQL database as shown below: DATABASES = { "default": { "ENGINE": "django.db.backends.postgresql", "NAME": "mydatabase", "USER": "mydatabaseuser", "PASSWORD": "mypassword", "HOST": "127.0.0.1", "PORT": "5432", } } Ensure you've created your database and enter the credentials in the DATABASES dictionary in settings.py. When deploying to Heroku, you'd have to provision your database on Heroku by selecting the Postgre addon. Please see Deploying Django App to Heroku: Full Guide for a complete guide to deploying your Django app on Heroku. | 1 | 2 |
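On Heroku the provisioned Postgres add-on exposes its credentials through the DATABASE_URL environment variable, so a common pattern (assuming the third-party dj-database-url package is installed and listed in requirements.txt) is to let it build the DATABASES entry:

```python
# settings.py
import dj_database_url

DATABASES = {
    "default": dj_database_url.config(
        default="sqlite:///db.sqlite3",  # local fallback when DATABASE_URL is unset
        conn_max_age=600,
    )
}
```

The auth_user table only exists once the migrations have been applied to that database, e.g. with heroku run python manage.py migrate.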
79,223,478 | 2024-11-25 | https://stackoverflow.com/questions/79223478/era5-pressure-values-on-theta-levels | How do I interpret the Pressure values in this dataset? I downloaded ERA5 Potential Vorticity and Pressure(Monthly means of Daily means, analysis, Potential Temperature levels). The pressure data doesnβt make sense to me. Please see the attached image. The potential vorticity data makes sense. It is a 2D grid with the same shape as the lats/lons. But the pressure data is 1D with a length that doesnβt match anything I can think of. Any advice? Thanks! Download request: #!/usr/bin/env python import cdsapi c = cdsapi.Client() c.retrieve("reanalysis-era5-complete", { "class": "ea", "date": "20200101/20200201/20200301/20200401/20200501/20200601/20200701/20200801/20200901/20201001/20201101/20201201/20210101/20210201/20210301/20210401/20210501/20210601/20210701/20210801/20210901/20211001/20211101/20211201/20220101/20220201/20220301/20220401/20220501/20220601/20220701/20220801/20220901/20221001/20221101/20221201/20230101/20230201/20230301/20230401/20230501/20230601/20230701/20230801/20230901/20231001/20231101/20231201/20240101/20240201/20240301/20240401/20240501/20240601/20240701/20240801", "expver": "1", "levelist": "265/275/285/300/315/320/330/350/370/395/430/475/530/600/700/850", "levtype": "pt", "param": "54.128/60.128", "stream": "moda", "type": "an" }, "output") | So if you convert your grib file to netcdf like this cdo -f nc4 copy test.grb test.nc4 and look at the header with ncdump ncdump -h test.nc4 you will see that the PV field (var60) is retrieved on an unstructured reduced Gaussian grid and the pressure field (var54) is a spectral field, that's why you have the zeros as these are the spectral coefficients. float var54(time, lev, nsp, nc2) ; var54:table = 128 ; var54:CDI_grid_type = "spectral" ; var54:axis = "TZ--" ; var54:truncation = 639 ; float var60(time, lev, rgrid) ; var60:table = 128 ; var60:CDI_grid_type = "gaussian_reduced" ; var60:CDI_grid_num_LPE = 320 ; var60:CDI_grid_latitudes = "lat" ; var60:CDI_grid_reduced_points = "reduced_points" ; This makes conversion to a regular grid problematic with cdo, you can do this with eccodes though. However, a far easier solution is to simply request a regular lat-lon grid in the original retrieval, and let MARS handle the interpolation for you. You can do this with the keyword "grid", like this (I also use the format keyword to get the file directly as netcdf, but of course you can stay with grib if you prefer). #!/usr/bin/env python import cdsapi c = cdsapi.Client() c.retrieve("reanalysis-era5-complete", { "class": "ea", "date": "20200101", "expver": "1", "levelist": "265/275/285/300/315/320/330/350/370/395/430/475/530/600/700/850", "levtype": "pt", "param": "54.128/60.128", "stream": "moda", "type": "an", "grid": "F128", "format":"netcdf" }, "test2.nc") This retrieves the data directly on a regular lat-lon grid. My test2.nc file now has these dimensions: longitude = 512 ; latitude = 256 ; theta = 16 ; And here are my regular-gridded field headers for the two variables: float pres(theta, latitude, longitude) float pv(theta, latitude, longitude) I used a low resolution 128 grid here, but you can select others. See the WIKI for the grid options. | 1 | 3 |
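Once the regular lat-lon file is on disk, a quick sanity check with xarray (assuming a netCDF backend is installed; the variable names pres, pv and theta are the ones reported above and may differ in other retrievals) could look like:

```python
import xarray as xr

ds = xr.open_dataset("test2.nc")
print(ds)  # expect longitude: 512, latitude: 256 and a theta coordinate with 16 levels

# pressure and potential vorticity on a single theta level, e.g. 330 K
pres_330 = ds["pres"].sel(theta=330)
pv_330 = ds["pv"].sel(theta=330)
print(pres_330.shape, pv_330.shape)
```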
79,221,167 | 2024-11-24 | https://stackoverflow.com/questions/79221167/blip2-type-mismatch-exception | I'm trying to create an image captioning model using hugging face blip2 model on colab. My code was working fine till last week (Nov 8) but it gives me an exception now. To install packages I use the following command: !pip install -q git+https://github.com/huggingface/peft.git transformers bitsandbytes datasets To load blip2 processor and model I use the following code: model_name = "Salesforce/blip2-opt-2.7b" processor = AutoProcessor.from_pretrained(model_name) model = Blip2ForConditionalGeneration.from_pretrained(model_name,device_map="auto",load_in_8bit=False) I use the following code to generate captions: def generate_caption(processor, model, image_path): image = PILImage.open(image_path).convert("RGB") print("image shape:" + image.size) device = "cuda" if torch.cuda.is_available() else "cpu" # Preprocess the image inputs = processor(images=image, return_tensors="pt").to(device) print("Input shape:", inputs['pixel_values'].shape) print("Device:", device) # Additional debugging for key, value in inputs.items(): print(f"Key: {key}, Shape: {value.shape}") # Generate caption with torch.no_grad(): generated_ids = model.generate(**inputs) caption = processor.decode(generated_ids[0], skip_special_tokens=True) return caption here is the code that uses this method to generate captions: image_path = "my_image_path.jpg" caption = generate_caption(processor, model, image_path) print(f"{image_path}: {caption}" finally, this is the outputs and errors of running the code above: image shape: (320, 240) Input shape: torch.Size([1, 3, 224, 224]) Device: cuda Key: pixel_values, Shape: torch.Size([1, 3, 224, 224]) --------------------------------------------------------------------------- . . . /usr/local/lib/python3.10/dist-packages/transformers/models/blip_2/modeling_blip_2.py in generate(self, pixel_values, input_ids, attention_mask, interpolate_pos_encoding, **generate_kwargs) 2314 if getattr(self.config, "image_token_index", None) is not None: 2315 special_image_mask = (input_ids == self.config.image_token_index).unsqueeze(-1).expand_as(inputs_embeds) -> 2316 inputs_embeds[special_image_mask] = language_model_inputs.flatten() 2317 else: 2318 logger.warning_once( RuntimeError: shape mismatch: value tensor of shape [81920] cannot be broadcast to indexing result of shape [0] I have searched the internet and used various AI models for help but to no avail. My guess is that this is a package update problem since my code had no problem last week. (I tried to restore my code to Nov 8 version but it throws an exception.) Moreover, I don't understand how 81920 is calculated in the error message. | I had the same issue. You need to add a prompt in the processor: prompt = " " inputs = processor(images=image, text=prompt, return_tensors="pt").to(device="cuda", dtype=torch.float16) Hope it helps. | 2 | 1 |
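Folding that fix back into the helper from the question gives something like the sketch below; the single-space prompt mirrors the answer, while keeping the inputs in the model's default dtype (rather than casting to float16) is an assumption that matches the load_in_8bit=False setup:

```python
import torch
from PIL import Image

def generate_caption(processor, model, image_path, prompt=" "):
    image = Image.open(image_path).convert("RGB")
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # passing text=prompt supplies the input_ids the newer modeling code expects
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(device)

    with torch.no_grad():
        generated_ids = model.generate(**inputs)

    return processor.decode(generated_ids[0], skip_special_tokens=True).strip()
```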
79,227,338 | 2024-11-26 | https://stackoverflow.com/questions/79227338/case-insensitive-uniqueconstraint-using-sqlalchemy-with-postgresql-db | I am using SQLAlchemy and PostgreSQL, and I am trying to create a case-insensitive unique constraint that works like this UniqueConstraint( 'country', 'organisation_id', func.lower(func.trim('name')), name='uq_country_orgid_lower_trim_name' ) Ensuring a unique combination of name, country and organisation id, regardless of case and spacing in the name, i.e. "Name 1", "name1", "nAmE 1" would all be handled as "name1" in the check. I want to make sure that I do not change the actual case or spacing of the name saved in the database. How would I go about this? | This is mostly the ORM version of @snakecharmerb and avoids the use of text (nothing wrong with sanitized text, it is just a preference). This answer suggests using a unique index, instead of unique constraint because SQLA lacks support. from sqlalchemy import create_engine, func, Index, select, column from sqlalchemy.exc import IntegrityError from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session class Base(DeclarativeBase): ... class SomeTable(Base): __tablename__ = "some_table" __table_args__ = ( Index( "uq_country_orgid_lower_trim_name", "country", "organization_id", func.lower(func.replace(column("name"), " ", "")), unique=True, ), ) name: Mapped[str] id: Mapped[int] = mapped_column(primary_key=True) country: Mapped[str] organization_id: Mapped[int] engine = create_engine("postgresql+psycopg://") Base.metadata.create_all(engine) with Session(engine) as session: session.add(SomeTable(country="C1", organization_id=1, name="NAME 1")) session.commit() with Session(engine) as session: rows = ( SomeTable(country="C1", organization_id=1, name="NAME1"), SomeTable(country="C1", organization_id=1, name="Name 1"), SomeTable(country="C1", organization_id=1, name="name1"), SomeTable(country="C1", organization_id=1, name="nAmE 1"), ) for row in rows: try: with session.begin_nested(): session.add(row) except IntegrityError: print("failed as expected") with Session(engine) as session: row = session.scalar(select(SomeTable)) assert row assert row.name == "NAME 1" | 1 | 2 |
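Because the uniqueness lives in a functional index, lookups and duplicate checks have to apply the same normalization on the query side; a small sketch reusing the SomeTable model above:

```python
from sqlalchemy import func, select

def normalized(column):
    # same expression as the unique index: lowercase, spaces removed
    return func.lower(func.replace(column, " ", ""))

def find_existing(session, name, country, organization_id):
    stmt = select(SomeTable).where(
        SomeTable.country == country,
        SomeTable.organization_id == organization_id,
        normalized(SomeTable.name) == name.lower().replace(" ", ""),
    )
    return session.scalar(stmt)
```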
79,226,508 | 2024-11-26 | https://stackoverflow.com/questions/79226508/pyspark-groupeddata-chain-several-different-aggregation-methods | I am playing with GroupedData in pyspark. This is my environment. Welcome to ____ __ / __/__ ___ _____/ /__ _\ \/ _ \/ _ `/ __/ '_/ /___/ .__/\_,_/_/ /_/\_\ version 3.5.1 /_/ Using Scala version 2.12.18, OpenJDK 64-Bit Server VM, 11.0.24 Branch HEAD https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.GroupedData.html I wonder if the following is possible. Say I want to use only the methods of GroupedData, and not import any functions from pyspark.sql.functions. OK, suppose I have some DataFrame and I've grouped it already by column A and I've got a GroupedData object back. Now I want to do on my GroupedData object say sum(column B), and say avg(column C) and maybe min(column D) in one shot or via chained method calls. Can I do this just by using GroupedData methods? I am asking this because it seems that once I've done sum(column B), I don't have a GroupedData object anymore, and so I cannot continue to chain any GroupedData methods further. So is that (what I have in mind) possible or not? If it's possible, how can we do it? | I do not think that this is possible. Looking at the source of GroupedData, we see that all functions like avg max min and sum return a DataFrame, so chaining is not possible. | 1 | 1 |
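Not chaining as such, but a single GroupedData.agg call with a column-to-function mapping does cover several aggregations in one shot without importing pyspark.sql.functions; a sketch using the columns from the question:

```python
grouped = df.groupBy("A")  # GroupedData

# one aggregation per column, expressed as plain strings; returns a DataFrame
result = grouped.agg({"B": "sum", "C": "avg", "D": "min"})
result.show()  # columns come back named sum(B), avg(C), min(D)
```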
79,225,773 | 2024-11-26 | https://stackoverflow.com/questions/79225773/in-polars-what-is-the-correct-equivalent-code-for-row-number-overpartition-b | I am trying to refactor (translate) a given SQL query to python script using polars library. I am stuck in one line of query where ROW_NUMBER() function is used followed by OVER(PARTITION BY) function. Below is the table schema: product_id (INTEGER) variant_id (INTEGER) client_code (VARCHAR) transaction_date (DATE) customer_id (INTEGER) store_id (INTEGER) invoice_id (VARCHAR) invoice_line_id (INTEGER) quantity (NUMERIC) net_sales_price (NUMERIC) Below is the SQL query: SELECT product_id, variant_id, client_code, transaction_date, ROW_NUMBER() OVER( PARTITION BY product_id, variant_id, store_id, customer_id, client_code ORDER BY transaction_date ASC, invoice_id ASC, invoice_line_id ASC, quantity DESC, net_sales_price ASC ) AS repeat_purchase_seq FROM transactions I tried some ways, such as: example 1: using pl.first().cum_count().over() new_df = ( df .sort(['product_id', 'variant_id', 'store_id', 'customer_id', 'client_code','transaction_date', 'invoice_id', 'invoice_line_id',pl.col('quantity').reverse(), 'net_sales_price']) .with_columns(repeat_purchase_seq = pl.first().cum_count().over(['product_id', 'variant_id', 'store_id', 'customer_id', 'client_code']).flatten()) ) example 2: using pl.rank('ordinal').over() new_df = ( df .sort(['transaction_date', 'invoice_id', 'invoice_line_id', 'quantity', 'net_sales_price'], descending=[False, False, False, True, False]) .with_columns(repeat_purchase_seq = pl.struct('transaction_date', 'invoice_id', 'invoice_line_id', 'quantity', 'net_sales_price').rank('ordinal').over(['product_id', 'variant_id', 'store_id', 'customer_id', 'client_code'])) ) Both the examples have some or the other problem, I tried to compare the table created by SQL with the dataframe created using Polars, out of 17 million rows, there are around 250,000 rows which doesn't match. So is there a better way to handle this ROW_NUMBER() OVER(PARTITION BY) situation? Edit - Below is the answer by @roman, which helped in my case: partition_by_keys = ["product_id", "variant_id", "store_id", "customer_id", "client_code"] order_by_keys = ["transaction_date", "invoice_id", "invoice_line_id", "quantity", "net_sales_price"] order_by_descending = [False, False, False, True, False] order_by = [-pl.col(col) if desc else pl.col(col) for col, desc in zip(order_by_keys, order_by_descending)] df.with_columns( pl.struct(order_by) .rank("ordinal") .over(partition_by_keys) .alias("rn") ) | You could use pl.Expr.rank() but it is applied to one pl.Expr/column. You can, of course, create this column out of sequence of columns with pl.struct() and rank it: partition_by_keys = ["product_id", "variant_id", "store_id", "customer_id", "client_code"] order_by_keys = ["transaction_date", "invoice_id", "invoice_line_id", "quantity", "net_sales_price"] df.with_columns( pl.struct(order_by_keys) .rank("ordinal") .over(partition_by_keys) .alias("rn") ) But there's a problem with applying asc and desc sort based on struct' fields. If you had numeric fields you could use negation, but you have string-typed columns there as well. 
In your case you can actually do it, because the only column where you want to sort descending is quantity: partition_by_keys = ["product_id", "variant_id", "store_id", "customer_id", "client_code"] order_by_keys = ["transaction_date", "invoice_id", "invoice_line_id", "quantity", "net_sales_price"] order_by_descending = [False, False, False, True, False] order_by = [-pl.col(col) if desc else pl.col(col) for col, desc in zip(order_by_keys, order_by_descending)] df.with_columns( pl.struct(order_by) .rank("ordinal") .over(partition_by_keys) .alias("rn") ) A more generic approach would be to use pl.DataFrame.sort() and pl.int_range(). I've added pl.DataFrame.with_row_index() and an additional sort to return to the original order. partition_by_keys = ["product_id", "variant_id", "store_id", "customer_id", "client_code"] order_by_keys = ["transaction_date", "invoice_id", "invoice_line_id", "quantity", "net_sales_price"] order_by_descending = [False, False, False, True, False] ( df.with_row_index() .sort(order_by_keys, descending=order_by_descending) .with_columns( rn = pl.int_range(pl.len()).over(partition_by_keys) ) .sort("index") .drop("index") ) | 4 | 1 |
79,227,254 | 2024-11-26 | https://stackoverflow.com/questions/79227254/shift-column-in-dataframe-without-deleting-one | Here is my dataframe: A B C First row to delete row to shift Second row to delete row to shift And I want this output: A B C First row to shift Second row to shift I tried this code: df.shift(-1, axis=1) A B C row to delete row to shift row to delete row to shift The fact is, is there a way to keep the first column unmodified? | Be explicit, choose the columns to affect and reassign (or update): df[['B', 'C']] = df[['B', 'C']].shift(-1, axis=1, fill_value='') Or: cols = ['B', 'C'] df[cols] = df[cols].shift(-1, axis=1, fill_value='') # or # df.update(df[cols].shift(-1, axis=1, fill_value='')) Output: A B C 0 First row to shift 1 Second row to shift You can do the same per index/column: cols = ['B', 'C'] idx = [1] df.loc[idx, cols] = df.loc[idx, cols].shift(-1, axis=1, fill_value='') # or # df.update(df.loc[idx, cols].shift(-1, axis=1, fill_value='')) Output: A B C 0 First row row 1 Second row to shift | 3 | 3 |
79,226,735 | 2024-11-26 | https://stackoverflow.com/questions/79226735/pandas-replace-and-downcasting-deprecation-since-version-2-2-0 | Replacing strings by numerical values used to be easy, but since pandas 2.2, the simple approach below throws a warning. What is the "correct" way to do this now? >>> s = pd.Series(["some", "none", "all", "some"]) >>> s.dtypes dtype('O') >>> s.replace({"none": 0, "some": 1, "all": 2}) FutureWarning: Downcasting behavior in `replace` is deprecated and will be removed in a future version. To retain the old behavior, explicitly call `result.infer_objects(copy=False)`. To opt-in to the future behavior, set `pd.set_option('future.no_silent_downcasting', True)` 0 1 1 0 2 2 3 1 dtype: int64 If I understand the warning correctly, the object dtype is "downcast" to int64. Perhaps pandas wants me to do this explicitly, but I don't see how I could downcast a string to a numerical type before the replacement happens. | When you run: s.replace({"none": 0, "some": 1, "all": 2}) The dtype of the output is currently int64, as pandas inferred that the values are all integers. print(s.replace({"none": 0, "some": 1, "all": 2}).dtype) # int64 In a future pandas version this won't happen automatically anymore; the dtype will remain object (you will still have integers, but as objects, not int64): pd.set_option('future.no_silent_downcasting', True) print(s.replace({"none": 0, "some": 1, "all": 2}).dtype) # object You will have to explicitly downcast the objects to integers (after the replacement): s.replace({"none": 0, "some": 1, "all": 2}).infer_objects(copy=False) print(s.replace({"none": 0, "some": 1, "all": 2}) .infer_objects(copy=False).dtype) # int64 | 3 | 3 |
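If every value in the Series has a mapping, Series.map is one way to sidestep the deprecation altogether, since it builds the numeric result directly instead of downcasting an object column; unmapped values would become NaN, so this assumes the mapping is exhaustive:

```python
import pandas as pd

s = pd.Series(["some", "none", "all", "some"])
result = s.map({"none": 0, "some": 1, "all": 2})
print(result.dtype)  # int64, without the FutureWarning
```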
79,226,130 | 2024-11-26 | https://stackoverflow.com/questions/79226130/gps-tracking-streamlit-in-mobile-device | I'm running a Streamlit app where I try to retrieve the user's geolocation in streamlit. However, when using geocoder.ip("me"), the coordinates returned are 45, -121, which point to Oregon, USA, rather than my actual location. This is the function I use: def get_lat_lon(): # Use geocoder to get the location based on IP g = geocoder.ip('me') if g.ok: lat = g.latlng[0] # Latitude lon = g.latlng[1] # Longitude return lat, lon else: st.error("Could not retrieve location from IP address.") return None, None I would like to find a solution that can work in an streamlit app so by clicking a st.button I can call a function that retrieves my lat, long. | Consider the following simple Streamlit app: import streamlit as st import requests import geocoder from typing import Optional, Tuple def get_location_geocoder() -> Tuple[Optional[float], Optional[float]]: """ Get location using geocoder library """ g = geocoder.ip('me') if g.ok: return g.latlng[0], g.latlng[1] return None, None def get_location_ipapi() -> Tuple[Optional[float], Optional[float]]: """ Fallback method using ipapi.co service """ try: response = requests.get('https://ipapi.co/json/') if response.status_code == 200: data = response.json() lat = data.get('latitude') lon = data.get('longitude') if lat is not None and lon is not None: # Store additional location data in session state st.session_state.location_data = { 'city': data.get('city'), 'region': data.get('region'), 'country': data.get('country_name'), 'ip': data.get('ip') } return lat, lon except requests.RequestException as e: st.error(f"Error retrieving location from ipapi.co: {str(e)}") return None, None def get_location() -> Tuple[Optional[float], Optional[float]]: """ Tries to get location first using geocoder, then falls back to ipapi.co """ # Try geocoder first lat, lon = get_location_geocoder() # If geocoder fails, try ipapi if lat is None: st.info("Primary geolocation method unsuccessful, trying alternative...") lat, lon = get_location_ipapi() return lat, lon def show_location_details(): """ Displays the additional location details if available """ if 'location_data' in st.session_state: data = st.session_state.location_data st.write("Location Details:") col1, col2 = st.columns(2) with col1: st.write("π City:", data['city']) st.write("ποΈ Region:", data['region']) with col2: st.write("π Country:", data['country']) st.write("π IP:", data['ip']) def main(): st.title("IP Geolocation Demo") st.write("This app will attempt to detect your location using IP geolocation.") if st.button("Get My Location", type="primary"): with st.spinner("Retrieving your location..."): lat, lon = get_location() if lat is not None and lon is not None: st.success("Location retrieved successfully!") # Create two columns for coordinates col1, col2 = st.columns(2) with col1: st.metric("Latitude", f"{lat:.4f}") with col2: st.metric("Longitude", f"{lon:.4f}") # Show additional location details if available show_location_details() # Display location on a map st.write("π Location on Map:") st.map(data={'lat': [lat], 'lon': [lon]}, zoom=10) else: st.error("Could not determine your location. Please check your internet connection and try again.") if __name__ == "__main__": main() The app above allows users to click a button to generate their location using the geocoder library. 
I added a backup in case geocoder doesn't work as expected, using the ipapi.co API, which has proven to be more accurate at times. There are some limitations with this approach: it will be less accurate than browser geolocation API alternatives. Nevertheless, it works on all devices and doesn't require permissions, whereas the browser geolocation API version does. Below are some pictures of what the app above looks like. The geolocation button: And the resulting output: Depending on how exact the location requirements are, it might be worth exploring the browser geolocation API alternative. | 1 | 2 |
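For the browser-based alternative mentioned above, one commonly used route is the third-party streamlit-js-eval package; the import path, function name and return structure below are assumptions based on that package's documentation, so verify them against the version you install. It asks the browser for navigator.geolocation, which requires the user to grant permission:

```python
import streamlit as st
from streamlit_js_eval import get_geolocation  # assumed API of streamlit-js-eval

if st.button("Use my browser's location"):
    location = get_geolocation()
    if location and "coords" in location:
        lat = location["coords"]["latitude"]
        lon = location["coords"]["longitude"]
        st.map(data={"lat": [lat], "lon": [lon]}, zoom=10)
    else:
        st.warning("Location permission not granted or not available yet.")
```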
79,225,406 | 2024-11-26 | https://stackoverflow.com/questions/79225406/how-do-you-express-the-identity-expression | How do you express the identity expression in Polars? By this I mean the expression idexpr that when you do lf.filter(idexpr) you get the entirety of lf. Similar to SELECT(*) in SQL. I'm resorting to a logical expression like idexpr = (pl.col("a") == 0) | (pl.col("a") != 0) | According to Documentation, what's passed to filter as a predicate needs to be an "Expression(s) that evaluates to a boolean Series." This you already know, since you are passing a logical expression to circumvent it. Easily enough, there's a very simple expression that always evaluates to true: pl.lit(True) or just True. import polars as pl df = pl.DataFrame({ "id": [1, 2, 3], "name": ["Alice", "Bob", "Charlie"], "age": [24, 28, 23], "bool": [True, False, True] }) print(df.filter(pl.lit(True))) This gives the output of: shape: (3, 4) βββββββ¬ββββββββββ¬ββββββ¬ββββββββ β id β name β age β bool β β --- β --- β --- β --- β β i64 β str β i64 β bool β βββββββͺββββββββββͺββββββͺββββββββ‘ β 1 β Alice β 24 β true β β 2 β Bob β 28 β false β β 3 β Charlie β 23 β true β βββββββ΄ββββββββββ΄ββββββ΄ββββββββ Edit: Careful though, I found at least two cases where this does not work. Series: This does not seem to work for them at all (polars 1.15): >>> series = pl.Series("A", ["test", "test"]) >>> series.filter(pl.lit(True)) ... AttributeError: 'Expr' object has no attribute '_s' I assume this is the case because pl.Series.filter works only with a mask anyway. This works however: series.filter([True]) it also works for a dataframe it seems, so if you want to use the same solution for both series and dataframes, this is the one. Null-column: When there's a column of type null (polars 1.12, seems to be fixed in 1.15 though): >>> null_frame = pl.DataFrame({"A": [None, None], "B": ["test", "test"}) >>> null_frame.filter(pl.lit(True)) ... polars.exceptions.ShapeError: filter's length: 1 differs from that of the series: 2 | 4 | 4 |
79,225,757 | 2024-11-26 | https://stackoverflow.com/questions/79225757/how-to-replace-the-deprecated-webelement-get-attribute-function-in-selenium-4 | On some web pages, due to rendering issues (e.g., hidden element), WebElement.text may not reveal the underlying text whereas WebElement.get_attribute("textContent") will. Therefore I have written the following utility function: from selenium.webdriver.remote.webelement import WebElement def text(e: WebElement) -> str: return e.text or e.get_attribute("textContent") or "n/a" WebElement.get_attribute() is deprecated in Selenium 4.27.0. The recommendation is to use WebElement.get_dom_attribute(). However, this is not a drop-in replacement because WebElement.get_dom_attribute() will only reveal attributes declared in the HTML markup. How would I achieve the same functionality without WebElement.get_attribute()? | Further research reveals that I can use WebElement.get_property(). This means a slight change to the utility function because this function can return any one of str | bool | WebElement | dict. Therefore, it becomes: from selenium.webdriver.remote.webelement import WebElement def text(e: WebElement) -> str | bool | dict | WebElement: return e.text or e.get_property("textContent") or False | 1 | 1 |
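Since get_property can hand back non-string values, callers that want a plain str can narrow the union; a small sketch built on the same idea:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.remote.webdriver import WebDriver

def visible_or_dom_text(driver: WebDriver, css: str) -> str:
    e = driver.find_element(By.CSS_SELECTOR, css)
    value = e.text or e.get_property("textContent")
    return value.strip() if isinstance(value, str) else "n/a"
```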
79,218,845 | 2024-11-23 | https://stackoverflow.com/questions/79218845/altair-chart-mark-text-background-color-visual-clarity | I've been playing with Altair charts in the context of Streamlit for making simple calculators. In one case I wanted to plot 3 bar charts next to each other to make an explainer for how progressive tax brackets work. In one column is the total income, in the next is a bar drawn between the start of the bracket and the point at which the rate is paid for that bracket. Finally I stack all those intervals together to give the total tax owed. I can't figure a good way to help the "Total Owed" number not have visual conflict with the grid lines. This is partly because I can't figure a good way to relate the chart's rendered size to the font size to do conditional checks for overlapping. I also believe there is no way to control the rendering of gridlines on a per-column basis. Ideally I want to keep the gridlines thick and visible without disturbing the text. I could place the text for that column inside the stacked bars, but for certain configurations the multicolored bars make the text just as unreadable. Here's a MRE: import altair as alt import pandas as pd import streamlit as st # Example data data = pd.DataFrame({ 'category': ['A', 'B', 'C', 'D'], 'value': [10, 20, 15, 25] }) # Base chart with gridlines chart = alt.Chart(data).mark_bar().encode( x='value:Q', y=alt.Y('category:N', axis=alt.Axis(grid=True, gridWidth=2, gridColor='darkslategray')) ) text = alt.Chart(data).mark_text( align='left', baseline='middle', color='black' ).encode( x='value:Q', y='category:N', text=alt.Text('value:Q') ) # Combine the charts final_chart = chart + text st.altair_chart(final_chart) I tried using a mark_rect as a sort of backdrop for the text by adding this snippet to the above: # Text with background text_background = alt.Chart(data).mark_rect( color='black', opacity=0.7, height=20, width=20 ).encode( x='value:Q', y='category:N' ) and updating the composition of the final chart. However those rectangles are centered on the end of the bar and (again) with no way to relate the chart's size and the text font size I don't see a straightforward way to center it on the text (or even set an appropriate width to cover the whole text in response to its width). Is there a convenient way to do this? Failing that, is there some other chart library I could use? | In addition to Joel's suggestion of using a background box, I have also had good results adding a copy of the text with a stroke behind the main text. This automatically adjusts the background to the right size. import altair as alt import pandas as pd # Example data data = pd.DataFrame({ 'category': ['A', 'B', 'C', 'D'], 'value': [10, 20, 15, 25] }) # Base chart with gridlines chart = alt.Chart(data).mark_bar().encode( x='value:Q', y=alt.Y('category:N', axis=alt.Axis(grid=True, gridWidth=2, gridColor='darkslategray')) ) text = alt.Chart(data).mark_text( align='left', baseline='middle', color='black', dx=3 # Nudge the text to the right ).encode( x='value:Q', y='category:N', text=alt.Text('value:Q') ) text_background = text.mark_text( align='left', baseline='middle', stroke='white', strokeWidth=5, strokeJoin='round', dx=3 ) # Combine the charts final_chart = chart + text_background + text final_chart | 3 | 3 |