column           dtype    min          max
question_id      int64    59.5M        79.4M
creation_date    string   8 chars      10 chars
link             string   60 chars     163 chars
question         string   53 chars     28.9k chars
accepted_answer  string   26 chars     29.3k chars
question_vote    int64    1            410
answer_vote      int64    -9           482
79,184,421
2024-11-13
https://stackoverflow.com/questions/79184421/converting-an-array-of-floats-into-rgba-values-in-an-efficient-way
I am trying to create a system to take an array of floats which range from 0.0 to 1.0 and convert them into RGBA values based on a lookup table. The output should be an array that is one dimension larger than the input with the last dimension being size 4 and consisting of the RGBA values. Currently I have only been able to do this via loops. Does anyone know of any numpy indexing methods that could achieve this same result more efficiently? import numpy as np import matplotlib.pyplot as plt cyan = np.array([(x*0,x*1,x*1,255) for x in range(256)]) input_array = np.arange(0,0.8,0.05).reshape(4,4) input_array = input_array*256 colour_array = [] for x in range(input_array.shape[0]): for y in range(input_array.shape[1]): colour_array.append(cyan[int(input_array[x,y])]) colour_array = np.array(colour_array).reshape(4,4,4) plt.imshow(colour_array)
Use the following: shape = input_array.shape index = input_array[*np.indices(shape).reshape(2, -1)].astype(int) colour_array1 = cyan[index].reshape(4, *shape) Confirm the two are equal: np.allclose(colour_array, colour_array1,atol=0) Out[62]: True USE THE OTHER SOLUTION!!!
1
3
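For the Q&A above, a minimal sketch of the plain fancy-indexing approach (presumably the "other solution" the answerer alludes to): the lookup table can be indexed directly with an integer array, which produces the extra trailing RGBA dimension without any Python loop. Variable names follow the question's code.

import numpy as np

cyan = np.array([(0, x, x, 255) for x in range(256)], dtype=np.uint8)   # 256x4 RGBA lookup table
values = np.arange(0, 0.8, 0.05).reshape(4, 4)                          # floats in [0.0, 1.0)
idx = np.clip((values * 256).astype(int), 0, 255)                       # integer indices into the LUT
colour_array = cyan[idx]                                                # shape (4, 4, 4): RGBA per pixel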
79,185,792
2024-11-13
https://stackoverflow.com/questions/79185792/mypy-doesnt-detect-a-type-guard-why
I am trying to teach myself how to use type guards in my new Python project in combination with pydantic-settings, and mypy doesn't seem to pick up on them. What am I doing wrong here? Code: import logging from logging.handlers import SMTPHandler from functools import lru_cache from typing import Final, Literal, TypeGuard from pydantic import EmailStr, SecretStr from pydantic_settings import BaseSettings, SettingsConfigDict SMTP_PORT: Final = 587 class Settings(BaseSettings): """ Please make sure your .env contains the following variables: - BOT_TOKEN - an API token for your bot. - TOPIC_ID - an ID for your group chat topic. - GROUP_CHAT_ID - an ID for your group chat. - ENVIRONMENT - if you intend on running this script on a VPS, this improves logging information in your production system. Required only in production: - SMTP_HOST - SMTP server address (e.g., smtp.gmail.com) - SMTP_USER - Email username/address for SMTP authentication - SMTP_PASSWORD - Email password or app-specific password """ ENVIRONMENT: Literal["production", "development"] # Telegram bot configuration BOT_TOKEN: SecretStr TOPIC_ID: int GROUP_CHAT_ID: int # Email configuration SMTP_HOST: str | None = None SMTP_USER: EmailStr | None = None # If you're using Gmail, this needs to be an app password SMTP_PASSWORD: SecretStr | None = None model_config = SettingsConfigDict(env_file="../.env", env_file_encoding="utf-8") @lru_cache(maxsize=1) def get_settings() -> Settings: """This needs to be lazily evaluated, otherwise pytest gets a circular import.""" return Settings() type DotEnvStrings = str | SecretStr | EmailStr def is_all_email_settings_provided( host: DotEnvStrings | None, user: DotEnvStrings | None, password: DotEnvStrings | None, ) -> TypeGuard[DotEnvStrings]: """ Type guard that checks if all email settings are provided. Returns: True if all email settings are provided as strings, False otherwise. """ return all(isinstance(x, (str, SecretStr, EmailStr)) for x in (host, user, password)) def get_logger(): ... settings = get_settings() if settings.ENVIRONMENT == "development": level = logging.INFO else: # # We only email logging information on failure in production. 
if not is_all_email_settings_provided( settings.SMTP_HOST, settings.SMTP_USER, settings.SMTP_PASSWORD ): raise ValueError("All email environment variables are required in production.") level = logging.ERROR email_handler = SMTPHandler( mailhost=(settings.SMTP_HOST, SMTP_PORT), fromaddr=settings.SMTP_USER, toaddrs=settings.SMTP_USER, subject="Application Error", credentials=(settings.SMTP_USER, settings.SMTP_PASSWORD.get_secret_value()), # This enables TLS - https://docs.python.org/3/library/logging.handlers.html#smtphandler secure=(), ) And here is what mypy is saying: media_only_topic\media_only_topic.py:122: error: Argument "mailhost" to "SMTPHandler" has incompatible type "tuple[str | SecretStr, int]"; expected "str | tuple[str, int]" [arg-type] media_only_topic\media_only_topic.py:123: error: Argument "fromaddr" to "SMTPHandler" has incompatible type "str | None"; expected "str" [arg-type] media_only_topic\media_only_topic.py:124: error: Argument "toaddrs" to "SMTPHandler" has incompatible type "str | None"; expected "str | list[str]" [arg-type] media_only_topic\media_only_topic.py:126: error: Argument "credentials" to "SMTPHandler" has incompatible type "tuple[str | None, str | Any]"; expected "tuple[str, str] | None" [arg-type] media_only_topic\media_only_topic.py:126: error: Item "None" of "SecretStr | None" has no attribute "get_secret_value" [union-attr] Found 5 errors in 1 file (checked 1 source file) I would expect mypy here to read up correctly that my variables can't even in theory be None, but type guards seem to change nothing here, no matter how many times I change the code here. Changing to Pyright doesn't make a difference. What would be the right approach here?
Make a separate ProdSettings class. ProdSettings will raise an error if any of those values are missing. class ProdSettings(BaseSettings): ENVIRONMENT: Literal["production"] BOT_TOKEN: SecretStr TOPIC_ID: int GROUP_CHAT_ID: int SMTP_HOST: str SMTP_USER: EmailStr SMTP_PASSWORD: SecretStr model_config = SettingsConfigDict(env_file="../.env", env_file_encoding="utf-8") Change Settings to DevSettings and overwrite ProdSettings to allow for None in development. class DevSettings(ProdSettings): ENVIRONMENT: Literal["development"] SMTP_HOST: str | None = None SMTP_USER: EmailStr | None = None SMTP_PASSWORD: SecretStr | None = None Use a Discriminator to declare a Settings type, while asserting what makes them different. type Settings = Annotated[DevSettings | ProdSettings, Field(discriminator="ENVIRONMENT")] When you run get_settings, try dev first, then prod. Set the return type as Settings. @lru_cache(maxsize=1) def get_settings() -> Settings: # Alternatively, you could directly put `Annotated[DevSettings | ProdSettings, Field(discriminator="ENVIRONMENT")]` here. """This needs to be lazily evaluated, otherwise pytest gets a circular import.""" try: return DevSettings() except ValidationError: return ProdSettings() Now when you have if settings.ENVIRONMENT == "development": this acts as a TypeGuard (You could do this explicitly as well). You'll see that mypy recognizes true as meaning settings is an instance of DevSettings and otherwise an instance of ProdSettings. def get_logger(): settings = get_settings() if settings.ENVIRONMENT == "development": level = logging.INFO else: level = logging.ERROR email_handler = SMTPHandler( mailhost=(settings.SMTP_HOST, SMTP_PORT), fromaddr=settings.SMTP_USER, toaddrs=settings.SMTP_USER, subject="Application Error", credentials=(settings.SMTP_USER, settings.SMTP_PASSWORD.get_secret_value()), # This enables TLS - https://docs.python.org/3/library/logging.handlers.html#smtphandler secure=(), )
1
2
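One detail worth adding to the answer above: a TypeGuard only narrows the type of the guard function's first positional argument at the call site; it never narrows attributes such as settings.SMTP_HOST that happen to be passed in, which is why mypy kept reporting str | None. A minimal sketch of a guard that does narrow (hypothetical names, meant to be checked with mypy rather than run):

from typing import TypeGuard

def is_str(value: object) -> TypeGuard[str]:
    return isinstance(value, str)

def configure(host: str | None) -> None:
    if is_str(host):
        reveal_type(host)  # mypy reports: str -- the guarded argument itself is narrowed
    # an attribute like settings.SMTP_HOST would still be str | None here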
79,186,037
2024-11-13
https://stackoverflow.com/questions/79186037/what-is-the-pandas-version-of-np-select
I feel very silly asking this. I want to set a value in a DataFrame depending on some other columns. I.e: (Pdb) df = pd.DataFrame([['cow'], ['dog'], ['trout'], ['salmon']], columns=["animal"]) (Pdb) df animal 0 cow 1 dog 2 trout 3 salmon (Pdb) df["animal"] = np.select(df["animal"] == "dog", "canine", "not-canine") But the problem is that the above doesn't work! It's because I'm providing a single value, not an array. Arrgh, numpy. *** ValueError: list of cases must be same length as list of conditions (Pdb) I know about df.where and df.mask - but there seems to be no df.select. What ought I do?
np.select can be used here, but you need to wrap the conditions/assigned values in a list: df = pd.DataFrame([['cow'], ['dog'], ['trout'], ['salmon']], columns=['animal']) df['animal'] = np.select([df['animal'] == 'dog'], ['canine'], 'not-canine') Since you have a single condition here, better use numpy.where: df['animal'] = np.where(df['animal'] == 'dog', 'canine', 'not-canine') Output: animal 0 not-canine 1 canine 2 not-canine 3 not-canine To complement a bit @Tricky's answer, if you use case_when (a novel addition since pandas 2.2), you can have a default value by passing a catch-all condition: df['animal'].case_when([(df['animal'].eq('dog'), 1), (df['animal'].eq('trout'), 'B'), (np.repeat(True, len(df)), 3), ], ) Also note that np.select and case_when do not handle the dtypes the same way. np.select will force a type change to have an homogeneous array, case_when will allow to create an object column keeping the original type: df['select'] = np.select([df['animal'].eq('dog'), df['animal'].eq('trout')], [1, 'B'], -1 ) df['case_when'] = df['animal'].case_when([(df['animal'].eq('dog'), 1), (df['animal'].eq('trout'), 'B'), (np.repeat(True, len(df)), -1), ], ) Output: animal select case_when 0 cow -1 -1 1 dog 1 1 2 trout B B 3 salmon -1 -1 Difference in type: df['select'].to_list() # ['-1', '1', 'B', '-1'] # forced type change: 1 -> '1' df['case_when'].to_list() # [-1, 1, 'B', -1] # preserved type
2
3
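If numpy is to be avoided altogether, a pure-pandas sketch for the single-condition case (assuming the same toy frame as the question): map the matching value and fill everything else with the default.

import pandas as pd

df = pd.DataFrame([['cow'], ['dog'], ['trout'], ['salmon']], columns=['animal'])
df['animal'] = df['animal'].map({'dog': 'canine'}).fillna('not-canine')
# 0 not-canine, 1 canine, 2 not-canine, 3 not-canine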
79,185,216
2024-11-13
https://stackoverflow.com/questions/79185216/why-does-this-implementation-of-the-block-tridiagonal-thomas-algorithm-give-such
See below my attempt at implementing the block tridiagonal thomas algorithm. If you run this however, you get a relatively large (10^-2) error in the block TMDA compared to the np direct solve (10^-15), even for this very simple case. More complicated test cases give larger errors - I think the errors start growing on the back substitution. Any help as to why would be much appreciated! import numpy as np import torch def solve_block_tridiagonal(a, b, c, d): N = len(b) x = np.zeros_like(d) # Forward elimination with explicit C* and d* storage C_star = np.zeros_like(c) d_star = np.zeros_like(d) # Initial calculations for C_0* and d_0* C_star[0] = np.linalg.solve(b[0], c[0]) d_star[0] = np.linalg.solve(b[0], d[0]) # Forward elimination for i in range(1, N - 1): C_star[i] = np.linalg.solve(b[i] - a[i-1] @ C_star[i-1], c[i]) d_star[i] = np.linalg.solve(b[i] - a[i-1] @ C_star[i-1], d[i] - a[i-1] @ d_star[i-1]) # Last d_star update for the last block d_star[-1] = np.linalg.solve(b[-1] - a[-2] @ C_star[-2], d[-1] - a[-2] @ d_star[-2]) # Backward substitution x[-1] = d_star[-1] for i in range(N-2, -1, -1): x[i] = d_star[i] - C_star[i] @ x[i+1] return x def test_block_tridiagonal_solver(): N = 4 a = np.array([ [[1, 0.5], [0.5, 1]], [[1, 0.5], [0.5, 1]], [[1, 0.5], [0.5, 1]] ], dtype=np.float64) b = np.array([ [[5, 0.5], [0.5, 5]], [[5, 0.5], [0.5, 5]], [[5, 0.5], [0.5, 5]], [[5, 0.5], [0.5, 5]] ], dtype=np.float64) c = np.array([ [[1, 0.5], [0.5, 1]], [[1, 0.5], [0.5, 1]], [[1, 0.5], [0.5, 1]] ], dtype=np.float64) d = np.array([ [1, 2], [2, 3], [3, 4], [4, 5] ], dtype=np.float64) x = solve_block_tridiagonal(a, b, c, d) # Construct the equivalent full matrix A_full and right-hand side d_full A_full = np.block([ [b[0], c[0], np.zeros((2, 2)), np.zeros((2, 2))], [a[0], b[1], c[1], np.zeros((2, 2))], [np.zeros((2, 2)), a[1], b[2], c[2]], [np.zeros((2, 2)), np.zeros((2, 2)), a[2], b[3]] ]) d_full = d.flatten() # Flatten d for compatibility with the full system # Solve using numpy's direct solve for comparison x_np = np.linalg.solve(A_full, d_full).reshape((N, 2)) # Print the solutions for comparison print("Solution x from block tridiagonal solver (TMDA):\n", x, "\nResidual:", torch.sum(torch.abs(torch.tensor(A_full)@torch.tensor(x).flatten() - torch.tensor(d).flatten()))) print("Solution x from direct full matrix solver:\n", x_np, "\nResidual np:", torch.sum(torch.abs(torch.tensor(A_full)@torch.tensor(x_np).flatten() - torch.tensor(d).flatten()))) # Run the test function test_block_tridiagonal_solver()
It is dangerous to define a and c with a different number of elements from b, as it could lead to the wrong indexing in the TDMA. If you change the line setting the last element of d_star to the following: d_star[-1] = np.linalg.solve(b[-1] - a[-1] @ C_star[-1], d[-1] - a[-1] @ d_star[-2]) this should now work. Note the TWO occurrences of a[-1], NOT a[-2], and the occurrence of C_star[-1] rather than C_star[-2] ... but these are consequences of how you have passed the under-length arrays a and c to the routine.
1
2
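A quick way to confirm the indexing fix, reusing A_full, d and the solver output x from the question's test function (numpy only, no torch needed); the maximum residual should drop to machine precision once the corrected line is in place:

import numpy as np

residual = np.abs(A_full @ x.ravel() - d.ravel()).max()
print(f"max |A x - d| = {residual:.2e}")   # expected around 1e-15 after the fix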
79,185,543
2024-11-13
https://stackoverflow.com/questions/79185543/curly-brace-expansion-fails-on-bash-in-linux-when-called-from-python
Consider this curly brace expansion in bash: for i in {1..10}; do echo $i; done; I call this script from the shell (on macOS or Linux) and the curly brace does expand: $ ./test.sh 1 2 3 4 5 6 7 8 9 10 I want to call this script from Python, for example: import subprocess print(subprocess.check_output("./test.sh", shell=True)) On macOS, this Python call expands the curly brace and I see this output: b'1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n' On Linux, this Python call fails to expand the curly brace and I see this output: b'{1..10}\n' Why does curly brace expansion work on the interactive shell (macOS or Linux) and when called from Python on macOS, but fails when called from Python on Linux?
{1..10} is a bash feature, it is not defined in POSIX sh. It seems that subprocess.check_output("./test.sh", shell=True) invokes bash (or another shell which supports this feature) in the first example (macOS) and sh in your second example (Linux). See: https://www.shellcheck.net/wiki/SC3009 Actual meaning of 'shell=True' in subprocess Does the `shell` in `shell=True` in subprocess mean `bash`?
1
1
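Two common fixes follow from the answer above; both rely on documented subprocess behaviour (on POSIX, the executable argument replaces the default /bin/sh when shell=True):

import subprocess

# Option 1: keep shell=True but force bash explicitly
out = subprocess.check_output("./test.sh", shell=True, executable="/bin/bash")

# Option 2: drop the extra shell and let the script's shebang (#!/bin/bash) pick the interpreter
out = subprocess.check_output(["./test.sh"])
print(out)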
79,184,836
2024-11-13
https://stackoverflow.com/questions/79184836/python-tkinter-notebook-gets-resized-when-i-pack-a-ttk-treeview-on-the-second-ta
I am trying to write a program to time sailing races, and to simplify adding the boats to a race by selecting from a list. I have used a Treeview linked to a tk.StringVar() to filter the boats by typing a part of the person or boats name to filter the list of boats to select. This works very well. I am writing this program in Debian 12 Linux Plasma I wanted to put a Treeview on the second tab to show the list of boats selected as entries, but when I pack this Treeview the Notebook doesn't expand to fill the window. Uncommenting the following line produces this behaviour. #tree2.pack(fill='both', expand=True) The following is the code import tkinter as tk from tkinter import ttk import csv # root window root = tk.Tk() root.geometry(str('1600x900')) # create a notebook notebook = ttk.Notebook(root) notebook.pack(expand=True) # create frames for tabs EntriesFrame = ttk.Frame(notebook, width=1600, height=880) # 20 seems to be the right amount RaceFrame = ttk.Frame(notebook, width=1600, height=880) # add frames to notebook as tabs notebook.add(EntriesFrame, text='Entries') notebook.add(RaceFrame, text='Race') # Set up a treeview in first tab (EntriesFrame) ColNames = ['SailNo', 'Boat', 'HelmName', 'CrewName', 'Class', 'Fleet', 'Yardstick'] tree = ttk.Treeview(EntriesFrame, columns=ColNames, show='headings') for ColName in ColNames: tree.heading(ColName, text=ColName) tree.pack(fill='both', expand=True) # a Treeview for the entries on the next tab. tree2 = ttk.Treeview(RaceFrame, columns=ColNames, show='headings') for ColName in ColNames: tree2.heading(ColName, text=ColName) #tree2.pack(fill='both', expand=True) root.mainloop() Hope someone can help.
Without calling tree2.pack(...), the size of RaceFrame is still around 1600x880, so the frames inside the notebook all take the size of the biggest frame, which is RaceFrame. However, when tree2.pack(...) is called, RaceFrame shrinks to the size of tree2. The notebook then also shrinks to the size that can hold the biggest frame inside it, because no fill option is used in notebook.pack(...). Therefore adding fill='both' to notebook.pack(...) keeps the notebook filling the window, and the frames inside the notebook will expand as well.
2
1
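Concretely, the one-line change the answer describes (same widget names as the question's code):

notebook.pack(expand=True, fill='both')

With fill='both' the notebook keeps the window's size even after RaceFrame shrinks to fit tree2, so both tabs stay full-sized.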
79,184,247
2024-11-13
https://stackoverflow.com/questions/79184247/best-practices-for-using-property-with-enum-values-on-a-django-model-for-drf-se
Question: I'm looking for guidance on using @property on a Django model, particularly when the property returns an Enum value and needs to be exposed in a Django REST Framework (DRF) serializer. Here’s my setup: I’ve defined an Enum, AccountingType, to represent the possible accounting types: from enum import Enum class AccountingType(Enum): ASSET = "Asset" LIABILITY = "Liability" UNKNOWN = "Unknown" On my Account model, I use a @property method to determine the accounting_type based on existing fields: # Account fields ... @property def accounting_type(self) -> AccountingType: """Return the accounting type for this account based on the account sub type.""" if self.account_sub_type in constants.LIABILITY_SUB_TYPES: return AccountingType.LIABILITY if self.account_sub_type in constants.ASSET_SUB_TYPES: return AccountingType.ASSET return AccountingType.UNKNOWN In Django views, I can use this property directly without issues. For example: account = Account.objects.get(id=some_id) if account.accounting_type == AccountingType.LIABILITY: print("This account is a liability.") Problem: When trying to expose accounting_type in DRF, using serializers.ReadOnlyField() does not include the property in the serialized output: class AccountDetailSerializer(serializers.ModelSerializer): accounting_type = serializers.ReadOnlyField() class Meta: model = Account fields = ['accounting_type', 'account_id', ...] I found that switching to serializers.SerializerMethodField() resolves the issue, allowing me to return the Enum value as a string: class AccountDetailSerializer(serializers.ModelSerializer): accounting_type = serializers.SerializerMethodField() class Meta: model = Account fields = ['accounting_type', 'account_id', ...] def get_accounting_type(self, obj): return obj.accounting_type.value # Return the Enum value as a string Questions: Is there a reason serializers.ReadOnlyField() doesn’t work with @property when it returns an Enum? Does DRF handle @property fields differently based on the return type? Is SerializerMethodField the recommended approach when a property returns a complex type, like an Enum, that needs specific serialization? Are there best practices for exposing Enum values via model properties in DRF? Any insights would be appreciated.
Is there a reason serializers.ReadOnlyField() doesn’t work with @property when it returns an Enum? An Enum cannot be JSON serialized. Make the AccountingType JSON serializable by making it a subclass of str as well: class AccountingType(str, Enum): ASSET = 'Asset' LIABILITY = 'Liability' UNKNOWN = 'Unknown' then it is sufficient to work with a ReadOnlyField: class AccountDetailSerializer(serializers.ModelSerializer): accounting_type = serializers.ReadOnlyField() # … Alternatively, we can just obtain the .value from the AccountingType: class AccountDetailSerializer(serializers.ModelSerializer): accounting_type = serializers.ReadOnlyField(source='accounting_type.value') # …
2
2
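A small check of the claim above that the str subclass is what makes the property JSON-friendly; because each member is also a str, the standard library encoder serializes it to its value directly:

import json
from enum import Enum

class AccountingType(str, Enum):
    ASSET = "Asset"
    LIABILITY = "Liability"
    UNKNOWN = "Unknown"

print(json.dumps({"accounting_type": AccountingType.ASSET}))   # {"accounting_type": "Asset"}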
79,183,948
2024-11-13
https://stackoverflow.com/questions/79183948/how-can-i-override-a-method-where-the-ellipsis-is-assigned-as-the-default-value
In Python v3.10, the following code generates a Pylance error stating (Expression of type "EllipsisType" cannot be assigned to parameter of type "int") from typing import Any from PySide6.QtGui import QStandardItem class A(QStandardItem): def data(self, role: int = ...) -> Any: return super().data(role) pass In QtGui.pyi, The data method of QStandardItem is defined as follows def data(self, role: int = ...) -> Any: ... What is the correct way to subclass by specifying the typing accurately
... as the default value in a .pyi file does not mean a literal Ellipsis object. Rather, role: int = ... means that the parameter role is of type int and has a default value of that same type at runtime, but that value is omitted in the stub file. That said, you need to provide a default value of your own: class A(QStandardItem): def data(self, role: int = 42) -> Any: return super().data(role) If you don't care about LSP, just throw it away entirely: class A(QStandardItem): def data(self, role: int) -> Any: return super().data(role) Using None or a similar sentinel value is another choice: class A(QStandardItem): def data(self, role: int | None = None) -> Any: if role is None: return super().data() return super().data(role) I don't know PySide6, so take this with a grain of salt.
2
2
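If the goal is to mirror Qt's own default rather than invent one, the C++ documentation gives QStandardItem::data a default role of Qt::UserRole + 1; treat that value as an assumption to verify against your Qt version. A hedged sketch:

from typing import Any
from PySide6.QtCore import Qt
from PySide6.QtGui import QStandardItem

class A(QStandardItem):
    # assumption: Qt::UserRole + 1 matches the C++ default for QStandardItem::data()
    def data(self, role: int = int(Qt.ItemDataRole.UserRole) + 1) -> Any:
        return super().data(role)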
79,183,144
2024-11-13
https://stackoverflow.com/questions/79183144/discard-rows-with-a-string-value-from-a-dataframe
I have a dataframe like this: C1 C2 C3 C4 1 foo asd 23 foo foo asd 43 3 foo asd 1 4 foo asd bar I'm trying to filter (and discard) all rows that have strings in C1 or C4 columns, my final dataframe must be: C1 C2 C3 C4 1 foo asd 23 3 foo asd 1 I'm trying to do this using "isNaN" but I'm not sure how I should use it. This is my code: df = pd.read_csv( path_file, sep=",", usecols=columns, skiprows=0, skipfooter=0, engine="python", encoding="utf-8", skipinitialspace=True, on_bad_lines='warn', names=columns) df_new = df[df["C1"].notna()] df_new_2 = df[df["C4"].notna()] Any idea about how I can achieve this?
You can try something like this: df.loc[df[['C1', 'C4']].apply(pd.to_numeric, errors='coerce').dropna(how='any').index] Output: C1 C2 C3 C4 0 1 foo asd 23 2 3 foo asd 1 Use pd.to_numeric with errors='coerce' to turn non-numeric values into NaN, which can then be dropped with dropna.
1
4
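The same idea as the answer above can also be written as a boolean mask, which avoids the .dropna(...).index round trip (assumes the df from the question):

import pandas as pd

# keep only rows where both C1 and C4 parse as numbers
mask = df[['C1', 'C4']].apply(pd.to_numeric, errors='coerce').notna().all(axis=1)
df_new = df[mask]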
79,172,444
2024-11-9
https://stackoverflow.com/questions/79172444/accessing-the-end-of-of-a-file-being-written-while-live-plotting-of-high-speed-d
My question refers to the great answer of the following question: Real time data plotting from a high throughput source As the gen.py code of this answer was growing fast, I wrote own version gen_own.py below, which essentially imposes a delay of 1 ms before writing a new data on the file. I also adapted the code plot.py and wrote my own plot_own.py essentially adding debugging statements. Although I tried to read the doc on the several components of the f.seek(0, io.SEEK_END) line, there are still several points that I don't understand. Here are all the questions that I have My question is: how can we adapt plot_own.py to work with gen_own.py (with a slower datastream) Here is the code gen_own.py: #!/usr/bin/env python3 import time import random LIMIT_TIME = 100 # s DATA_FILENAME = "data.txt" def gen_data(filename, limit_time): start_time = time.time() elapsed_time = time.time() - start_time old_time = time.time() with open(filename, "w") as f: while elapsed_time < limit_time: new_time = time.time() if new_time > old_time + 0.001: f.write(f"{time.time():30.12f} {random.random():30.12f}\n") # produces 64 bytes f.flush() old_time = time.time() elapsed = old_time - start_time gen_data(DATA_FILENAME, LIMIT_TIME) for competeness here is the code of gen.py (copied from original question) #!/usr/bin/env python3 import time import random LIMIT_TIME = 100 # s DATA_FILENAME = "data.txt" def gen_data(filename, limit_time): start_time = time.time() elapsed_time = time.time() - start_time with open(filename, "w") as f: while elapsed_time < limit_time: f.write(f"{time.time():30.12f} {random.random():30.12f}\n") # produces 64 bytes f.flush() elapsed = time.time() - start_time gen_data(DATA_FILENAME, LIMIT_TIME) Here is the code plot_own.py: #!/usr/bin/env python3 import io import time import matplotlib.pyplot as plt import matplotlib as mpl import matplotlib.animation BUFFER_LEN = 64 DATA_FILENAME = "data.txt" PLOT_LIMIT = 20 ANIM_FILENAME = "video.gif" fig, ax = plt.subplots(1, 1, figsize=(10,8)) ax.set_title("Plot of random numbers from `gen.py`") ax.set_xlabel("time / s") ax.set_ylabel("random number / #") ax.set_ylim([0, 1]) def get_data(filename, buffer_len, delay=0.0): with open(filename, "r") as f: print("f.seek(0, io.SEEK_END): " + str(f.seek(0, io.SEEK_END))) data = f.read(buffer_len) print("f.tell(): " + str(f.tell())) print("f.readline(): " + f.readline()) print("data: " + data) if delay: time.sleep(delay) return data def animate(i, xs, ys, limit=PLOT_LIMIT, verbose=False): # grab the data try: data = get_data(DATA_FILENAME, BUFFER_LEN) if verbose: print(data) x, y = map(float, data.split()) if x > xs[-1]: # Add x and y to lists xs.append(x) ys.append(y) # Limit x and y lists to 10 items xs = xs[-limit:] ys = ys[-limit:] else: print(f"W: {time.time()} :: STALE!") except ValueError: print(f"W: {time.time()} :: EXCEPTION!") else: # Draw x and y lists ax.clear() ax.set_ylim([0, 1]) ax.plot(xs, ys) # save video (only to attach here) #anim = mpl.animation.FuncAnimation(fig, animate, fargs=([time.time()], [None]), interval=1, frames=3 * PLOT_LIMIT, repeat=False) #anim.save(ANIM_FILENAME, writer='imagemagick', fps=10) #print(f"I: Saved to `{ANIM_FILENAME}`") # show interactively anim = mpl.animation.FuncAnimation(fig, animate, fargs=([time.time()], [None]), interval=1) plt.show() plt.close() Here is the output of plot_own.py when run simultaneously with gen.py f.seek(0, io.SEEK_END): 36998872 f.tell(): 36998936 f.readline(): 1731141285.629011392593 0.423847536979 data: 1731141285.629006385803 
0.946414017554 f.seek(0, io.SEEK_END): 37495182 f.tell(): 37495246 f.readline(): 1731141285.670451402664 0.405303398216 data: 1731141285.670446395874 0.103460518242 f.seek(0, io.SEEK_END): 38084306 f.tell(): 38084370 f.readline(): 1731141285.719735860825 0.360983611461 data: 1731141285.719730854034 0.318057761442 Here is the output of plot_own.py when run simultaneously with gen_own.py W: 1731141977.7246473 :: EXCEPTION! f.seek(0, io.SEEK_END): 156426 f.tell(): 156426 f.readline(): data: W: 1731141977.7611823 :: EXCEPTION! f.seek(0, io.SEEK_END): 158472 f.tell(): 158472 f.readline(): data: W: 1731141977.79479 :: EXCEPTION! f.seek(0, io.SEEK_END): 160518 f.tell(): 160518 f.readline(): 1731141977.828338146210 0.165056626254 data: W: 1731141977.8283837 :: EXCEPTION! f.seek(0, io.SEEK_END): 162626 f.tell(): 162626 f.readline(): data: W: 1731141977.8621912 :: EXCEPTION! f.seek(0, io.SEEK_END): 164734 f.tell(): 164734 f.readline(): data:
Even without delay, you have to note that only 1 in 2000 lines are being read and printed and displayed, with delay of 1ms it is 1 in 20 line, but in it there is some issue in seeking end and reading which causes data to be empty several times, you can implement the method tail function from this nice answer therefore your plot_own.py becomes: #!/usr/bin/env python3 import io import os import subprocess import time import matplotlib.pyplot as plt import matplotlib as mpl import matplotlib.animation def tail(f, lines=1, _buffer=4098): """Tail a file and get X lines from the end""" # place holder for the lines found lines_found = [] # block counter will be multiplied by buffer # to get the block size from the end block_counter = -1 # loop until we find X lines while len(lines_found) < lines: try: f.seek(block_counter * _buffer, os.SEEK_END) except IOError: # either file is too small, or too many lines requested f.seek(0) lines_found = f.readlines() break lines_found = f.readlines() # we found enough lines, get out # Removed this line because it was redundant the while will catch # it, I left it for history # if len(lines_found) > lines: # break # decrement the block counter to get the # next X bytes block_counter -= 1 return lines_found[-lines:] BUFFER_LEN = 64 DATA_FILENAME = "data.txt" PLOT_LIMIT = 20 ANIM_FILENAME = "video.gif" fig, ax = plt.subplots(1, 1, figsize=(10,8)) ax.set_title("Plot of random numbers from `gen.py`") ax.set_xlabel("time / s") ax.set_ylabel("random number / #") ax.set_ylim([0, 1]) def get_data(filename, buffer_len, delay=0.0): with open(filename, "r") as f: data=tail(f, 1, 65)[0] print(data) if delay: time.sleep(delay) return data def animate(i, xs, ys, limit=PLOT_LIMIT, verbose=False): # grab the data try: data = get_data(DATA_FILENAME, BUFFER_LEN) if data: if verbose: print(data) x, y = map(float, data.split()) if x > xs[-1]: # Add x and y to lists xs.append(x) ys.append(y) # Limit x and y lists to 10 items xs = xs[-limit:] ys = ys[-limit:] else: print(f"W: {time.time()} :: STALE!") except ValueError: print(f"W: {time.time()} :: EXCEPTION!") else: # Draw x and y lists ax.clear() ax.set_ylim([0, 1]) ax.plot(xs, ys) # save video (only to attach here) #anim = mpl.animation.FuncAnimation(fig, animate, fargs=([time.time()], [None]), interval=1, frames=3 * PLOT_LIMIT, repeat=False) #anim.save(ANIM_FILENAME, writer='imagemagick', fps=10) #print(f"I: Saved to `{ANIM_FILENAME}`") # show interactively anim = mpl.animation.FuncAnimation(fig, animate, fargs=([time.time()], [None]), interval=1) plt.show() plt.close() or as for your error you can just make sure data is not empty before plotting so exception is not raised in your plot_own.py: def animate(i, xs, ys, limit=PLOT_LIMIT, verbose=False): # grab the data try: data = get_data(DATA_FILENAME, BUFFER_LEN) if data: if verbose: print(data) x, y = map(float, data.split()) if x > xs[-1]: # Add x and y to lists xs.append(x) ys.append(y) # Limit x and y lists to 10 items xs = xs[-limit:] ys = ys[-limit:] else: print(f"W: {time.time()} :: STALE!") except ValueError: print(f"W: {time.time()} :: EXCEPTION!") else: # Draw x and y lists ax.clear() ax.set_ylim([0, 1]) ax.plot(xs, ys) yes you are still losing data,but this second code is best, i.e. just validate data in your code before plotting with if data: Another approach would be to use que, possibly with some heuristics like display every 1 in 5 line,or display according to speed
3
1
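A lighter-weight variant of the tail idea above, shown only as a sketch: seek close to the end of the file and take the most recent complete line. The function name and the 256-byte probe are assumptions, sized for the 64-byte records that gen_own.py writes.

import os

def read_last_complete_line(filename, probe=256):
    with open(filename, "rb") as f:
        f.seek(0, os.SEEK_END)
        size = f.tell()
        f.seek(max(0, size - probe))           # jump near the end instead of re-reading the file
        lines = f.read().decode().splitlines()
    # the final element may be a half-written record, so prefer the one before it
    return lines[-2] if len(lines) >= 2 else ""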
79,161,068
2024-11-6
https://stackoverflow.com/questions/79161068/how-to-listen-for-hotkeys-in-a-separate-thread-using-python-with-win32-api-and-p
I’m setting up a hotkey system for Windows in Python, using the Win32 API and PySide6. I want to register hotkeys in a HotkeyManager class and listen for them in a separate thread, so the GUI remains responsive. However, when I move the listening logic to a thread, the hotkey events are not detected correctly. Here’s the code that works without using threads, where hotkeys are registered and detected on the main thread: from threading import Thread from typing import Callable, Dict from win32gui import RegisterHotKey, UnregisterHotKey, GetMessage from win32con import VK_NUMPAD0, MOD_NOREPEAT class HotkeyManager: def __init__(self): self.hotkey_id = 1 self._callbacks: Dict[int, Callable] = {} def register_hotkey(self, key_code: int, callback: Callable): self._callbacks[self.hotkey_id] = callback RegisterHotKey(0, self.hotkey_id, MOD_NOREPEAT, key_code) self.hotkey_id += 1 def listen(self): while True: print("Listener started.") msg = GetMessage(None, 0, 0) hotkey_id = msg[1] if hotkey_id in self._callbacks: self._callbacks[hotkey_id]() In the main code, this setup works as expected: from PySide6 import QtWidgets from win32con import VK_NUMPAD0 def on_press(): print("Numpad 0 pressed!") app = QtWidgets.QApplication([]) manager = HotkeyManager() manager.register_hotkey(VK_NUMPAD0, on_press) manager.listen() # Initialize window widget = QtWidgets.QMainWindow() widget.show() app.exec() When I try to move the listen() method to a separate thread, however, the hotkey doesn’t respond properly: class HotkeyManager: def listen(self): def run(): while True: print("Listener started.") msg = GetMessage(None, 0, 0) hotkey_id = msg[1] if hotkey_id in self._callbacks: self._callbacks[hotkey_id]() thread = Thread(target=run, daemon=True) thread.start() How can I correctly listen for hotkeys in a separate thread without losing functionality? It seems that the issue may be due to the hotkeys being registered on the main thread while the listening logic runs in a secondary thread. How could I solve this so everything works as expected?
Regarding the statement: "Here’s the code that works without using threads", there's nothing about code in the question that actually works. Let me detail: from win32con import VK_NUMPAD0, MOD_NOREPEAT would yield ImportError. Check [GitHub]: mhammond/pywin32 - feat: add MOD_NOREPEAT (RegisterHotKey) constant (that I submitted a few minutes ago) [GitHub.MHammond]: win32gui.GetMessage (documentation) seems to be wrong, the return is a tuple of 2 elements: Underlying GetMessage return code (BOOL) A [GitHub.MHammond]: PyMSG Object Submitted [GitHub]: mhammond/pywin32 - win32gui.GetMessage documentation is incorrect for this This way of having a thread monitoring for WM_HOTKEY, seems a bit clumsy as the window has its own message loop, and if it was for me to chose, I'd try to handle these messages in the window's message processing function, but a shallow Google search didn't reveal anything useful. Posting a solution that works based on your code. The idea is to wrap the hotkey registering and listening in a function to be executed in a thread. code00.py: #!/usr/bin/env python import sys import threading import typing import win32con as wcon import win32gui as wgui from PySide6 import QtWidgets MOD_NOREPEAT = 0x4000 class HotkeyManager: def __init__(self): self.hotkey_id = 1 self._callbacks: typing.Dict[int, typing.Callable] = {} self.threads = [] def __del__(self): self.clear() def register_hotkey(self, key_code: int, callback: typing.Callable): t = threading.Thread(target=self._register_hotkey, args=(key_code, callback), daemon=True) self.threads.append(t) t.start() def _register_hotkey(self, key_code: int, callback: typing.Callable): self._callbacks[self.hotkey_id] = callback wgui.RegisterHotKey(None, self.hotkey_id, MOD_NOREPEAT, key_code) self.hotkey_id += 1 self._listen() def _listen(self): print(f"Listener started ({threading.get_ident()}, {threading.get_native_id()})") while True: res = wgui.GetMessage(None, 0, 0) print(f" GetMessage returned: {res}") # @TODO - cfati: Check what it returns! rc, msg = res hotkey_id = msg[2] if hotkey_id in self._callbacks: self._callbacks[hotkey_id]() def clear(self): for hkid in self._callbacks: wgui.UnregisterHotKey(None, hkid) self._callbacks = {} def on_press_np0(): print(f"NumPad0 pressed!") def main(*argv): print(f"Main thread ({threading.get_ident()}, {threading.get_native_id()})") app = QtWidgets.QApplication([]) widget = QtWidgets.QMainWindow() widget.setWindowTitle("SO q079161068") widget.show() print("HotKeys") hkm = HotkeyManager() hkm.register_hotkey(wcon.VK_NUMPAD0, on_press_np0) print("App mesage loop") app.exec() if __name__ == "__main__": print( "Python {:s} {:03d}bit on {:s}\n".format( " ".join(elem.strip() for elem in sys.version.split("\n")), 64 if sys.maxsize > 0x100000000 else 32, sys.platform, ) ) rc = main(*sys.argv[1:]) print("\nDone.\n") sys.exit(rc) Output: [cfati@CFATI-5510-0:e:\Work\Dev\StackExchange\StackOverflow\q079161068]> "e:\Work\Dev\VEnvs\py_pc064_03.10_test0\Scripts\python.exe" ./code00.py Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] 064bit on win32 Main thread (23772, 23772) HotKeys App mesage loop Listener started (23672, 23672) GetMessage returned: [1, (0, 786, 1, 6291456, 514232828, (1, 0))] NumPad0 pressed! GetMessage returned: [1, (0, 786, 1, 6291456, 514234437, (1919, 0))] NumPad0 pressed! GetMessage returned: [1, (0, 786, 1, 6291456, 514235703, (1919, 1079))] NumPad0 pressed! GetMessage returned: [1, (0, 786, 1, 6291456, 514237421, (0, 1079))] NumPad0 pressed! 
GetMessage returned: [1, (0, 786, 1, 6291456, 514238718, (933, 534))] NumPad0 pressed! GetMessage returned: [1, (0, 786, 1, 6291456, 514244375, (1650, 301))] NumPad0 pressed! Done. Also posting a screenshot. As a note, while the listener thread is running, the NumPad 0 keystroke doesn't reach other windows.
2
2
79,180,832
2024-11-12
https://stackoverflow.com/questions/79180832/custom-tk-scrollable-frame-scrolls-when-the-window-is-not-full
I've been trying to create a scrollable frame in with Tkinter / ttk using the many answers on here and guides scattered around. The frame is to hold a table which can have rows added and deleted. One issue I'm having that I can't find elsewhere is that if there is space for more content within the frame, I can scroll the contents of the table to the bottom of the frame. When the frame is full, it behaves as expected. I can't convert to a gif at the moment, so I've added some screenshots to try and illustrate. Here is my scrollable frame class, it's pretty standard from examples on the web; class ScrollableFrame(ttk.Frame): def __init__(self, parent): super().__init__(parent) self.canvas = tk.Canvas(self) self.scrollbar = ttk.Scrollbar(self, orient="vertical", command=self.canvas.yview) self.scrollableFrame = ttk.Frame(self.canvas) self.canvas.configure(yscrollcommand=self.scrollbar.set) self.scrollableFrame.bind("<Configure>", lambda e: self.canvas.configure(scrollregion=self.canvas.bbox("all"))) self.bind("<Enter>", self.bind_to_mousewheel) self.bind("<Leave>", self.unbind_from_mousewheel) self.scrollbar.pack(side='right', fill='y') self.canvas.pack(side='top', expand=0, fill='x') self.canvas.create_window((0, 0), window=self.scrollableFrame) def bind_to_mousewheel(self, event): self.canvas.bind_all("<MouseWheel>", self.on_mousewheel) def unbind_from_mousewheel(self, event): self.canvas.unbind_all("<MouseWheel>") def on_mousewheel(self, event): self.canvas.yview_scroll(int(-1*(event.delta/120)), "units") The table rows are added to the ScrollableFrame.scrollableFrame widget by the parent GUI, which I figured was necessary to trigger the <Configure> bound to the ScrollableFrame.canvas. Does anyone have any suggestions? No scroll required as frame not full Table still scrolls Scroll stops with row widgets at bottom of frame
I found an answer thanks to the hint from @acw1668; I adjusted the <Configure> command to check the size of the scrollregion and compare it to the parent ScrollableFrame class window. If it is smaller, I change the scroll region to be the size of the ScrollableFrame. Here are the adjustments I made to my class, including changing the window anchor to sit at the top left of the frame; class ScrollableFrame(ttk.Frame): """ """ def __init__(self, parent): """ """ ... self.scrollableFrame.bind("<Configure>", self.update_scroll_region) ... self.canvas.create_window((0, 0), window=self.scrollableFrame, anchor='nw') def update_scroll_region(self, event): bbox = self.canvas.bbox('all') sfHeight = self.winfo_height() if (bbox[3] - bbox[1]) < sfHeight: newBbox = (bbox[0], 0, bbox[2], sfHeight) self.canvas.configure(scrollregion=newBbox) else: self.canvas.configure(scrollregion=self.canvas.bbox("all")) As pointed out, it would be possible and more Pythonic to use bbox[-1] = max(bbox[-1], sfHeight), then call self.canvas.configure(scrollregion=bbox) however I found due to my canvas' placement I had to set the first Y coordinate to 0.
1
2
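A compact version of the handler in the answer, folding the comparison into max() while keeping the y-origin pinned at 0 as noted; attribute names match the ScrollableFrame class above:

def update_scroll_region(self, event):
    x0, y0, x1, y1 = self.canvas.bbox('all')
    # never let the scroll region be shorter than the visible frame
    self.canvas.configure(scrollregion=(x0, 0, x1, max(y1, self.winfo_height())))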
79,182,114
2024-11-12
https://stackoverflow.com/questions/79182114/error-importing-python-modules-in-nextflow-script-block
I have a similar problem to those described here and here. The code is as follows: process q2_predict_dysbiosis { publishDir 'results', mode: 'copy' input: path abundance_file path species_abundance_file path stratified_pathways_table path unstratified_pathways_table output: path "${abundance_file.baseName}_q2pd.tsv" script: """ #!/usr/bin/env python from q2_predict_dysbiosis import calculate_index import pandas as pd pd.set_option('display.max_rows', None) taxa = pd.read_csv("${species_abundance_file}", sep="\\t", index_col=0) paths_strat = pd.read_csv("${stratified_pathways_table}", sep="\\t", index_col=0) paths_unstrat = pd.read_csv("${unstratified_pathways_table}", sep="\\t", index_col=0) score_df = calculate_index(taxa, paths_strat, paths_unstrat) score_df.to_csv("${abundance_file.baseName}_q2pd.tsv", sep="\\t", float_format="%.2f") """ } Obtained error: Caused by: Process `q2_predict_dysbiosis (1)` terminated with an error exit status (1) Command executed: #!/usr/bin/env python from q2_predict_dysbiosis import calculate_index import pandas as pd pd.set_option('display.max_rows', None) taxa = pd.read_csv("abundance1-taxonomy_table.txt", sep="\t", index_col=0) paths_strat = pd.read_csv("pathways_stratified.txt", sep="\t", index_col=0) paths_unstrat = pd.read_csv("pathways_unstratified.txt", sep="\t", index_col=0) score_df = calculate_index(taxa, paths_strat, paths_unstrat) score_df.to_csv("abundance1_q2pd.tsv", sep="\t", float_format="%.2f") Command exit status: 1 Command output: (empty) Command error: Traceback (most recent call last): File ".command.sh", line 3, in <module> from q2_predict_dysbiosis import calculate_index ModuleNotFoundError: No module named 'q2_predict_dysbiosis' I have followed the instructions in this link, but it still doesn't work. I would like to keep the code block like that, and not run a script.py file. I am using the code from this repository. Thanks in advance! UPDATE To try to resolve the import error I have done the following: Creating a bin/ directory which is in the same directory as script.nf. No results. Changing the shebang declaration. No results. q2_predict_dysbiosis is not installed (it has no installation instructions), but it runs locally. I think the problem is that Nextflow doesn't locate q2_predict_dysbiosis.py, even though it is in the ./bin directory.
The Python import system uses the following sequence to locate packages and modules to import: The current working directory (i.e. $PWD): This is the directory from which the Python interpreter was launched. The PYTHONPATH environment variable: If set, this environment variable can specify additional directories for Python to search for packages and modules. The sys.path list in the program: The paths in this list determine where Python looks for modules, and you can modify sys.path within your code to include additional directories. System-wide or virtual environment installed packages: These are the packages that have been globally installed on the system or within a virtual environment. A quick solution is to simply set the PYTHONPATH environment variable using the env scope in your nextflow.config. For example, with q2_predict_dysbiosis.py in a folder called packages in the root directory of your project repository (i.e. the directory where the main.nf script is located): env { PYTHONPATH = "${projectDir}/packages" } Tested using main.nf: process q2_predict_dysbiosis { debug true script: """ #!/usr/bin/env python import sys print(sys.path) from q2_predict_dysbiosis import calculate_index assert 'q2_predict_dysbiosis' in sys.modules """ } workflow { q2_predict_dysbiosis() } Results: $ nextflow run main.nf N E X T F L O W ~ version 24.10.0 Launching `main.nf` [grave_avogadro] DSL2 - revision: 2f0c31286e executor > local (1) [8f/50976f] q2_predict_dysbiosis [100%] 1 of 1 ✔ [ '/path/to/project/work/8f/50976fe453d54fd6e11b3501d4b05a', '/path/to/project/packages', '/usr/lib/python312.zip', '/usr/lib/python3.12', '/usr/lib/python3.12/lib-dynload', '/usr/lib/python3.12/site-packages' ] A better solution, though, is to refactor. Move your custom code into a separate file (e.g. your_script.py), place it in your bin directory and make it executable (chmod a+x bin/your_script.py). Also move q2_predict_dysbiosis.py into this directory or into a sub-directory called utils. I use the latter in my example below. Your directory structure might look like: $ find . . 
./main.nf ./bin ./bin/utils ./bin/utils/q2_predict_dysbiosis.py ./bin/your_script.py And your_script.py might look like the following using argparse to provide a user-friendly command-line interface: #!/usr/bin/env python import argparse import pandas as pd from utils.q2_predict_dysbiosis import calculate_index pd.set_option('display.max_rows', None) def custom_help_formatter(prog): return argparse.HelpFormatter(prog, max_help_position=80) def parse_args(): parser = argparse.ArgumentParser( description="Calculate dysbiosis index using abundance and pathways tables.", formatter_class=custom_help_formatter, ) parser.add_argument( "species_abundance_file", help="Path to the species abundance file", ) parser.add_argument( "stratified_pathways_table", help="Path to the stratified pathways table file", ) parser.add_argument( "unstratified_pathways_table", help="Path to the unstratified pathways table file", ) parser.add_argument( "output_file", help="Path to the output file to save the results", ) return parser.parse_args() def main( species_abundance_file, stratified_pathways_table, unstratified_pathways_table, output_file ): taxa = pd.read_csv(species_abundance_file, sep="\t", index_col=0) paths_strat = pd.read_csv(stratified_pathways_table, sep="\t", index_col=0) paths_unstrat = pd.read_csv(unstratified_pathways_table, sep="\t", index_col=0) score_df = calculate_index(taxa, paths_strat, paths_unstrat) score_df.to_csv(output_file, sep="\t", float_format="%.2f") if __name__ == "__main__": args = parse_args() main( args.species_abundance_file, args.stratified_pathways_table, args.unstratified_pathways_table, args.output_file, ) Tested using main.nf: $ cat main.nf process q2_predict_dysbiosis { debug true script: """ your_script.py --help """ } workflow { q2_predict_dysbiosis() } Results: $ nextflow run main.nf N E X T F L O W ~ version 24.10.0 Launching `main.nf` [peaceful_stonebraker] DSL2 - revision: fea21868c7 executor > local (1) [88/538f31] q2_predict_dysbiosis [100%] 1 of 1 ✔ usage: your_script.py [-h] species_abundance_file stratified_pathways_table unstratified_pathways_table output_file Calculate dysbiosis index using abundance and pathways tables. positional arguments: species_abundance_file Path to the species abundance file stratified_pathways_table Path to the stratified pathways table file unstratified_pathways_table Path to the unstratified pathways table file output_file Path to the output file to save the results options: -h, --help show this help message and exit If your dependencies also require certain local files to run, place the required files into a sub-directory in your project repository. Declare these files in your workflow block (e.g. using data_dir = path("${projectDir}/data")) and append entries for these in your processes' input block. If the names of the input files are hardcoded in your Python script, supply a string value to path to ensure that Nextflow stages the files with the correct filename(s) (e.g. using path 'data'). Once the files are localized in the process working directory, python should be able to find them. This assumes the path(s) in your Python script are relative and not absolute paths. If they are absolute paths, you will need to make them relative. 
A minimal example might look like: process test_proc { debug true input: path 'data' script: """ ls -1 data/{foo,bar,baz}.txt """ } workflow { data_dir = "${projectDir}/data" test_proc( data_dir ) } $ mkdir data $ touch data/{foo,bar,baz}.txt $ nextflow run main.nf N E X T F L O W ~ version 24.10.0 Launching `main.nf` [prickly_nightingale] DSL2 - revision: 83d939e180 executor > local (1) [ec/bf1f56] process > test_proc [100%] 1 of 1 ✔ data/bar.txt data/baz.txt data/foo.txt
1
2
79,182,908
2024-11-12
https://stackoverflow.com/questions/79182908/how-can-i-implement-email-verification-in-django
Completely stumped! I'm using the console as my email backend. I end up with False in token_generator.check_token as a result "Invalid or expired token." is displayed in my homepage when I navigate to say "http://localhost:8000/user/verify-email/?token=cgegv3-ec1fe9eb2cebc34e240791d72fb10d7d&[email protected]" Here's my code from django.contrib.auth.tokens import PasswordResetTokenGenerator class CustomPasswordResetTokenGenerator(PasswordResetTokenGenerator): pass # Define a single instance of the token generator token_generator = CustomPasswordResetTokenGenerator() def verify_email(request): email = request.GET.get("email") token = request.GET.get("token") try: user = CustomUser.objects.get(email=email) except CustomUser.DoesNotExist: messages.error(request, "Invalid verification link.") return redirect("home") if token_generator.check_token(user, token): user.is_active = True user.save() messages.success(request, "Your email has been verified!") return redirect("sign_in") else: messages.error(request, "Invalid or expired token.") return redirect("home") from django.core.mail import send_mail from django.urls import reverse from user_management.utils import token_generator def send_verification_email(user, request): token = token_generator.make_token(user) verification_url = request.build_absolute_uri( reverse("verify_email") + f"?token={token}&email={user.email}" ) send_mail( "Verify your email", f"Click the link to verify your email: {verification_url}", "[email protected]", [user.email], fail_silently=False, )
The code you posted seems fine. The check_token method performs several checks and figuring out which exact one fails should lead you to a solution. You can add breakpoints or print statements in place where the Django package is installed or bring the code into your project. Since you're already subclassing PasswordResetTokenGenerator you can: from django.conf import settings from django.contrib.auth.tokens import PasswordResetTokenGenerator from django.utils.crypto import constant_time_compare from django.utils.http import int_to_base36 class CustomPasswordResetTokenGenerator(PasswordResetTokenGenerator): def check_token(self, user, token): """ Check that a password reset token is correct for a given user. """ # Use breakpoints / print statements to figure out # which of the conditions fails here and where to go next if not (user and token): return False # Parse the token try: ts_b36, _ = token.split("-") except ValueError: return False try: ts = base36_to_int(ts_b36) except ValueError: return False # Check that the timestamp/uid has not been tampered with for secret in [self.secret, *self.secret_fallbacks]: if constant_time_compare( self._make_token_with_timestamp(user, ts, secret), token, ): break else: return False # Check the timestamp is within limit. if (self._num_seconds(self._now()) - ts) > settings.PASSWORD_RESET_TIMEOUT: return False return True # Define a single instance of the token generator token_generator = CustomPasswordResetTokenGenerator()
2
2
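One debugging hint worth keeping in mind while stepping through check_token as suggested: the stock PasswordResetTokenGenerator hashes the user's password and last_login (and email), so a login or password change between sending the mail and clicking the link invalidates the token. For email verification it is common to tie the hash to is_active instead, so the link also stops working once the account is activated; a sketch along those lines (the field choice is an assumption, adjust to your user model):

from django.contrib.auth.tokens import PasswordResetTokenGenerator

class EmailVerificationTokenGenerator(PasswordResetTokenGenerator):
    def _make_hash_value(self, user, timestamp):
        # is_active flips after verification, which expires already-used links
        return f"{user.pk}{timestamp}{user.is_active}{user.email}"

token_generator = EmailVerificationTokenGenerator()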
79,172,747
2024-11-9
https://stackoverflow.com/questions/79172747/polars-ndcg-optimized-calculation
The problem here is to implement NDCG calculation on Polars that would be efficient for huge datasets. Main idea of NDCG is to calculate DCG and IDCG, let's skip the gain part and only think about discount part, which depends on ranks from ideal and proposed orderings. So the tricky part for me here is to properly and effectively calculate the positions of similar items from ideal and proposed parts, i.e.: ideal: a b c d e f propsed: d b g e h so we have intersection of {b, d, e} items with idx_ideal=[2,4,5] (starting from 1) and idx_propsed=[2,1,4] So what I want is to calculate those idx_proposed and idx_ideal for dataframe with columns (user, ideal, proposed), so the resulting DF would have columns: (user, ideal, proposed, idx_ideal, idx_propsed) # so its only ideal idx, then I create extra for proposed idx and join them ( df .explode('ideal') .with_columns(idx=pl.int_range(pl.len()).over('user')) .filter(pl.col('ideal').is_in(pl.col('proposed'))) .group_by('user', maintain_order=True) .agg(pl.col('idx')) ) I explode ideal over user and find position by adding extra idx column and leaving only ideal=proposed rows, but this results in extra DF with a subset of rows, which I would have to join back, which is probably not very optimal. Then I have to calculate it once again for proposed side. Moreover, I will have to explode over (idx_ideal, idx_proposed) on the next step to calculate user's IDCG, DCG and NDCG. Could you help me optimize those calculations? I think I should use that users do not interact with each-other and separate rows could be processed separately. Here is the random data generator import polars as pl import random num_users = 100_000 min_len = 10 max_len = 200 item_range = 10_000 def generate_user_data(): length = random.randint(min_len, max_len) ideal = random.sample(range(item_range), length) length = random.randint(min_len, max_len) predicted = random.sample(range(item_range), length) return ideal, predicted data = [] for user_id in range(num_users): ideal, predicted = generate_user_data() data.append({ 'user': user_id, 'ideal': ideal, 'proposed': predicted }) df = pl.DataFrame(data) print(df.head()) shape: (5, 3) ┌──────┬──────────────────────┬──────────────────────┐ │ user ┆ ideal ┆ proposed │ │ --- ┆ --- ┆ --- │ │ i64 ┆ list[i64] ┆ list[i64] │ ╞══════╪══════════════════════╪══════════════════════╡ │ 0 ┆ [9973, 313, … 5733] ┆ [8153, 3461, … 4602] │ │ 1 ┆ [3756, 9053, … 1014] ┆ [435, 9407, … 6159] │ │ 2 ┆ [8152, 1615, … 2873] ┆ [5078, 9006, … 8157] │ │ 3 ┆ [6104, 2929, … 2606] ┆ [5110, 790, … 363] │ │ 4 ┆ [1863, 6801, … 271] ┆ [5571, 5555, … 5591] │ └──────┴──────────────────────┴──────────────────────┘
In my testing, I got the fastest results by first getting the intersection: .list.set_intersection() There is currently no List API method to get the index positions, but it can be emulated if we .flatten() the lists .over() each "row". (we use with_row_index as the group id) .arg_true() is used to get the indexes where the .is_in() check is True mapping_strategy="join" will give us back a list column. pl.Config(fmt_str_lengths=100, fmt_table_cell_list_len=10) df = pl.DataFrame({ "user": [1, 2], "ideal": [["a", "b", "c", "d", "a"], ["x", "y", "z"]], "proposed": [["f", "a", "e", "r", "d"], ["y", "e", "s"]] }) (df .with_row_index() .with_columns( pl.col("ideal").list.set_intersection("proposed") .alias("intersection") ) .with_columns( pl.col("ideal").flatten().is_in(pl.col("intersection").flatten()) .arg_true() .over("index", mapping_strategy="join") .alias("idx_ideal"), pl.col("proposed").flatten().is_in(pl.col("intersection").flatten()) .arg_true() .over("index", mapping_strategy="join") .alias("idx_proposed") ) ) shape: (2, 7) ┌───────┬──────┬───────────────────────────┬───────────────────────────┬──────────────┬───────────┬──────────────┐ │ index ┆ user ┆ ideal ┆ proposed ┆ intersection ┆ idx_ideal ┆ idx_proposed │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ u32 ┆ i64 ┆ list[str] ┆ list[str] ┆ list[str] ┆ list[u32] ┆ list[u32] │ ╞═══════╪══════╪═══════════════════════════╪═══════════════════════════╪══════════════╪═══════════╪══════════════╡ │ 0 ┆ 1 ┆ ["a", "b", "c", "d", "a"] ┆ ["f", "a", "e", "r", "d"] ┆ ["a", "d"] ┆ [0, 3, 4] ┆ [1, 4] │ │ 1 ┆ 2 ┆ ["x", "y", "z"] ┆ ["y", "e", "s"] ┆ ["y"] ┆ [1] ┆ [0] │ └───────┴──────┴───────────────────────────┴───────────────────────────┴──────────────┴───────────┴──────────────┘ First appearance only As per the updated requirements, you could add .is_first_distinct() to pick the first appearance in the case of duplicate values. (df.with_row_index() .with_columns(pl.col("ideal").list.set_intersection("proposed").alias("intersection")) .with_columns( ( pl.col("ideal").flatten().is_first_distinct() & pl.col("ideal").flatten().is_in(pl.col("intersection").flatten()) ) .arg_true() .over("index", mapping_strategy="join") .alias("idx_ideal") ) ) shape: (2, 6) ┌───────┬──────┬───────────────────────────┬───────────────────────────┬──────────────┬───────────┐ │ index ┆ user ┆ ideal ┆ proposed ┆ intersection ┆ idx_ideal │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ u32 ┆ i64 ┆ list[str] ┆ list[str] ┆ list[str] ┆ list[u32] │ ╞═══════╪══════╪═══════════════════════════╪═══════════════════════════╪══════════════╪═══════════╡ │ 0 ┆ 1 ┆ ["a", "b", "c", "d", "a"] ┆ ["f", "a", "e", "r", "d"] ┆ ["a", "d"] ┆ [0, 3] │ │ 1 ┆ 2 ┆ ["x", "y", "z"] ┆ ["y", "e", "s"] ┆ ["y"] ┆ [1] │ └───────┴──────┴───────────────────────────┴───────────────────────────┴──────────────┴───────────┘ Build expressions with a function As the expression has become rather complex, it may be desirable to put it inside a helper function. def index_of_intersection(expr): return ( ( expr.flatten().is_first_distinct() & expr.flatten().is_in(pl.col("intersection").flatten()) ) .arg_true() .over("index", mapping_strategy="join") .name.prefix("idx_") ) And use .pipe() to call it. 
(df .with_row_index() .with_columns( pl.col("ideal").list.set_intersection("proposed") .alias("intersection") ) .with_columns( pl.col("ideal", "proposed").pipe(index_of_intersection) ) ) shape: (2, 7) ┌───────┬──────┬───────────────────────────┬───────────────────────────┬──────────────┬───────────┬──────────────┐ │ index ┆ user ┆ ideal ┆ proposed ┆ intersection ┆ idx_ideal ┆ idx_proposed │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ u32 ┆ i64 ┆ list[str] ┆ list[str] ┆ list[str] ┆ list[u32] ┆ list[u32] │ ╞═══════╪══════╪═══════════════════════════╪═══════════════════════════╪══════════════╪═══════════╪══════════════╡ │ 0 ┆ 1 ┆ ["a", "b", "c", "d", "a"] ┆ ["f", "a", "e", "r", "d"] ┆ ["a", "d"] ┆ [0, 3] ┆ [1, 4] │ │ 1 ┆ 2 ┆ ["x", "y", "z"] ┆ ["y", "e", "s"] ┆ ["y"] ┆ [1] ┆ [0] │ └───────┴──────┴───────────────────────────┴───────────────────────────┴──────────────┴───────────┴──────────────┘
2
1
79,180,366
2024-11-12
https://stackoverflow.com/questions/79180366/unable-to-set-any-property-when-protecting-an-excel-worksheet-using-pywin32
I am using pywin32 for a project to automate a few excel files using python. In the excel file, I want to protect all the cells that contain a formula. So, I first unlock all the cells and then only lock those cells which has a formula. When I protect the worksheet with a password, I also pass all the relevant protection properties such as; AllowFormattingCells, AllowFormattingColumns, AllowFormattingRows, AllowInsertingColumns, AllowInsertingRows, AllowSorting and AllowFiltering and set them to True. However, if I check what the properties are after I have protected the worksheet, they return as False. When I open the file using Excel, the sheet is being protected and I am able to edit the contents of the unlocked cells but I am unable to format, filter, insert columns or rows or do anything else. Relevant Information Python version: 3.11.9 pywin32 version: 306 Excel Version: Microsoft® Excel® for Microsoft 365 MSO (Version 2402 Build 16.0.17328.20550) 64-bit Windows: Edition Windows 11 Enterprise Version 23H2 Installed on ‎18-‎10-‎2023 OS build 22631.4317 Experience Windows Feature Experience Pack 1000.22700.1041.0 Below is the python code for reproducibility import win32com.client excel_app = win32com.client.DispatchEx("Excel.Application") excel_app.Visible = False workbook = excel_app.Workbooks.Open("path_to_file.xlsx") sheet = workbook.Sheets("Sheet1") all_cells = sheet.Cells merge_cells = sheet.Cells(1, 1).MergeArea edited_cell = merge_cells.Cells(1, 1) value = edited_cell.Formula if edited_cell.HasFormula else edited_cell.Value edited_cell.Formula = "=1+1" formula_cells = all_cells.SpecialCells(Type=-4123) # -4213 represent xlCellTypeFormulas all_cells.Locked = False formula_cells.Locked = True if isinstance(value, str) and value.startswith("="): edited_cell.Formula = value else: edited_cell.Value = value merge_cells.Locked = False sheet.Protect( Password="random_password", Contents=True, UserInterfaceOnly=True, AllowFormattingCells=True, AllowFormattingColumns=True, AllowFormattingRows=True, AllowInsertingColumns=True, AllowInsertingRows=True, AllowSorting=True, AllowFiltering=True, ) print("AllowFormattingCells: ", sheet.Protection.AllowFormattingCells) print("AllowFormattingColumns: ", sheet.Protection.AllowFormattingColumns) print("AllowFormattingRows: ", sheet.Protection.AllowFormattingRows) print("AllowInsertingColumns: ", sheet.Protection.AllowInsertingColumns) print("AllowInsertingRows: ", sheet.Protection.AllowInsertingRows) print("AllowSorting: ", sheet.Protection.AllowSorting) print("AllowFiltering: ", sheet.Protection.AllowFiltering) workbook.Save() excel_app.Quit() When I manually set the protection while using Excel, the protection is working as expected. However, when I am setting it using pywin32, protection is working, but the protections properties are not set, hence I am unable to add rows, use filters, etc. I've tried all combinations for the 'Contents' and the 'UserInterfaceOnly' to see if that may cause it to change.
I tested originally on a Windows 10 PC with Excel 2013 and, as stated, your code worked. I then tried a Windows 10 PC with Excel 2021 and it appeared to exhibit the same issue as you: the protections remained False after being set to True. But another run later it all worked fine on that Excel version. I tested with older versions of pywin32 (306, 307) and then upgraded to 308.
Anyway, I have applied the same protections using xlwings (which also drives the Excel app and is similar to win32com) and it works on both my systems, so you can try it with your setup, assuming there is no issue with using xlwings.
import xlwings as xw
from xlwings import constants

in_excelfile = 'unprotected.xlsx'
out_excelfile = 'protected.xlsx'

with xw.App(visible=True) as xl:
    wb = xw.Book(in_excelfile)
    ws = wb.sheets["Sheet1"]

    # Get range of used cells, this is the range of cells in the sheet with values
    sheet_range = ws.used_range

    # Set all cells in the sheet_range Locked status to False
    ws.range(sheet_range).api.Locked = False

    # Select all cells with formulas in the sheet_range and set Locked to True
    ws.range(sheet_range).api.SpecialCells(constants.CellType.xlCellTypeFormulas, 23).Locked = True

    # Set the protection states
    ws.api.Protect(
        Password='random_password',
        Contents=True,
        UserInterfaceOnly=True,
        AllowFormattingCells=True,
        AllowFormattingColumns=True,
        AllowFormattingRows=True,
        AllowInsertingColumns=True,
        AllowInsertingRows=True,
        AllowSorting=True,
        AllowFiltering=True,
    )

    # Print the status of the protections
    print(f"AllowFormattingCells: {ws.api.Protection.AllowFormattingCells}")
    print(f"AllowFormattingColumns: {ws.api.Protection.AllowFormattingColumns}")
    print(f"AllowFormattingRows: {ws.api.Protection.AllowFormattingRows}")
    print(f"AllowInsertingColumns: {ws.api.Protection.AllowInsertingColumns}")
    print(f"AllowInsertingRows: {ws.api.Protection.AllowInsertingRows}")
    print(f"AllowSorting: {ws.api.Protection.AllowSorting}")
    print(f"AllowFiltering: {ws.api.Protection.AllowFiltering}")

    # Save file
    wb.save(out_excelfile)
If xlwings doesn't work for you, there is also openpyxl, which doesn't use the Excel app but rather edits the underlying XML files that make up the XLSX file; that might do the trick.
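For reference, a rough openpyxl sketch of the same idea. It is untested here, and it assumes that openpyxl's SheetProtection flags mean "blocked when True", so setting them to False is what leaves the action allowed (worth double-checking against the openpyxl docs):
from openpyxl import load_workbook
from openpyxl.styles import Protection

wb = load_workbook('unprotected.xlsx')
ws = wb['Sheet1']

# Unlock every cell, then re-lock only the cells that hold a formula
for row in ws.iter_rows():
    for cell in row:
        cell.protection = Protection(locked=(cell.data_type == 'f'))

# Enable protection; False here should leave the action available to users
ws.protection.sheet = True
ws.protection.password = 'random_password'
ws.protection.formatCells = False
ws.protection.formatColumns = False
ws.protection.formatRows = False
ws.protection.insertColumns = False
ws.protection.insertRows = False
ws.protection.sort = False
ws.protection.autoFilter = False

wb.save('protected.xlsx')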
2
0
79,182,496
2024-11-12
https://stackoverflow.com/questions/79182496/how-do-i-find-all-combinations-of-pairs-such-that-no-elements-of-the-combinatio
I have a list of number-letter pairs like: all_edges_array = [ [1,'a'],[1,'b'],[1,'c'], [2,'c'],[2,'d'], [3,'b'],[3,'c'] ] Notice that the input pairs are not a cross-product of the letters and numbers used - for example, [2, 'a'] is missing. I want to efficiently find the combinations of some number of pairs, such that within a group, no two pairs use the same number or the same letter. For the above input, there should be 5 total results: [([1, 'a'], [2, 'c'], [3, 'b']), ([1, 'a'], [2, 'd'], [3, 'b']), ([1, 'a'], [2, 'd'], [3, 'c']), ([1, 'b'], [2, 'd'], [3, 'c']), ([1, 'c'], [2, 'd'], [3, 'b'])]. Other combinations are not valid: for example, ([1, 'a'], [1, 'b'], [3, 'c']) contains two pairs using the same number (1), and ([1, 'b'], [2, 'c'], [3, 'b']) contains two pairs using the same letter (b). I have code which brute-forces this by using itertools.combinations and then filtering the result: from itertools import combinations number_letter_dict = {1:['a','b','c'], 2:['c','d'], 3:['b','c']} # create the edges based on the connections in the number_letter_dict all_edges_array = [] for key in number_letter_dict.keys(): for item in number_letter_dict[key]: all_edges_array.append([key, item]) # get the number of connections relative to the number of keys in dict number_of_connections = len(number_letter_dict.keys()) # Find all 35 combinations all_combinations_array = list(combinations(all_edges_array, number_of_connections)) # cut down the list of combinations to what I actually need all_good_combinations = [] for collection in all_combinations_array: duplicated_item = False seen_indices = [] for item in collection: if item[0] in seen_indices: duplicated_item = True break if item[1] in seen_indices: duplicated_item = True break seen_indices.append(item[0]) seen_indices.append(item[1]) # all clear--add the collection! :) if not duplicated_item: all_good_combinations.append(collection) This works, but it's inefficient - it takes an unacceptably long time to run for my actual input. Many more combinations are generated than are valid, which only gets worse the more edges and connections there are. How can I improve on this algorithm? I assume that it involves not generating invalid combinations in the first place, but I don't see a way to accomplish that. I found the previous Q&A Python: How to generate all combinations of lists of tuples without repeating contents of the tuple, but it doesn't answer my question. The answers there assume that the input contains all possible pairs (and also that the number of pairs in the combinations should be equal to the number of possibilities for the more constrained pair element). EDIT: I have replaced some of the minimized code because it caused more confusion than it saved: oops? Also, this code does work. When given enough time it will reliably give the correct answer. That said, spending five days processing one image is not quite fast enough for my tastes.
It is always inefficient to generate all combinations and throw away unwanted combinations when you can selectively generate wanted combinations to begin with. The desired combinations can be most efficiently generated by recursively yielding combinations of unused letters paired with the number of the current recursion level, where a set can be used to keep track used letters: def unique_combinations(number_letter_dict): def _unique_combinations(index): if index == size: yield () return number, letters = number_letters[index] for letter in letters: if letter not in used_letters: used_letters.add(letter) for combination in _unique_combinations(index + 1): yield [number, letter], *combination used_letters.remove(letter) number_letters = list(number_letter_dict.items()) size = len(number_letters) used_letters = set() return list(_unique_combinations(0)) so that: number_letter_dict = {1: ['a', 'b', 'c'], 2: ['c', 'd'], 3: ['b', 'c']} print(unique_combinations(number_letter_dict)) outputs: [([1, 'a'], [2, 'c'], [3, 'b']), ([1, 'a'], [2, 'd'], [3, 'b']), ([1, 'a'], [2, 'd'], [3, 'c']), ([1, 'b'], [2, 'd'], [3, 'c']), ([1, 'c'], [2, 'd'], [3, 'b'])] Demo: https://ideone.com/FfZwod
3
2
79,181,977
2024-11-12
https://stackoverflow.com/questions/79181977/in-python-pytz-how-can-i-add-a-day-to-a-datetime-in-a-dst-aware-fashion-s
I'm doing some datetime math in python with the pytz library (although I'm open to using other libraries if necessary). I have an iterator that needs to increase by one day for each iteration of the loop. The problem comes when transitioning from November 3rd to November 4th in the Eastern timezone, which crosses the daylight saving boundary (there are 25 hours between the start of November 3rd and the start of November 5th, instead of the usual 24). Whenever I add a "day" that crosses the boundary, I get a time that is 24 hours in the future, instead of the expected 25. This is what I've tried: import datetime import pytz ET = pytz.timezone("US/Eastern") first_day = ET.localize(datetime.datetime(2024, 11, 3)) next_day = first_day + datetime.timedelta(days=1) first_day.isoformat() # '2024-11-03T00:00:00-04:00' next_day.isoformat() # '2024-11-04T00:00:00-04:00' assert next_day == ET.localize(datetime.datetime(2024, 11, 4)) # This fails!! # I want next_day to be '2024-11-04T00:00:00-05:00' or '2024-11-04T01:00:00-04:00' I also tried throwing a normalize() in there, but that didn't produce the right result either: ET.normalize(next_day).isoformat() # '2024-11-03T23:00:00-05:00' (That's one hour earlier than my desired output) I suppose I could make a copy of my start_day that increments the day field, but then I'd have to be aware of month and year boundaries, which doesn't seem ideal to me.
It looks like you want "wall time", which is the same "wall clock" time the next day, regardless of daylight savings time transitions. I would use the built-in zoneinfo module. You may need to install the "1st party" tzdata module if using Windows to have up-to-date time zone information (pip install tzdata): import datetime as dt import zoneinfo as zi ET = zi.ZoneInfo('US/Eastern') first_day = dt.datetime(2024, 11, 3, tzinfo=ET) # time zone aware next_day = first_day + dt.timedelta(days=1) # "wall clock" math in time zone print(first_day) print(next_day) assert next_day == dt.datetime(2024, 11, 4, tzinfo=ET) Output: 2024-11-03 00:00:00-04:00 2024-11-04 00:00:00-05:00 Note that if instead you want exactly 24 hours later regardless of DST transitions, do the time delta math in UTC: import datetime as dt import zoneinfo as zi ET = zi.ZoneInfo("US/Eastern") first_day = dt.datetime(2024, 11, 3, tzinfo=ET) # time zone aware # Convert to UTC, do math, convert back to desired time zone next_day = (first_day.astimezone(dt.UTC) + dt.timedelta(days=1)).astimezone(ET) print(first_day) print(next_day) Output: 2024-11-03 00:00:00-04:00 2024-11-03 23:00:00-05:00
1
2
79,179,293
2024-11-11
https://stackoverflow.com/questions/79179293/parallel-depth-first-ops-in-dagster-with-ops-graphs-and-jobs-together
(also posted on r/dagster) Dagster N00b here. I have a very specific use-case. My ETL executes the following steps: Query a DB to get a list of CSV files Go to a filesystem and for each CSV file: load it into DuckDB transform some columns to date transform some numeric codes to text categories export clean table to a .parquet file run a profile report for the clean data The DuckDB tables are named just the same as the CSV files for convenience. 2a through 2e can be done in parallel FOR EACH CSV FILE. Within the context of a single CSV file, they need to run SERIALLY. My current code is: @op def get_csv_filenames(context) -> List[str]: @op(out=DynamicOut()) def generate_subtasks(context, csv_list:List[str]): for csv_filename in csv_list: yield DynamicOutput(csv_filename, mapping_key=csv_filename) def load_csv_into_duckdb(context, csv_filename) def transform_dates(context, csv_filename) def from_code_2_categories(context, csv_filename) def export_2_parqu
If I understand correctly, you want depth-first processing instead of breadth-first? I think you might be able to trigger depth-first processing by using a nested graph after the dynamic output step. You're also conceptually missing how to set dependencies between ops in Dagster. Something like this should work (note that a @graph body is purely a composition function, so it doesn't take a context parameter):
@op
def get_csv_filenames(context) -> List[str]:
    ...

@op(out=DynamicOut())
def generate_subtasks(context, csv_list: List[str]):
    for csv_filename in csv_list:
        yield DynamicOutput(csv_filename, mapping_key=csv_filename)

@op
def load_csv_into_duckdb(context, csv_filename):
    ...
    return csv_filename

@op
def transform_dates(context, csv_filename):
    ...
    return csv_filename

@op
def from_code_2_categories(context, csv_filename):
    ...
    return csv_filename

@op
def export_2_parquet(context, csv_filename):
    ...
    return csv_filename

@op
def profile_dataset(context, csv_filename):
    ...
    return csv_filename

@graph
def process(csv_filename: str):
    profile_dataset(export_2_parquet(from_code_2_categories(transform_dates(load_csv_into_duckdb(csv_filename)))))

@job
def pipeline():
    csv_filename_list = get_csv_filenames()
    generate_subtasks(csv_filename_list).map(process)
2
2
79,181,689
2024-11-12
https://stackoverflow.com/questions/79181689/polars-read-csv-to-read-from-string-and-not-from-file
Is it possible to read from string with pl.read_csv() ? Something like this, which would work : content = """c1, c2 A,1 B,3 C,2""" pl.read_csv(content) I know of course about this : pl.DataFrame({"c1":["A", "B", "C"],"c2" :[1,3,2]}) But it is error-prone with long tables and you have to count numbers to know which value to modify. I also know about dictionaries but I have more than 2 columns in my real life example. Context: I used to fread() content with R data.table and it was very useful, especially when you want to convert a column with the help of a join, instead of complicated ifelse() statements Thanks !
pl.read_csv() accepts IO as source parameter. source: str | Path | IO[str] | IO[bytes] | bytes So you can use io.StringIO: from io import StringIO content = """ c1,c2 A,1 B,3 C,2 """ data = StringIO(content) pl.read_csv(data) shape: (3, 2) ┌─────┬─────┐ │ c1 ┆ c2 │ │ --- ┆ --- │ │ str ┆ i64 │ ╞═════╪═════╡ │ A ┆ 1 │ │ B ┆ 3 │ │ C ┆ 2 │ └─────┴─────┘ As you can see above, you can also pass bytes as source parameter. You can use str.encode() method for that: content = """ c1,c2 A,1 B,3 C,2 """ pl.read_csv(content.encode()) shape: (3, 2) ┌─────┬─────┐ │ c1 ┆ c2 │ │ --- ┆ --- │ │ str ┆ i64 │ ╞═════╪═════╡ │ A ┆ 1 │ │ B ┆ 3 │ │ C ┆ 2 │ └─────┴─────┘
2
5
79,180,528
2024-11-12
https://stackoverflow.com/questions/79180528/draw-a-circle-with-periodic-boundary-conditions-matplotlib
I am doing a project that involves lattices. A point of coordinates (x0, y0) is chosen randomly and I need to color blue all the points that are in the circle of center (x0, y0) and radius R and red all the other points and then draw a circle around. The tricky part is that there is periodic boundary conditions, meaning that if my circle is near the left border then I need to draw the rest of it on the right side, the same goes for up and down. Here is my code that plots the lattice, I have managed to color the points depending on whether or not they are in the circle but I am yet to draw the circle. from matplotlib import pyplot as plt import numpy as np class lattice: def __init__(self, L): self.L = L self.positions = np.array([[[i, j] for i in range(L)] for j in range(L)]) def draw_lattice(self, filename): X = self.positions[:, :, 0].flatten() Y = self.positions[:, :, 1].flatten() plt.scatter(X, Y, s=10) plt.xticks([]) plt.yticks([]) plt.title("Lattice") plt.savefig(filename) def dist_centre(self): x0, y0 = np.random.randint(0, self.L), np.random.randint(0, self.L) self.c0 = (x0, y0) self.distance = np.zeros((self.L, self.L)) for i in range(self.L): for j in range(self.L): x = self.positions[i, j, 0] y = self.positions[i, j, 1] # Distance with periodic boundary conditions. Dx = -self.L/2 + ((x0-x)+self.L/2)%self.L Dy = -self.L/2 + ((y0-y)+self.L/2)%self.L dist = np.sqrt(Dx**2 + Dy**2) self.distance[i, j] = dist def draw_zone(self, filename, R): colormap = np.where(self.distance <= R, "blue", "red").flatten() X = self.positions[:, :, 0].flatten() Y = self.positions[:, :, 1].flatten() plt.clf() plt.scatter(X, Y, s=10, color=colormap) plt.xticks([]) plt.yticks([]) plt.title("Lattice") plt.savefig(filename) if __name__ == "__main__": L = 10 R = 3 filename = "test.pdf" latt = lattice(L) latt.draw_lattice(filename) latt.dist_centre() latt.draw_zone(filename, R) The formula for the distance is modified because of the periodic boundary conditions.
The comment from Tino_D gave me the answer. I imagined a bigger lattice: my lattice plus the 8 lattices surrounding it, drew a total of 9 circles with centers translated into each of those sub-lattices, and then restricted my plot to the original lattice.
def draw_zone(self, filename, R):
    colormap = np.where(self.distance <= R, "blue", "red").flatten()
    X = self.positions[:, :, 0].flatten()
    Y = self.positions[:, :, 1].flatten()
    x0, y0 = self.c0
    L = self.L
    centers = [(x0-L, y0-L), (x0-L, y0), (x0-L, y0+L),
               (x0, y0-L), (x0, y0), (x0, y0+L),
               (x0+L, y0-L), (x0+L, y0), (x0+L, y0+L)]
    plt.clf()
    for (x, y) in centers:
        circle = plt.Circle((x, y), R, alpha=0.2, color="grey")
        plt.gca().add_patch(circle)
    plt.scatter(X, Y, s=10, color=colormap)
    plt.xticks([])
    plt.yticks([])
    plt.xlim(0, self.L-1)
    plt.ylim(0, self.L-1)
    plt.title("Lattice")
    plt.savefig(filename)
3
3
79,177,394
2024-11-11
https://stackoverflow.com/questions/79177394/evaluate-expression-inside-custom-class-in-polars
I am trying to extend the functionality of polars to manipulate categories of Enum. I am following this guide and this section of documentation orig_df = pl.DataFrame({ 'idx': pl.int_range(5, eager=True), 'orig_series': pl.Series(['Alpha', 'Omega', 'Alpha', 'Beta', 'Gamma'], dtype=pl.Enum(['Alpha', 'Beta', 'Gamma', 'Omega']))}) @pl.api.register_expr_namespace('fct') class CustomEnumMethodsCollection: def __init__(self, expr: pl.Expr): self._expr = expr def rev(self) -> pl.Expr: cats = self._expr.cat.get_categories() tmp_sr = self._expr.cast(pl.Categorical) return tmp_sr.cast(dtype=pl.Enum(cats.str.reverse())) (orig_df .with_columns(rev_series=pl.col("orig_series").fct.rev()) ) This errors with TypeError: Series constructor called with unsupported type 'Expr' for the values parameter because cats is an unevaluated expression, not a list or a series, as pl.Enum(dtype=) expects it. How do I evaluate the cats into the actual list/series to provide the new categories for my cast(pl.Enum) method?
You can use .map_batches() @pl.api.register_expr_namespace('fct') class CustomEnumMethodsCollection: def __init__(self, expr: pl.Expr): self._expr = expr def rev(self) -> pl.Expr: return self._expr.map_batches(lambda s: s.cast(pl.Enum(s.cat.get_categories().reverse())) ) df = pl.DataFrame({ 'idx': pl.int_range(5, eager=True), 'orig_series': pl.Series(['Alpha', 'Omega', 'Alpha', 'Beta', 'Gamma'], dtype=pl.Enum(['Alpha', 'Beta', 'Gamma', 'Omega']))}) df.with_columns(rev_series=pl.col('orig_series').fct.rev()).schema Schema([('idx', Int64), ('orig_series', Enum(categories=['Alpha', 'Beta', 'Gamma', 'Omega'])), ('rev_series', Enum(categories=['Omega', 'Gamma', 'Beta', 'Alpha']))])
1
2
79,179,193
2024-11-11
https://stackoverflow.com/questions/79179193/calculating-the-correlation-coefficient-of-time-series-data-of-unqual-length
Suppose you have a dataframe like this data = {'site': ['A', 'A', 'B', 'B', 'C', 'C'], 'item': ['x', 'x', 'x', 'x', 'x', 'x'], 'date': ['2023-03-01', '2023-03-10', '2023-03-20', '2023-03-27', '2023-03-5', '2023-03-12'], 'quantity': [10,20,30, 20, 30, 50]} df_sample = pd.DataFrame(data=data) df_sample.head() Where you have different sites and items with a date and quantity. Now, what you want to do is calculate the correlation between say site A and site B for item x and their associated quantity. Although, they could be of different length in the dataframe. How would you go about doing this. The actual data in consideration here can be found here here. Now, what I tried was just setting up two different dataframes like this df1 = df_sample[(df_sample['site'] == 'A']) & (df_sample['item'] == 'x')] df2 = df_sample[(df_sample['site'] == 'B']) & (df_sample['item'] == 'x')] then just force them to have the same size, and calculate the correlation coefficient from there but I am sure there is a better way to do this.
Reshape to wide form with pivot_table and add zeros to missing data points, this will allow a correct comparison. You can then select the item you want and compute the correlation of all combinations of columns with corr: tmp = df_sample.pivot_table(index='date', columns=['item', 'site'], values='quantity', fill_value=0) out = tmp['x'].corr() Output: site A B C site A 1.000000 -0.449618 -0.442627 B -0.449618 1.000000 -0.464363 C -0.442627 -0.464363 1.000000 Intermediate tmp: item x site A B C date 2023-03-01 10.0 0.0 0.0 2023-03-10 20.0 0.0 0.0 2023-03-12 0.0 0.0 50.0 2023-03-20 0.0 30.0 0.0 2023-03-27 0.0 20.0 0.0 2023-03-5 0.0 0.0 30.0
1
1
79,161,150
2024-11-6
https://stackoverflow.com/questions/79161150/how-to-publish-an-update-to-pip-just-for-older-python-versions
I have a library published on pip which previously had a minimum Python version of 3.7, and now has a minimum Python version of 3.9. This means that, when a user with Python 3.7 or 3.8 does pip install my-package, they silently get the last version that was published with 3.7 support, rather than the most recent version. This means they're missing updates that I've made since then; in particular, I just changed the format of an external file that my library fetches as part of its operation, and the old version just breaks on it. Is there any way to publish a new version of the library just for Python 3.7, so I can print a deprecation message and quit, rather than having it fail with mysterious errors? Would it work, for example, to just go back to the last 3.7 commit, add the deprecation message and kill switch, change my setuptools to advertise 3.7 support, and publish that on pip as a new version, then immediately revert back to my existing code (advertising 3.9) and publish that on top?
Note: pip is an installer, not a package index. In this answer I'm guessing that "published on pip" means "published to a repository which pip can install from", such as PyPI. Since you're publishing to a package index, you must have version strings because it is a required field in the packaging metadata. Since you're specifying a minimum package version, you must also be setting the Requires-Python metadata field. Suppose the last version of your package which had a minimum supported Python version of 3.7 was packaged like this: In setup.py: from setuptools import setup setup( name="mypkg", version="1.2.3", python_requires=">=3.7", ) Or in pyproject.toml: [project] name = "mypkg" version = "1.2.3" requires-python = ">=3.7" And the first version which had a minimum supported Python version of 3.9 was packaged like this: In setup.py: from setuptools import setup setup( name="mypkg", version="2.0.0", python_requires=">=3.9", ) Or in pyproject.toml: [project] name = "mypkg" version = "2.0.0" requires-python = ">=3.9" What you can do is go back to the source code of v1.2.3 and make the necessary deprecation / error messages, then publish another release with a version string v which satisfies this inequality >>> from packaging.version import Version >>> Version("1.2.3") < Version(v) < Version("2.0.0") True For example, v="1.3" would do the trick here. So, checkout the revision from which v1.2.3 was generated (for git users this would probably mean git checkout v1.2.3 and then start a new branch), then make the metadata changes to the version number but leave the Python version requirement as is: from setuptools import setup setup( name="mypkg", version="1.3", python_requires=">=3.7", ) Or in pyproject.toml: [project] name = "mypkg" version = "1.3" requires-python = ">=3.7" If you don't know the revision from which 1.2.3 was produced, you could also just download the existing release and edit it manually, but it would be better to use a VCS for this part so that you can push up a tag for the new release as well. It is crucial that this new version is still compatible with Python 3.7, so leave the Requires-Python field as ">=3.7" here. It is not necessary for the new release to explicitly exclude 3.9+, like the question suggests. However, since the question asks about making a release "just for Python 3.7", I'll add that this is also possible by using Requires-Python value of ">=3.7,<3.8" where the comma signifies a logical AND. Personally, I recommend not to add any upper bound on the Python version, because solvers on 3.9+ will automatically choose the existing "more recent" (by version number) packages anyway, so adding an upper bound just complicates matters unnecessarily.
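For the "deprecation message and quit" part itself, here is a minimal sketch of what the top of the package's __init__.py could look like in that 1.3 release (the package name and wording are placeholders, and whether you raise or merely warn is up to you):
import sys

if sys.version_info < (3, 9):
    raise RuntimeError(
        "mypkg 1.3 is the final release supporting Python 3.7/3.8; "
        "the external data it fetches has changed format and newer releases "
        "require Python 3.9+. Please upgrade Python to keep receiving updates."
    )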
3
3
79,178,550
2024-11-11
https://stackoverflow.com/questions/79178550/python-polars-method-find-returns-an-incorrect-value-for-strings-when-using-u
The behavior of the str.find() method in polars differs from str.find() in pandas in Python. Is there a parameter for processing utf-8 characters? Or is it a bug? Example code in python: import polars as pl # Define a custom function that wraps the str.find() method def find_substring(s, substring): return int(s.find(substring)) # Test df df = pl.DataFrame({ "text": ["testтестword",None,''] }) # Apply the custom function to the "text" column using map_elements() substr = 'word' df = df.with_columns( pl.col('text').str.find(substr,literal=True,strict=True).alias('in_polars'), pl.col("text").map_elements(lambda s: find_substring(s, substr), return_dtype=pl.Int64).alias('find_check') ) print(df) Results: There is no parameter in the documentation for setting the character encoding. Using my function is a solution, but it's very slow. Can you suggest something faster and without map_elements? Thanks. pl.col("text").map_elements(lambda s: find_substring(s, substr), return_dtype=pl.Int64).alias('find_check')
There is an open issue on the Github tracker. https://github.com/pola-rs/polars/issues/14190 I think we should update the docs to make clear that we return byte offsets. As for the actual goal - it seems you want split a string into 2 parts and take the right hand side. You could use regex e.g. with .str.extract() df.with_columns(after = pl.col("url").str.extract(r"\?(.*)")) shape: (1, 2) ┌────────────────────────────────────┬──────────────┐ │ url ┆ after │ │ --- ┆ --- │ │ str ┆ str │ ╞════════════════════════════════════╪══════════════╡ │ https://тестword.com/?foo=bar&id=1 ┆ foo=bar&id=1 │ └────────────────────────────────────┴──────────────┘ .str.splitn() could be another option. df = pl.DataFrame({ "url": ["https://тестword.com/?foo=bar&id=1"] }) df.with_columns( pl.col("url").str.splitn("?", 2) .struct.rename_fields(["before", "after"]) .struct.unnest() ) shape: (1, 3) ┌────────────────────────────────────┬───────────────────────┬──────────────┐ │ url ┆ before ┆ after │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ str │ ╞════════════════════════════════════╪═══════════════════════╪══════════════╡ │ https://тестword.com/?foo=bar&id=1 ┆ https://тестword.com/ ┆ foo=bar&id=1 │ └────────────────────────────────────┴───────────────────────┴──────────────┘ It returns a Struct which we rename/unnest into columns. .struct.rename_fields() .struct.unnest()
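If you do need a character-based index rather than a split, one workaround sketch is to count the characters before the first literal match. Caveat: if the substring is absent this yields the full string length rather than null, so guard for that case:
import polars as pl

df = pl.DataFrame({"text": ["testтестword"]})
substr = "word"

df.with_columns(
    # length in characters of everything before the first occurrence of `substr`
    char_idx = pl.col("text").str.split(substr).list.first().str.len_chars()
)
# char_idx is 8 here (a character offset), whereas .str.find() returns the byte offset 12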
2
1
79,179,299
2024-11-11
https://stackoverflow.com/questions/79179299/split-dataframe-according-to-sub-lists-by-cutoff-value
I want to split the dataframe according to the sublists that given from diving a list into parts where the only value above a cutoff is the first. e.g. Cutoff = 3 [4,2,3,5,2,1,6,7] => [4,2,3], [5,2,1], [6], [7] I still need to keep track of the other fields in the dataframe. I should get the given result from this df data = { "uid": ["Alice", "Bob", "Charlie"], "time_deltas": [ [4,2, 3], [1,1, 4, 8, 3], [1,1, 7, 3, 2], ], "other_field": [["x", "y", "z"], ["x", "y", "z", "x", "y"], ["x", "y", "z", "x", "y"]] } df = pl.DataFrame(data) cutoff = 3 # Split the time_delta column into lists where the maximum time_delta (excluding the first value) is greater than the cutoff. Ensure that the other_field column is also split accordingly. # Expected Output # +--------+----------------------+----------------------+ # | uid | time_deltas | other_field | # | --- | --- | --- | # | str | list[duration[ms]] | list[str] | # +--------+----------------------+----------------------+ # | Alice | [4, 2, 3] | ["x", "y", "z"] | # | Bob | [1, 1] | ["x", "y"] | # | Bob | [4] | ["z"] | # | Bob | [8, 3] | ["x", "y"] | # | Charlie| [1, 1] | ["x", "y"] | # | Charlie| [7,3,2] | ["z", "x", "y"] |
If you explode/flatten the lists, you can take the .cum_sum() of the comparison .over() each "group" to assign a group id/index that identifies each sublist.
(df.explode("time_deltas", "other_field")
   .with_columns(bool = pl.col("time_deltas") > cutoff)
   .with_columns(index = pl.col("bool").cum_sum().over("uid"))
)
shape: (13, 5)
┌─────────┬─────────────┬─────────────┬───────┬───────┐
│ uid     ┆ time_deltas ┆ other_field ┆ bool  ┆ index │
│ ---     ┆ ---         ┆ ---         ┆ ---   ┆ ---   │
│ str     ┆ i64         ┆ str         ┆ bool  ┆ u32   │
╞═════════╪═════════════╪═════════════╪═══════╪═══════╡
│ Alice   ┆ 4           ┆ x           ┆ true  ┆ 1     │
│ Alice   ┆ 2           ┆ y           ┆ false ┆ 1     │
│ Alice   ┆ 3           ┆ z           ┆ false ┆ 1     │
│ Bob     ┆ 1           ┆ x           ┆ false ┆ 0     │
│ Bob     ┆ 1           ┆ y           ┆ false ┆ 0     │
│ Bob     ┆ 4           ┆ z           ┆ true  ┆ 1     │
│ Bob     ┆ 8           ┆ x           ┆ true  ┆ 2     │
│ Bob     ┆ 3           ┆ y           ┆ false ┆ 2     │
│ Charlie ┆ 1           ┆ x           ┆ false ┆ 0     │
│ Charlie ┆ 1           ┆ y           ┆ false ┆ 0     │
│ Charlie ┆ 7           ┆ z           ┆ true  ┆ 1     │
│ Charlie ┆ 3           ┆ x           ┆ false ┆ 1     │
│ Charlie ┆ 2           ┆ y           ┆ false ┆ 1     │
└─────────┴─────────────┴─────────────┴───────┴───────┘
If uid is not unique, you could .with_row_index() first, before exploding, and use that in the .over().
You can then use .group_by() to reassemble the lists.
(df.explode("time_deltas", "other_field")
   .with_columns(index = (pl.col("time_deltas") > cutoff).cum_sum().over("uid"))
   .group_by("index", "uid", maintain_order=True)
   .all()
)
shape: (6, 4)
┌───────┬─────────┬─────────────┬─────────────────┐
│ index ┆ uid     ┆ time_deltas ┆ other_field     │
│ ---   ┆ ---     ┆ ---         ┆ ---             │
│ u32   ┆ str     ┆ list[i64]   ┆ list[str]       │
╞═══════╪═════════╪═════════════╪═════════════════╡
│ 1     ┆ Alice   ┆ [4, 2, 3]   ┆ ["x", "y", "z"] │
│ 0     ┆ Bob     ┆ [1, 1]      ┆ ["x", "y"]      │
│ 1     ┆ Bob     ┆ [4]         ┆ ["z"]           │
│ 2     ┆ Bob     ┆ [8, 3]      ┆ ["x", "y"]      │
│ 0     ┆ Charlie ┆ [1, 1]      ┆ ["x", "y"]      │
│ 1     ┆ Charlie ┆ [7, 3, 2]   ┆ ["z", "x", "y"] │
└───────┴─────────┴─────────────┴─────────────────┘
2
1
79,177,302
2024-11-11
https://stackoverflow.com/questions/79177302/polars-read-excel-incorrectly-adds-suffix-to-column-names
I am using polars v1.12.0 to read data from an Excel sheet. pl.read_excel( "test.xlsx", sheet_name="test", has_header=True, columns=list(range(30, 49)) ) The requested columns are being imported correctly. However, polars adds a suffix _1 to every column name. There's one column header where a _3 has been added. In the requested columns, all column headers are unique, i.e. no duplicates. However, columns before this import area do have the same values. For example, the header that has been suffixed _3 does occur two times before my import area. It looks like polars is scanning all column headers from column "A" starting, no matter if I start to read from column "AE". I am wondering what is going on? Is this a bug or did I make a mistake?
I don't think you have made a mistake; the behaviour just seems to differ wildly between the different engines, and none of them does what you want to do. I have the following excel:
alpha | bravo | charlie | charlie | delta | echo | foxtrot | alfa
1     | a     | 1       | a       | 1     | a    | 1       | a
For the following code snippet:
df = pl.read_excel(
    "test.xlsx",
    sheet_name="test",
    has_header=True,
    columns=[3, 4, 5, 6, 7],
)
Here's what I get when using different excel engines:
Calamine (default)
┌───────────┬───────┬──────┬─────────┬────────┐
│ charlie_1 ┆ delta ┆ echo ┆ foxtrot ┆ alfa_1 │
│ ---       ┆ ---   ┆ ---  ┆ ---     ┆ ---    │
│ str       ┆ i64   ┆ str  ┆ i64     ┆ str    │
╞═══════════╪═══════╪══════╪═════════╪════════╡
│ a         ┆ 1     ┆ a    ┆ 1       ┆ a      │
└───────────┴───────┴──────┴─────────┴────────┘
So the sequence seems to be:
Read all the columns, adding a postfix to duplicates
Select only the columns mentioned in columns
Xlsx2csv (previous default)
┌─────────┬───────────────────┐
│ foxtrot ┆ alfa_duplicated_0 │
│ ---     ┆ ---               │
│ i64     ┆ str               │
╞═════════╪═══════════════════╡
│ 1       ┆ a                 │
└─────────┴───────────────────┘
Yes, really, it's dropping charlie, delta and echo completely. I think that's a straight-up bug. If you start the indexing from 0 and list all the columns, it shows all columns, but if you start from 1, it already removes alfa AND bravo.
Openpyxl
┌───────┬──────┬─────────┐
│ delta ┆ echo ┆ foxtrot │
│ ---   ┆ ---  ┆ ---     │
│ i64   ┆ str  ┆ i64     │
╞═══════╪══════╪═════════╡
│ 1     ┆ a    ┆ 1       │
└───────┴──────┴─────────┘
This one first drops all the columns with duplicate names and then takes the column indices defined in columns. Strictly speaking, it isn't even dropping them: it first reads them in, keeping the column ordering, then overrides the data of duplicate-named columns with the last column of the same name, and then filters columns based on the indices in columns without taking the duplicate names into account. The columns 3, 4 and 5 are now delta, echo and foxtrot, and 6 and 7 point nowhere.
What to do
So, based on this, your best bet I think is to use the default calamine engine and then manually override the columns:
df.columns = ["charlie", "delta", "echo", "foxtrot", "alfa"]
As for your dilemma in the comments about stacking the differently named columns, this "works", but only when you know the column names and the schema beforehand. It is also stupidly ugly and probably not very performant. I hope there are better ways. In any case, a solution along the lines of reading all excel columns and then manipulating the df is probably easier than trying to manipulate the reader.
For excel with columns alfa | bravo | charlie | alfa | bravo | charlie
import polars as pl

df = pl.read_excel(
    "test.xlsx",
    sheet_name="test",
    has_header=True,
)

new_df = pl.DataFrame(
    schema={"alfa": pl.Int64, "bravo": pl.Int64, "charlie": pl.Int64})

n = 3
for i in range(0, len(df.columns) // n):
    slice = df.select(pl.nth(range(i * n, i * n + n)))
    slice.columns = ["alfa", "bravo", "charlie"]
    new_df = new_df.vstack(
        slice,
    )
1
2
79,178,919
2024-11-11
https://stackoverflow.com/questions/79178919/count-elements-in-a-row-and-create-column-counter-in-pandas
I have created the following pandas dataframe: import pandas as pd ds = {'col1' : ['A','A','B','C','C','D'], 'col2' : ['A','B','C','D','D','A']} df = pd.DataFrame(data=ds) The dataframe looks like this: print(df) col1 col2 0 A A 1 A B 2 B C 3 C D 4 C D 5 D A The possible values in col1 and col2 are A, B, C and D. I need to create 4 new columns, called: countA: it counts how many A are in each row / record countB: it counts how many B are in each row / record countC: it counts how many C are in each row / record countD: it counts how many D are in each row / record So, from the example above, the resulting dataframe would look like this: Can anyone help me please?
Here is a way using pd.get_dummies() df.join(pd.get_dummies(df,prefix='',prefix_sep='').T.groupby(level=0).sum().T.rename('count{}'.format,axis=1)) and here is a way using value_counts() df.join(df.stack().groupby(level=0).value_counts().unstack(fill_value = 0).rename('count{}'.format,axis=1)) Output: col1 col2 countA countB countC countD 0 A A 2 0 0 0 1 A B 1 1 0 0 2 B C 0 1 1 0 3 C D 0 0 1 1 4 C D 0 0 1 1 5 D A 1 0 0 1
3
3
79,176,006
2024-11-10
https://stackoverflow.com/questions/79176006/why-are-parameterized-queries-not-possible-with-do-end
The following works fine: conn = psycopg.connect(self.conn.params.conn_str) cur = conn.cursor() cur.execute(""" SELECT 2, %s; """, (1,), ) But inside a DO: cur.execute(""" DO $$ BEGIN SELECT 2, %s; END$$; """, (1,), ) it causes psycopg.errors.UndefinedParameter: there is no parameter $1 LINE 1: SELECT 2, $1 ^ QUERY: SELECT 2, $1 CONTEXT: PL/pgSQL function inline_code_block line 3 at SQL statement Is this expected?
import psycopg
from psycopg import sql

con = psycopg.connect("postgresql://postgres:[email protected]:5432/test")
cur = con.cursor()
cur.execute(sql.SQL("""
DO $$
BEGIN
    PERFORM 2, {};
END$$;
""").format(sql.Literal(1))
)
This uses the sql module of psycopg to build a dynamic SQL statement with proper escaping. DO can't return anything, so you will not get any result back from the block.
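If the reason for reaching for DO is that you eventually want both parameters and a result, another workaround sketch is to wrap the logic in a temporary function, since functions do accept arguments and can return rows (the function name my_block is made up here):
cur.execute("""
    CREATE OR REPLACE FUNCTION pg_temp.my_block(val int)
    RETURNS TABLE (a int, b int)
    LANGUAGE plpgsql AS $$
    BEGIN
        RETURN QUERY SELECT 2, val;
    END$$;
""")
cur.execute("SELECT * FROM pg_temp.my_block(%s);", (1,))
print(cur.fetchall())  # [(2, 1)]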
3
2
79,177,845
2024-11-11
https://stackoverflow.com/questions/79177845/matplotlib-patches-rectangle-produces-rectangles-with-unequal-size-of-linewidth
I am using matplotlib to plot the columns of a matrix as separate rectangles using matplotlib.patches.Rectangle. Somehow, all the "inner" lines are wider than the "outer" lines? Does somebody know what's going on here? Is this related to this Github issue? Here's an MRE: import numpy as np import matplotlib.pyplot as plt import seaborn as sns import matplotlib.patches as patches # set seed np.random.seed(42) # define number of cols and rows num_rows = 5 num_cols = 5 # define gap size between matrix columns column_gap = 0.3 # define linewidth linewidth = 5 # Determine the width and height of each square cell cell_size = 1 # Set the side length for each square cell # Initialize the matrix matrix = np.random.rand(num_rows, num_cols) # Create the plot fig, ax = plt.subplots(figsize=(8,6)) # Create a seaborn color palette (RdYlBu) and reverse it palette = sns.color_palette("RdYlBu", as_cmap=True).reversed() # Plot each cell individually with column gaps for i in range(num_rows): for j in range(num_cols): # Compute the color for the cell color = palette(matrix[i, j]) if column_gap > 0: edgecolor = 'black' else: edgecolor = None # Add a rectangle patch with gaps only in the x-direction rect = patches.Rectangle( (j * (cell_size + column_gap), i * cell_size), # x position with gap applied to columns only cell_size, # width of each cell cell_size, # height of each cell facecolor=color, edgecolor=edgecolor, linewidth=linewidth ) ax.add_patch(rect) if column_gap > 0: # Remove the default grid lines and ticks ax.spines[:].set_visible(False) # Set axis limits to fit all cells ax.set_xlim(0, num_cols * (cell_size + column_gap) - column_gap) ax.set_ylim(0, num_rows * cell_size) # Disable x and y ticks ax.set_xticks([]) ax.set_yticks([]) fig.show() which produces:
Your rectangles' edges are getting clipped by the axis boundaries. Add clip_on=False to Rectangle: rect = patches.Rectangle( (j * (cell_size + column_gap), i * cell_size), # x position with gap applied to columns only cell_size, # width of each cell cell_size, # height of each cell facecolor=color, edgecolor=edgecolor, linewidth=linewidth, clip_on=False, ) Output (small size for the demo): To better see what's going on, let's add some transparency to your rectangles and change the axis background color: ax.patch.set_facecolor('red')
3
2
79,177,384
2024-11-11
https://stackoverflow.com/questions/79177384/count-occurrences-of-each-type-of-event-within-a-time-window-in-pandas
I have a DataFrame with the following structure: event_timestamp: Timestamp of each event. event_type: Type of the event. I need to add a column for each unique event_type to count how many events of that event_type occurred within a 10ms window before each row's event_timestamp. data = { 'event_timestamp': [ '2024-02-01 08:02:09.065315961', '2024-02-01 08:02:09.125612099', '2024-02-01 08:02:09.160326512', '2024-02-01 08:02:09.540206541', '2024-02-01 08:02:09.571751697', '2024-02-01 08:02:09.571784060', '2024-02-01 08:02:09.574368029', '2024-02-01 08:02:09.574390737', '2024-02-01 08:02:09.578245099', '2024-02-01 08:02:10.077399943', '2024-02-01 08:02:10.077424252', '2024-02-01 08:02:10.081648527' ], 'event_type': [ 'A', 'B', 'A', 'A', 'C', 'B', 'A', 'C', 'B', 'A', 'C', 'B' ] } df = pd.DataFrame(data) df['event_timestamp'] = pd.to_datetime(df['event_timestamp']) For the above input, I want an output like this: event_timestamp event_type count_A count_B count_C 0 2024-02-01 08:02:09.065315961 A 0 0 0 1 2024-02-01 08:02:09.125612099 B 0 0 0 2 2024-02-01 08:02:09.160326512 A 0 0 0 3 2024-02-01 08:02:09.540206541 A 0 0 0 4 2024-02-01 08:02:09.571751697 C 0 0 0 5 2024-02-01 08:02:09.571784060 B 0 0 1 6 2024-02-01 08:02:09.574368029 A 0 1 1 7 2024-02-01 08:02:09.574390737 C 1 1 1 8 2024-02-01 08:02:09.578245099 B 1 1 2 9 2024-02-01 08:02:10.077399943 A 0 0 0 10 2024-02-01 08:02:10.077424252 C 1 0 0 11 2024-02-01 08:02:10.081648527 B 1 1 0 The columns count_A, count_B, and count_C represent the number of occurrences of event_type 'A', 'B', and 'C' that happened within a 10ms window before each row's event_timestamp. For example, for the row with event_timestamp 2024-02-01 08:02:09.065315961, we see: count_A is 1 because there was 1 event of type 'A' within the 10ms window before that timestamp. count_B is 0 and count_C is 0 because there were no events of type 'B' or 'C' in that window.
IIUC, you could produce the columns with get_dummies, then perform a rolling.sum on 10ms to get the counts, finally merge back to the original DataFrame: out = df.merge(pd .get_dummies(df['event_type']).add_prefix('count_') .set_axis(df['event_timestamp']).sort_index() .rolling('10ms').sum().convert_dtypes(), left_on='event_timestamp', right_index=True, ) Variant: out = df.merge(df .set_index('event_timestamp').sort_index() ['event_type'].str.get_dummies().add_prefix('count_') .rolling('10ms').sum().convert_dtypes(), left_on='event_timestamp', right_index=True, ) Output: event_timestamp event_type count_A count_B count_C 0 2024-02-01 08:02:09.065315961 A 1 0 0 1 2024-02-01 08:02:09.125612099 B 0 1 0 2 2024-02-01 08:02:09.160326512 A 1 0 0 3 2024-02-01 08:02:09.540206541 A 1 0 0 4 2024-02-01 08:02:09.571751697 C 0 0 1 5 2024-02-01 08:02:09.571784060 B 0 1 1 6 2024-02-01 08:02:09.574368029 A 1 1 1 7 2024-02-01 08:02:09.574390737 C 1 1 2 8 2024-02-01 08:02:09.578245099 B 1 2 2 9 2024-02-01 08:02:10.077399943 A 1 0 0 10 2024-02-01 08:02:10.077424252 C 1 0 1 11 2024-02-01 08:02:10.081648527 B 1 1 1 And if you want only the previous: tmp = (pd.get_dummies(df['event_type']).add_prefix('count_') .set_axis(df['event_timestamp']).sort_index() ) out = df.merge(tmp.rolling('10ms').sum().sub(tmp).convert_dtypes(), left_on='event_timestamp', right_index=True, ) Output: event_timestamp event_type count_A count_B count_C 0 2024-02-01 08:02:09.065315961 A 0 0 0 1 2024-02-01 08:02:09.125612099 B 0 0 0 2 2024-02-01 08:02:09.160326512 A 0 0 0 3 2024-02-01 08:02:09.540206541 A 0 0 0 4 2024-02-01 08:02:09.571751697 C 0 0 0 5 2024-02-01 08:02:09.571784060 B 0 0 1 6 2024-02-01 08:02:09.574368029 A 0 1 1 7 2024-02-01 08:02:09.574390737 C 1 1 1 8 2024-02-01 08:02:09.578245099 B 1 1 2 9 2024-02-01 08:02:10.077399943 A 0 0 0 10 2024-02-01 08:02:10.077424252 C 1 0 0 11 2024-02-01 08:02:10.081648527 B 1 0 1
2
1
79,176,155
2024-11-11
https://stackoverflow.com/questions/79176155/how-to-bump-python-package-version-using-uv
Poetry has the version command to increment a package version. Does uv package manager has anything similar?
Currently the uv package manager does not have a built-in command to bump package versions like poetry's version command. You can manually edit pyproject.toml or automate it with a script. Note that uv projects keep the version under the standard [project] table of pyproject.toml (not [tool.poetry]). For example:
import toml
from typing import Literal

def bump_version(file_path: str, part: Literal["major", "minor", "patch"] = "patch") -> None:
    with open(file_path, "r") as f:
        pyproject = toml.load(f)

    version = pyproject["project"]["version"]
    major, minor, patch = map(int, version.split("."))

    if part == "major":
        major += 1
        minor = 0
        patch = 0
    elif part == "minor":
        minor += 1
        patch = 0
    elif part == "patch":
        patch += 1
    else:
        raise ValueError("Invalid part value. Choose 'major', 'minor', or 'patch'.")

    pyproject["project"]["version"] = f"{major}.{minor}.{patch}"

    with open(file_path, "w") as f:
        toml.dump(pyproject, f)

    print(f"Version bumped to {major}.{minor}.{patch}")
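A possible call, assuming the script sits next to your pyproject.toml (the file name and part value are just examples):
bump_version("pyproject.toml", part="minor")  # e.g. 0.4.2 -> 0.5.0
If you keep a uv.lock, you may want to re-run uv lock (or uv sync) afterwards so the lockfile picks up the new version.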
4
3
79,175,533
2024-11-10
https://stackoverflow.com/questions/79175533/rolling-sum-using-duckdbs-python-relational-api
Say I have data = {'id': [1, 1, 1, 2, 2, 2], 'd': [1, 2, 3, 1, 2, 3], 'sales': [1, 4, 2, 3, 1, 2]} I want to compute a rolling sum with window of 2 partitioned by 'id' ordered by 'd' Using SQL I can do: duckdb.sql(""" select *, sum(sales) over w as rolling_sales from df window w as (partition by id order by d rows between 1 preceding and current row) """) Out[21]: ┌───────┬───────┬───────┬───────────────┐ │ id │ d │ sales │ rolling_sales │ │ int64 │ int64 │ int64 │ int128 │ ├───────┼───────┼───────┼───────────────┤ │ 1 │ 1 │ 1 │ 1 │ │ 1 │ 2 │ 4 │ 5 │ │ 1 │ 3 │ 2 │ 6 │ │ 2 │ 1 │ 3 │ 3 │ │ 2 │ 2 │ 1 │ 4 │ │ 2 │ 3 │ 2 │ 3 │ └───────┴───────┴───────┴───────────────┘ This works great, but how can I do it using the Python Relational API? I've got as far as rel = duckdb.sql('select * from df') rel.sum( 'sales', projected_columns='*', window_spec='over (partition by id order by d rows between 1 preceding and current row)' ) which gives ┌───────────────────────────────────────────────────────────────────────────────────────┐ │ sum(sales) OVER (PARTITION BY id ORDER BY d ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) │ │ int128 │ ├───────────────────────────────────────────────────────────────────────────────────────┤ │ 3 │ │ 4 │ │ 3 │ │ 1 │ │ 5 │ │ 6 │ └───────────────────────────────────────────────────────────────────────────────────────┘ This is close, but it's not quite right - how do I get the name of the last column to be rolling_sales?
I'm not an expert in DuckDB relational API but this works: rel.sum( 'sales', projected_columns='*', window_spec='over (partition by id order by d rows between 1 preceding and current row) as rolling_sales' ) ┌───────┬───────┬───────┬───────────────┐ │ id │ d │ sales │ rolling_sales │ │ int64 │ int64 │ int64 │ int128 │ ├───────┼───────┼───────┼───────────────┤ │ 1 │ 1 │ 1 │ 1 │ │ 1 │ 2 │ 4 │ 5 │ │ 1 │ 3 │ 2 │ 6 │ │ 2 │ 1 │ 3 │ 3 │ │ 2 │ 2 │ 1 │ 4 │ │ 2 │ 3 │ 2 │ 3 │ └───────┴───────┴───────┴───────────────┘
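Another sketch of the same thing, if you prefer a single projection on the relation. project() accepts SQL expression strings, and as far as I can tell window functions are allowed there too, but treat that as an assumption worth verifying:
rel.project(
    "*, sum(sales) over (partition by id order by d "
    "rows between 1 preceding and current row) as rolling_sales"
)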
3
1
79,175,860
2024-11-10
https://stackoverflow.com/questions/79175860/invert-colors-of-a-mask-in-pygame
I have a pygame mask of a text surface that I would like to invert, in the sense that the black becomes white, and the white becomes black. The black is the transparent part of the text, and the white is the non-transparent part, but I'd like it flipped so I can make a text outline. I can't really figure it out. If anyone knows, it would be much appreciated Ill attach the code that generates and blits the mask :) location_text = self.font.render(self.location, True, self.text_color) pos = ((screen_width - location_text.get_width()) / 2, 320) mask = pygame.mask.from_surface(location_text) mask_surf = mask.to_surface() mask_surf.set_colorkey((0, 0, 0)) screen.blit(mask_surf, (pos[0], pos[1]))
There are different possibilities. You can invert the mask with invert. mask = pygame.mask.from_surface(location_text) mask.invert() mask_surf = mask.to_surface() mask_surf.set_colorkey((255, 255, 255)) You can also set the colors when you turn the mask into a surface. Make the setcolor black and the unsetcolor white (see to_surface()): mask = pygame.mask.from_surface(location_text) mask_surf = mask.to_surface(setcolor=(0, 0, 0, 0), unsetcolor=(255, 255, 255, 255))
3
3
79,175,794
2024-11-10
https://stackoverflow.com/questions/79175794/is-a-lock-recommended-in-python-when-updating-a-bool-in-one-direction-in-a-threa
Is a lock necessary in a situation where thread 1 checks if a bool has flipped from False to True periodically within a loop, the bool being updated in thread 2? As I understand a bool is atomic in python, it should not be possibly for the bool to be incorrectly updated or take on a garbage value like in C++ for example. Only one thread will ever update the bool, and it will only ever go from False to True (there is no position where it switches back to False). As the check is done periodically and timing is not critical, even if thread 1 reads the old value (False) while thread 2 is updating it to True, thread 1 should still be able to read the new value once thread 2 has finished on the next iteration of the loop. Is this an incorrect understanding of this specific situation? I understand it is generally best to use locks, and I don't mind adding one, but I wonder if it would actually just be introducing more potential problems (like deadlocks) without actually solving any problem except maybe saving thread 1 from looping for one more iteration. This is the loop in question. The two variables are being updated in another thread, and read in this one. Both are bool values starting as False and being set to True when the other threads have finished. while True: time.sleep(0.2) if self.joined_queries: self.stop() break if self.timeout: self.stop() break There were some other threads but the answers were very general without going into specifics much, and the situation was a bit different. I would like to understand why this is a bad idea or not! This piece of documentation makes me believe it is not a problem, but I am not really sure if I am interpreting it correctly: http://www.python.org/doc/faq/library/#what-kinds-of-global-value-mutation-are-thread-safe
As I understand a bool is atomic in python, it should not be possibly for the bool to be incorrectly updated or take on a garbage value like in C++ for example

Practically, a bool variable in CPython is going to be a pointer to one of the two objects Py_True or Py_False. The implementation details shouldn't matter, but it's relevant to know that a bool in Python will be exactly as atomic as a pointer in C. In a CPython build without the GIL, this will be a data race and hence Undefined Behaviour according to the underlying C code.

This piece of documentation makes me believe it is not a problem

The page you linked explicitly says both that this sort of code is safe due to the GIL, and that work on removing the GIL to improve performance is ongoing. It seems unwise to write new code that depends on it.

Note that C's Undefined Behaviour may be perfectly well-defined on a given hardware platform, it's just non-portable. However, the code will be brittle in the face of newer interpreters and platforms.
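A minimal sketch of the more conventional approach, which sidesteps the question entirely: use threading.Event instead of a bare bool (the Worker class and the stop() body are placeholders standing in for the code in the question):
import threading

class Worker:
    def __init__(self):
        self.joined_queries = threading.Event()
        self.timeout = threading.Event()

    def stop(self):
        print("stopping")

    def run(self):
        while True:
            # wait() blocks for up to 0.2s and returns True as soon as the event is set,
            # so there is no reliance on the atomicity of a plain attribute
            if self.joined_queries.wait(timeout=0.2) or self.timeout.is_set():
                self.stop()
                break
The other threads then simply call worker.joined_queries.set() (or worker.timeout.set()) when they finish.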
1
2
79,175,123
2024-11-10
https://stackoverflow.com/questions/79175123/adding-previous-rows-to-generate-another-row-in-pandas
I am trying to solve a problem in my data-frame df.head() 0 Key value 1 10 500 2 11 500 3 12 600 4 12 800 5 13 1000 6 13 1200 . . . 200++ output is to put the values in the above data-frame or have another data-frame with all the values of above with additional info show as below. Expected Output: 0 Key value 1 10 500 2 11 500 3 12 600 4 12 800 5 12 1400 -----> Addition of 800+600 as keys are same 6 13 1000 7 13 1200 8 13 2200 -----> Addition of 1000+12000 as keys are same . . 200++ I am just starting out in Pandas, any help will be appreciate.
A possible solution, which takes the following steps: First, groupby is used to organize df by Key, and then filter is applied with a lambda function that selects only groups with more than one row, ensuring sums are computed only for repeated Key values. Next, this filtered group is re-aggregated with groupby to calculate the sum of value within each group of Key values using sum. The concatenated dataframe is then organized by Key and value columns through sort_values, with ignore_index=True to reset indexing. (pd.concat([ df, df.groupby('Key').filter(lambda x: len(x) > 1) .groupby('Key', as_index=False)['value'].sum()]) .sort_values('Key', ignore_index=True)) Another possible solution: pd.concat( [df, df.groupby('Key', as_index=False)['value'] .apply(lambda x: None if len(x) == 1 else sum(x)) .dropna()]).sort_values('Key', ignore_index=True) Output: Key value 0 10 500 1 11 500 2 12 600 3 12 800 4 12 1400 5 13 1000 6 13 1200 7 13 2200
1
2
79,174,726
2024-11-10
https://stackoverflow.com/questions/79174726/is-there-a-way-to-show-which-line-of-a-cell-is-being-proccesed-in-jupyter-lab
I want to know if there's a way to show an indicator or something that tells me at which line my Jupyter Lab code is while executing. Google Colab does this with a little green arrow next to the line (see image below), and I'm wondering if there's something similar for JL.
Not quite like the way that corporation adapted the open source product, but some of these options may get you to what you want....

JupyterLab lets you attach a console to monitor everything as it runs. You want to make sure you activate Show All Kernel Activity. See here, here, and here. See more about JupyterLab's code console in the documentation here.

Now that still doesn't quite do what you want, but you can add a shortcut that lets you specify lines to run, to allow you to narrow the running code to what you specify. See 'Feature Request - Ability to Split and Independently Execute Sections of a Single Cell'.

Alternatively... It's not the same thing, but related. You can use the Visual Debugger built into JupyterLab to step through the call stack after you set at least one breakpoint. It will show the variable values as they change. See here for more about how it relates to what you'd like to do, and the official documentation for the Visual Debugger is here.

Documenting related posts beyond references in this reply:
Jupyter Discourse post 'Line currently being executed' (nothing posted there at this time, but maybe it will be updated down the line...)

Also related:
'How to run a single line or selected code in a Jupyter Notebook or JupyterLab cell?'
'Feature Request - Ability to Split and Independently Execute Sections of a Single Cell'

There's also a log console, see here.
1
2
79,173,053
2024-11-9
https://stackoverflow.com/questions/79173053/how-to-convert-character-indices-to-bert-token-indices
I am working with a question-answer dataset UCLNLP/adversarial_qa. from datasets import load_dataset ds = load_dataset("UCLNLP/adversarial_qa", "adversarialQA") How do I map character-based answer indices to token-based indices after tokenizing the context and question together using a tokenizer like BERT. Here's an example row from my dataset: d0 = ds['train'][0] d0 {'id': '7ba1e8f4261d3170fcf42e84a81dd749116fae95', 'title': 'Brain', 'context': 'Another approach to brain function is to examine the consequences of damage to specific brain areas. Even though it is protected by the skull and meninges, surrounded by cerebrospinal fluid, and isolated from the bloodstream by the blood–brain barrier, the delicate nature of the brain makes it vulnerable to numerous diseases and several types of damage. In humans, the effects of strokes and other types of brain damage have been a key source of information about brain function. Because there is no ability to experimentally control the nature of the damage, however, this information is often difficult to interpret. In animal studies, most commonly involving rats, it is possible to use electrodes or locally injected chemicals to produce precise patterns of damage and then examine the consequences for behavior.', 'question': 'What sare the benifts of the blood brain barrir?', 'answers': {'text': ['isolated from the bloodstream'], 'answer_start': [195]}, 'metadata': {'split': 'train', 'model_in_the_loop': 'Combined'}} After tokenization, the answer indices are 56 and 16: from transformers import BertTokenizerFast bert_tokenizer = BertTokenizerFast.from_pretrained('bert-large-uncased', return_token_type_ids=True) bert_tokenizer.decode(bert_tokenizer.encode(d0['question'], d0['context'])[56:61]) 'isolated from the bloodstream' I want to create a new dataset with the answer's token indices, e.g., 56 ad 60. This is from a linkedin learning class. The instructor did the conversion and created the csv file but he did not share it or the code to do that. This is the expected result:
You should encode both the question and context, locate the token span for the answer within the tokenized context, and update the dataset with the token-level indices. The following function does the above for you: def get_token_indices(example): # Tokenize with `return_offsets_mapping=True` to get character offsets for each token encoded = tokenizer( example['question'], example['context'], return_offsets_mapping=True ) # Find character start and end from the original answer char_start = example['answers']['answer_start'][0] char_end = char_start + len(example['answers']['text'][0]) # Identify token indices for the answer start_token_idx = None end_token_idx = None for i, (start, end) in enumerate(encoded['offset_mapping']): if start <= char_start < end: start_token_idx = i if start < char_end <= end: end_token_idx = i break example['answer_start_token_idx'] = start_token_idx example['answer_end_token_idx'] = end_token_idx return example Here's how you can use and test this function: ds = load_dataset("UCLNLP/adversarial_qa", "adversarialQA") tokenizer = BertTokenizerFast.from_pretrained('bert-large-uncased', return_token_type_ids=True) tokenized_ds = ds['train'].map(get_token_indices) # Example d0_tokenized = tokenized_ds[0] print("Tokenized start index:", d0_tokenized['answer_start_token_idx']) print("Tokenized end index:", d0_tokenized['answer_end_token_idx']) answer_tokens = tokenizer.decode( tokenizer.encode(d0_tokenized['question'], d0_tokenized['context'])[d0_tokenized['answer_start_token_idx']:d0_tokenized['answer_end_token_idx']+1] ) print("Tokenized answer:", answer_tokens) Output: Tokenized start index: 56 Tokenized end index: 60 Tokenized answer: isolated from the bloodstream
2
2
79,164,983
2024-11-7
https://stackoverflow.com/questions/79164983/numerically-integrating-signals-with-absolute-value
Suppose I have a numpy s array of acceleration values representing some signal sampled at a fixed rate dt. I want to compute the cumulative absolute velocity, i.e. np.trapz(np.abs(s), dx=dt). This is great except if dt is "large" (e.g. 0.01) and the signal s is both long and crossing between positive and negative values frequently (an unfortunately common occurrence), an error is accumulated from the fact that taking |s| drops the information about the original sign of s. See the picture for a better idea of what this error actually looks like. I have some custom code that can correctly account for this error by creating a modified trapezium rule with numba, but there are other very similar functions that I need to implement doing things like np.trapz(np.square(s), dx=dt). Is there an off the shelf solution for this sort of numerical integral that: Can be reused for both integrals np.trapz(np.square(s), dx=dt) and np.trapz(np.abs(s), dx=dt), etc... Is ideally vectorised so that integration can be done for tens of thousands of signals at once in a reasonable time? For the record, the following parallel numba code is what I am using to integrate the signals @numba.njit(parallel=True) def cav_integrate( waveform: npt.NDArray[np.float32], dt: float ) -> npt.NDArray[np.float32]: """Compute the Cumulative Absolute Velocity (CAV) of a waveform.""" cav = np.zeros((waveform.shape[0],), dtype=np.float32) for i in range(waveform.shape[0]): for j in range(waveform.shape[1] - 1): if np.sign(waveform[i, j]) * np.sign(waveform[i, j + 1]) >= 0: cav[i] += dt / 2 * (np.abs(waveform[i, j]) + np.abs(waveform[i, j + 1])) else: slope = (waveform[i, j + 1] - waveform[i, j]) / dt x0 = -waveform[i, j] / slope cav[i] += x0 / 2 * np.abs(waveform[i, j]) + (dt - x0) / 2 * np.abs( waveform[i, j + 1] ) return cav Example Data I have uploaded a small broadband ground motion simulation to a dropbox link (approx. 91MiB) for testing. This data comes from a finite difference simulation of a recent earthquake near Wellington, New Zealand plus some empirically derived high-frequency noise. The file is an HDF5 containing some station data (irrelevant for our purposes), and simulation waveforms in the "waveforms" key. The array has shape (number of stations, timesteps, components) = (1433, 5876, 3). The 1d numpy array waveform[i, :, j] is the simulated acceleration for the ith station in the jth component. We need to compute the cumulative absolute velocity (CAV) for each component and each station independently. 
The benchmark code to do this can be found below: import time import h5py import numba import numpy as np import numpy.typing as npt broadband_input_file = h5py.File("broadband.h5", "r") # Load the entire dataset into memory so that the first method is not arbitrarily slowed down by file I/O waveforms = np.array(broadband_input_file["waveforms"]) dt = 0.01 start = time.process_time() cav_naive = np.trapz(np.abs(waveforms), dx=dt, axis=1) print(f"CAV naive time: {time.process_time() - start}") @numba.njit def cav_integrate( waveform: npt.NDArray[np.float32], dt: float ) -> npt.NDArray[np.float32]: """Compute the Cumulative Absolute Velocity (CAV) of a waveform.""" cav = np.zeros((waveform.shape[0], waveform.shape[-1]), dtype=np.float32) for c in range(waveform.shape[-1]): for i in range(waveform.shape[0]): for j in range(waveform.shape[1] - 1): if np.sign(waveform[i, j, c]) * np.sign(waveform[i, j + 1, c]) >= 0: cav[i, c] += ( dt / 2 * (np.abs(waveform[i, j, c]) + np.abs(waveform[i, j + 1, c])) ) else: slope = (waveform[i, j + 1, c] - waveform[i, j, c]) / dt x0 = -waveform[i, j, c] / slope cav[i, c] += x0 / 2 * np.abs(waveform[i, j, c]) + ( dt - x0 ) / 2 * np.abs(waveform[i, j + 1, c]) return cav # Warm up the numba compilation cache _ = cav_integrate(waveforms, dt) start = time.process_time() cav_bespoke = cav_integrate(waveforms, dt) print(f"Custom CAV time: {time.process_time() - start}") print( f"Ratio naive CAV / custom CAV (0, 25, 50, 75, 100% quartiles): {np.percentile(cav_naive / cav_bespoke, [0, 25, 50, 75, 100])}" ) Which gives the following output CAV naive time: 0.14353649699999993 Custom CAV time: 0.11182449700000019 Ratio naive CAV / custom CAV (0, 25, 50, 75, 100% quartiles): [1.00607312 1.00999796 1.01163089 1.01318455 1.02221394] These differences are reasonably small, better examples of larger differences are shown in the comments. Some of the observed waveforms have 20-40% differences between the methods. Even 2% differences might be important for some of the researchers I support. Note also that the CAV calculation is done on a single thread for comparison, but I would parallelise both methods in reality for the largest waveform arrays (having 6 or 7x the stations and 10-20x the timesteps depending on the temporal resolution of the simulation). Funnily enough the parallel overhead for this small file makes cav_integrate slower than the naive approach if enabled. We actually do the CAV calculation for all linear combinations cos(theta) * waveform[i, :, 0] + sin(theta) * waveform[i, :, 1] where theta = 0, 1,...180° to obtain orientation independent measurements of CAV. This is part of the reason it needs to be fast.
This answer focuses more on the performance/vectorization aspects than on the numerical integration itself. Faster implementation Is ideally vectorised so that integration can be done for tens of thousands of signals at once in a reasonable time? Technically, Numba code that runs with njit (and without errors) is always vectorized based on Numpy's definition (a vectorized function is basically a natively compiled function). However, it can be made faster. The first thing to do is to use multiple threads so the code can benefit from multiple CPU cores. Funnily enough the parallel overhead for this small file makes cav_integrate slower than the naive approach if enabled. This is because there are two issues: process_time returns the sum of the system and user CPU time (i.e. the amount of parallel work) of the current process. It does not measure the wall-clock time. Thus, the benchmark is biased. You should use time() to measure the wall-clock time instead. Numba doesn't automatically parallelize loops. It only parallelizes some basic array operations; to parallelize loops, you need to use prange instead of range, otherwise the loop will be sequential (so the code of the question is not actually parallel). To efficiently parallelize the code, we should swap the i-based and c-based loops. Moreover, there are other things to consider when it comes to performance: you should avoid updating cav[i, c] and accumulate values in a local variable instead (be careful to use it only within the parallel loop). You should also be careful to avoid implicit conversions to 64-bit FP numbers because they are often more expensive and certainly not needed here since you operate on 32-bit FP numbers. For example, dt is a 64-bit number, so any operation involving it results in a 64-bit number. I think you can use a min-max check instead of a sign-product-based one (the former is more efficient and SIMD friendly). You should use math tricks to avoid some expensive mathematical computations like divisions (and avoid repeated computation, such as multiplying every term of a sum by the same constant). Finally, adapt the code to benefit from SIMD units and improve memory accesses (this can typically improve the scalability of the code): this is typically done by physically swapping the c and j axes (though this requires a different input layout). Here is the modified code considering all points except the last one (about SIMD and the memory layout): @numba.njit(parallel=True) def cav_integrate_opt( waveform: npt.NDArray[np.float32], dt: float ) -> npt.NDArray[np.float32]: """Compute the Cumulative Absolute Velocity (CAV) of a waveform.""" cav = np.zeros((waveform.shape[0], waveform.shape[-1]), dtype=np.float32) dtf = np.float32(dt) half = np.float32(0.5) for i in numba.prange(waveform.shape[0]): for c in range(waveform.shape[-1]): tmp = np.float32(0) for j in range(waveform.shape[1] - 1): v1 = waveform[i, j, c] v2 = waveform[i, j + 1, c] if min(v1, v2) >= 0 or max(v1, v2) <= 0: tmp += dtf * (np.abs(v1) + np.abs(v2)) else: inv_slope = dtf / (v2 - v1) x0 = -v1 * inv_slope tmp += x0 * np.abs(v1) + (dtf - x0) * np.abs(v2) cav[i, c] = tmp * half return cav Here are performance results on my AMD Ryzen 5700U CPU (with 8 cores): naive trapz (seq): 315 ms initial cav_integrate (seq): 244 ms optimized cav_integrate (par): 10 ms <----- The optimized implementation is 25 times faster than cav_integrate and 31 times faster than the naive approach. For better performance, please consider the last optimization point (more precisely about SIMD).
That being said, this can be a bit complex to perform here. It might require the else branch to be rarely executed (i.e. <5%) to be pretty efficient. More generic integration Can be reused for both integrals np.trapz(np.square(s), dx=dt) and np.trapz(np.abs(s), dx=dt), etc... Here are some thoughts: For np.trapz(np.abs(s), dx=dt), a solution consists in computing the minimum value of the signal, subtracting that minimum from the signal, computing np.trapz of the resulting adjusted signal, and finally correcting the result. This solution is more efficient than your current one because it can benefit from SIMD instructions. However, it does not work for np.square. A generic solution is to add new points close to the problematic area (thanks to an interpolation function). This solution is not optimal because it increases the computational time, and it is not numerically exact either (though using a lot of points should give a pretty accurate solution). You do not need to interpolate all points nor to generate a new array for the whole signal: you can do that line by line or even on the fly (a bit more complicated). This can save a lot of RAM and computation time. Another generic solution is to pass a generic function as a parameter to the Numba function so the sign-change case can be computed differently. However, this solution should be significantly slower than your specialized solution because it does not benefit from SIMD instructions and adds an expensive function call that can hardly be inlined. You can mix the last two solutions to build a generic solution which should still be faster once running with multiple threads and optimized like above. The idea is to add just one point where the curve crosses the line y=0 and split the integration in two parts. A linear interpolation should give results similar to cav_integrate_opt (if not even equal). Here is an example: @numba.njit(parallel=True) def cav_integrate_opt_generic(waveform, dt, fun): cav = np.zeros((waveform.shape[0], waveform.shape[-1]), dtype=np.float32) dtf = np.float32(dt) half = np.float32(0.5) for i in numba.prange(waveform.shape[0]): for c in range(waveform.shape[-1]): tmp = np.float32(0) for j in range(waveform.shape[1] - 1): v1 = waveform[i, j, c] v2 = waveform[i, j + 1, c] if min(v1, v2) < 0 and max(v1, v2) > 0: # Basic linear interp # Consider passing another generic function in # parameter to find roots if needed (more expensive). inv_slope = dtf / (v2 - v1) x0 = -v1 * inv_slope tmp += x0 * fun(v1) + (dtf - x0) * fun(v2) else: tmp += dtf * (fun(v1) + fun(v2)) cav[i, c] = tmp * half return cav # Needs to be wrapped in a Numba function for sake of performance # (so Numba can call it directly like a native C function) @numba.njit def numba_abs(y): return np.abs(y) # Note `cav_integrate_opt_generic` is recompiled for each different provided function. cav_bespoke = cav_integrate_opt_generic(waveforms, dt, numba_abs) If you want to do that using a higher-order interpolation and integration then you certainly need to consider more points and a generic function to find roots (which is certainly much more expensive when it is even possible to find analytical solutions). It turns out this more generic function is only 5~10% slower for np.abs on my machine. Results are the same for np.abs.
4
1
79,172,783
2024-11-9
https://stackoverflow.com/questions/79172783/polars-sql-case
Is this a bug, non-conformant behavior, or standardized behavior? A Polars SQL statement is calculating the average of values based on a condition. The CASE WHEN doesn't include an ELSE because those values should be ignored. Polars complains that an ELSE is required. If I include an ELSE, with no value, it's a syntax error. The solution is to use ELSE NULL. For comparison, duckdb doesn't require an ELSE. Should I open an issue on github? Is ELSE NULL the conforming solution? Or is duckdb giving me a break? SELECT AVG(CASE WHEN A <= B OR A <= C THEN D END) FROM df
Is this a bug, non-conformant behavior, or standardized behavior? Fun-fact: zero RDBMS today implement the ISO SQL specification de jure. In my mind the spec is both aspirational and something that no-one should actually conform to (because ISO SQL is just so horribly unergonomic, and the ISO itself can be better-described as an economic rent-seeking operation disguised as a standards organization). Editorials aside... I can cite the actual ISO SQL spec to answer your question because I actually spent my own money buying a criminally overpriced copy of it (and yes, alcohol was involved). Under US copyright's fair-use exception, I think I can get away with sharing a screenshot of the relevant spec pages: The CASE WHEN doesn't include an ELSE because those values should be ignored. Omitting an ELSE does not mean anything gets "ignored"; it's just shorthand for ELSE NULL. Quoteth the Holy Word of the Sacred ISO: 6.12 <case expression>, Syntax Rules: Point 4 (page 252 of the 2023 edition): If an <else clause> is not specified, then ELSE NULL is implicit Polars complains that an ELSE is required I found the bit in Polars that complains. and git blame located this commit which imbued Polars with support for CASE WHEN ELSE only recently in 2023, and it looks like the requirement for an ELSE branch is indeed a shortcoming given it's a hard, absolute requirement - but Polars has loads of other similar limitations: right in the same commit we also see errors like "CASE operand is not yet supported" (which rejects CASE operands classified as "some") and more besides seen in this related commit, such as an admission that their JOIN clauses support only = and AND (i.e. equijoins, but not other kinds of joins). ...so Polars' SQL support (as of 2023) is somewhat lacking. If I include an ELSE, with no value, it's a syntax error. The solution is to use ELSE NULL Correct. You should use an explicit CASE WHEN [...] ELSE NULL END. The ISO SQL spec says it's equivalent to a CASE expression that omits the ELSE NULL so you have nothing to worry about, especially because AVG will ignore NULLs: Quoteth the Ferengi robber-barons of the ISO: (emphasis mine; in the context of an expression like SELECT AVG( <value expression> ) FROM T1) 10.9 <aggregate function>, General Rules: Point 7 (page 814 of the 2023 edition): Let TX be the single-column table that is the result of applying the <value expression> to each row of T1 and eliminating null values. If one or more null values are eliminated, then a completion condition is raised: warning — null value eliminated in set function (01003). In conclusion: Polars' current SQL implementation is a fraction of a subset of a minority of a portion of the ISO SQL specification, with noticeable (but not unduly bothersome) deviations from the spec. Should I open an issue on github? Probably not: it's only a bug if the behaviour violates Polars' own project specifications or if they claim to closely or strictly implement ISO SQL (which they are not claiming, btw). Is ELSE NULL the conforming solution? Yes, and you should continue to use ELSE NULL and not worry about it. It's good-practice, imo anyway (I always explicitly specify ELSE NULL myself, unless it's unnecessarily verbose). Or is duckdb giving me a break? DuckDB is hardly conforming to the ISO SQL spec either; DuckDB explicitly says their implementation is based on Postgres' dialect and therefore is not based on ISO SQL.
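For what it's worth, a small sketch of the query with the explicit ELSE NULL, runnable as-is. This assumes a Polars version recent enough to provide DataFrame.sql, and the sample data is made up purely for illustration:

import polars as pl

# Made-up sample data, only to illustrate the query shape
df = pl.DataFrame({
    "A": [1, 5],
    "B": [2, 1],
    "C": [3, 1],
    "D": [10.0, 20.0],
})

# ELSE NULL is equivalent to omitting the ELSE per the spec, and AVG skips
# the NULLs, so only rows matching the condition contribute to the average.
out = df.sql("""
    SELECT AVG(CASE WHEN A <= B OR A <= C THEN D ELSE NULL END) AS avg_d
    FROM self
""")
print(out)  # avg_d is 10.0 for this sample: only the first row matches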
2
3
79,171,202
2024-11-8
https://stackoverflow.com/questions/79171202/jinja-templating-with-recursive-in-dict-doesnt-works
I'm stuck on a Jinja implementation problem. Here is my little Python script: path = Path(__file__).parent env = Environment( loader=FileSystemLoader(path / "templates") ) template = env.get_template("template1.rst") rendered = template.render(sample={"a": {"b": "c"}}) And here is my template for Jinja: .. toctree:: :maxdepth: 3 {% for k, v in sample.items() recursive %} - {{ k }}: {%- if v is string %} {{ v }} {%- else %} {{ loop(v) }} {%- endif -%} {%endfor%} The execution returns this error: File "/home/jaja/Bureau/coding/bac_a_sable/sphinx_test/templates/template1.rst", line 5, in top-level template code {% for k, v in sample.items() recursive %} ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/jaja/Bureau/coding/bac_a_sable/sphinx_test/templates/template1.rst", line 21, in template {{ loop(v) }} ^^^^^^^^^^^^^^^^^^ File "/home/jaja/Bureau/coding/bac_a_sable/sphinx_test/templates/template1.rst", line 5, in template {% for k, v in sample.items() recursive %} ^^^^^^^^^^^^^^^^^^^^^^^^^ ValueError: not enough values to unpack (expected 2, got 1) As the v value of the first loop is {"b": "c"}, I expected it to work, but it doesn't. Is Jinja unable to loop recursively over dictionaries?
immediate fix When you start the loop, you use sample.items() iterator - {% for k, v in sample.items() recursive %} ^^^^^^^^^^^^^^ When you recur, you are passing the dict itself - {%- else %} {{ loop(v) }} ^ Simply change this to - {%- else %} {{ loop(v.items()) }} ^^^^^^^^^ naive test An improvement to the entire loop would be to change the test to mapping, instead of string. {% for k, v in sample.items() recursive %} - {{ k }}: {%- if v is mapping %} {{ loop(v.items()) }} {%- else %} {{ v }} {%- endif -%} {%endfor%} This ensures that you only recur on dicts. In the original code, the else will recur any non-string input. If the data included a number, you would've encountered a different error. nested input, nested output Preserving the levels of nesting in the output can be challenging. Consider using the loop.depth helper to create the correct whitespace - {% for k, v in sample.items() recursive %} {{ " " * (loop.depth - 1) }}- {{ k }}: {%- if v is mapping %} {{- loop(v.items()) }} {%- else %} {{ v }} {%- endif -%} {%endfor%} Given sample input - sample = { 'a': { 'b': 1, 'c': { 'd': 2 }, 'e': 'f', }, } Output - a: - c: - d: 2 - b: 1 - e: f
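If you want to try the fixed template end to end without template files, here is a minimal self-contained sketch (the rendered whitespace may differ slightly from the listing above depending on trim/lstrip settings):

from jinja2 import Environment

template_src = """\
{% for k, v in sample.items() recursive %}
{{ " " * (loop.depth - 1) }}- {{ k }}:
{%- if v is mapping %}
{{- loop(v.items()) }}
{%- else %} {{ v }}
{%- endif -%}
{% endfor %}"""

# Render the recursive template against a nested dict
env = Environment()
template = env.from_string(template_src)
sample = {"a": {"b": 1, "c": {"d": 2}, "e": "f"}}
print(template.render(sample=sample))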
2
1
79,172,182
2024-11-9
https://stackoverflow.com/questions/79172182/why-does-sympy-perfect-power-64-return-false
The documentation for sympy.perfect_power says: Return (b, e) such that n == b**e if n is a unique perfect power with e > 1, else False (e.g. 1 is not a perfect power). A ValueError is raised if n is not Rational. Yet evaluating sympy.perfect_power(-64) results in False. However, -64 == (-4)**3, so sympy.perfect_power(-64) should return (-4, 3) (also because there is no other integer base with integer exponent > 1). Is this a bug? Or am I missing something here?
At the documentation, click on [source] and you'll find this: if n < 0: pp = perfect_power(-n) if pp: b, e = pp if e % 2: return -b, e return False Given -64, that first computes that 64 is 2^6 and then gives up because 6 isn't odd. I do think it's a bug and it should try to remove the factor 2 from the exponent. Maybe like this: if n < 0: pp = perfect_power(-n) if pp: b, e = pp e2 = e & -e if e2 != e: return -(b**e2), e//e2 return False
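Until something along those lines lands upstream, a small wrapper can serve as a workaround. This is only a sketch built on the same odd/even-exponent idea; signed_perfect_power is an illustrative name, not a sympy API:

import sympy

def signed_perfect_power(n):
    # For n >= 0, defer to sympy. For negative n, strip the largest power of
    # two from the exponent so the remaining odd exponent can carry the sign:
    # -64 = (-(2**2))**3 = (-4)**3.
    if n >= 0:
        return sympy.perfect_power(n)
    pp = sympy.perfect_power(-n)
    if pp:
        b, e = pp
        e2 = e & -e            # largest power of 2 dividing e
        if e2 != e:            # an odd factor > 1 remains
            return -(b**e2), e // e2
    return False

print(signed_perfect_power(-64))  # (-4, 3)
print(signed_perfect_power(-4))   # False: no integer b with b**e == -4 and e > 1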
4
6
79,171,631
2024-11-8
https://stackoverflow.com/questions/79171631/how-do-i-determine-whether-a-zoneinfo-is-an-alias
I am having trouble identifying whether a ZoneInfo is built with an alias: > a = ZoneInfo('Atlantic/Faeroe') > b = ZoneInfo('Atlantic/Faroe') > a == b False It seems like these ZoneInfos are identical in practice. How do I identify that they are the same, as opposed to e.g. EST and UTC which are different?
The tzdata package publishes the data for the IANA time zone database. If installed, one could compare the data files for those zones located in: <PYTHON_DIR>\Lib\site-packages\tzdata\zoneinfo\Atlantic The data files for Faeroe and Faroe compare as binary same. Programmatically, the binary data can be directly read and compared: from importlib import resources with resources.open_binary('tzdata.zoneinfo.Atlantic', 'Faroe') as f: data1 = f.read() with resources.open_binary('tzdata.zoneinfo.Atlantic', 'Faeroe') as f: data2 = f.read() print(data1 == data2) # True Ref: https://tzdata.readthedocs.io/en/latest/#examples There is also the file <PYTHON_DIR>\Lib\site-packages\tzdata\zoneinfo\tzdata.zi which is a text representation of the IANA database, and contains the line: L Atlantic/Faroe Atlantic/Faeroe An L line means Link LINK_FROM LINK_TO to define an alias. This indicates that Atlantic/Faeroe is an alias for the Atlantic/Faroe time zone defined earlier in the file.
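If you would rather resolve aliases programmatically than compare binary files, here is a sketch that parses those L lines into a lookup table (it assumes the tzdata package is installed; load_tz_aliases is just an illustrative name):

from importlib import resources

def load_tz_aliases():
    """Map alias -> canonical zone name from the 'L TARGET ALIAS' lines."""
    aliases = {}
    with resources.open_text('tzdata.zoneinfo', 'tzdata.zi') as f:
        for line in f:
            parts = line.split()
            if parts and parts[0] == 'L':
                target, alias = parts[1], parts[2]
                aliases[alias] = target
    return aliases

aliases = load_tz_aliases()
print(aliases.get('Atlantic/Faeroe'))  # Atlantic/Faroe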
2
3
79,169,874
2024-11-8
https://stackoverflow.com/questions/79169874/performance-issues-of-using-lambda-for-assigning-variables-in-pandas-in-a-method
When working with pandas dataframes, I like to use method chains, because it makes the workflow similar to the tidyverse approach in R, where you use a string of pipes. Consider the example in this answer: N = 10 df = ( pd.DataFrame({"x": np.random.random(N)}) .assign(y=lambda d: d['x']*0.5) .assign(z=lambda d: d.y * 2) .assign(w=lambda d: d.z*0.5) ) I think I've heard that manipulating dataframes using lambda is inefficient, because it is not a vectorized operation, but some looping goes on under the hood. Is this an issue with examples like the one above? Are there alternatives to using lambda in a method chain that retain the tidyverse-like approach?
Your operations are vectorized: the lambda is not operating at the level of individual values but rather on whole columns. The running time of the function will be negligible for large enough datasets. However, each assign call is generating a new DataFrame. You could use a single assign call; this would avoid generating an intermediate for each step: df = (pd.DataFrame({'x': np.random.random(N)}) .assign(y=lambda d: d['x'] * 0.5, z=lambda d: d.y * 2, w=lambda d: d.z * 0.5, ) ) There is a significant gain in performance: NB. I'm only timing .assign(x,y,z) vs .assign(x).assign(y).assign(z), the DataFrame is pre-generated.
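If you want to measure the difference yourself, here is a rough timing sketch; the exact numbers depend on your machine and on N:

import timeit
import numpy as np
import pandas as pd

N = 1_000_000
df = pd.DataFrame({'x': np.random.random(N)})

def chained():
    return (df.assign(y=lambda d: d['x'] * 0.5)
              .assign(z=lambda d: d.y * 2)
              .assign(w=lambda d: d.z * 0.5))

def single():
    return df.assign(y=lambda d: d['x'] * 0.5,
                     z=lambda d: d.y * 2,
                     w=lambda d: d.z * 0.5)

print('chained assigns:', timeit.timeit(chained, number=50))
print('single assign  :', timeit.timeit(single, number=50))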
2
5
79,169,451
2024-11-8
https://stackoverflow.com/questions/79169451/calculating-sums-of-nested-dictionaries-into-the-dictionary
I'm writing a program that helps collate data from a few sources to perform analysis on. I currently have a dictionary that looks like this: output = { "main": { "overall": { "overall": { "total": {"q1": 0,"q2": 0,"q3": 0,"q4": 0}, "Profit": {"q1": 0,"q2": 0,"q3": 0,"q4": 0}, "Loss": {"q1": 0,"q2": 0,"q3": 0,"q4": 0}, }, "Sub A": { "total": {"q1": 0,"q2": 0,"q3": 0,"q4": 0}, "Profit": {"q1": 0,"q2": 0,"q3": 0,"q4": 0}, "Loss": {"q1": 0,"q2": 0,"q3": 0,"q4": 0}, }, "Sub B": { "total": {"q1": 0,"q2": 0,"q3": 0,"q4": 0}, "Profit": {"q1": 0,"q2": 0,"q3": 0,"q4": 0}, "Loss": {"q1": 0,"q2": 0,"q3": 0,"q4": 0}, }, }, "A": { "overall": { "total": {"q1": 0,"q2": 0,"q3": 0,"q4": 0}, "Profit": {"q1": 0,"q2": 0,"q3": 0,"q4": 0}, "Loss": {"q1": 0,"q2": 0,"q3": 0,"q4": 0}, }, "Sub A": { "total": {"q1": 0,"q2": 0,"q3": 0,"q4": 0}, "Profit": {"q1": 10,"q2": 8,"q3": 19,"q4": 7}, "Loss": {"q1": 4,"q2": 2,"q3": 6,"q4": 10}, }, "Sub B": { "total": {"q1": 0,"q2": 0,"q3": 0,"q4": 0}, "Profit": {"q1": 50,"q2": 70,"q3": 54,"q4": 77}, "Loss": {"q1": 2,"q2": 8,"q3": 5,"q4": 40}, }, }, "B": { "overall": { "total": {"q1": 0,"q2": 0,"q3": 0,"q4": 0}, "Profit": {"q1": 0,"q2": 0,"q3": 0,"q4": 0}, "Loss": {"q1": 0,"q2": 0,"q3": 0,"q4": 0}, }, "Sub A": { "total": {"q1": 0,"q2": 0,"q3": 0,"q4": 0}, "Profit": {"q1": 75,"q2": 23,"q3": 25,"q4": 12}, "Loss": {"q1": 64,"q2": 22,"q3": 12,"q4": 5}, }, "Sub B": { "total": {"q1": 0,"q2": 0,"q3": 0,"q4": 0}, "Profit": {"q1": 65,"q2": 53,"q3": 3,"q4": 5}, "Loss": {"q1": 10,"q2": 12,"q3": 1,"q4": 2}, }, } }, }, }, So far, I have the profit and loss data in the non overall dictionaries. What I would like to do is have a function that populates the overall totals, profits and losses. For the sake of this, let's just say that profits and losses are aggregated so a profit of 1 and a loss of 1 makes a total of 2. From looking at some similar questions and some thinking, i have the following: def calculateOveralls(dictionary, query): for a in dictionary[query]: #A.B,C for b in dictionary[query][a]: #Sub A, Sub B, Sub C if b == "overall": pass else: for c in dictionary[query][a][b]: # Profit/Loss if c == "total": pass else: for d in dictionary[query][a][b][c]: # quarters dictionary[query][a][b]["total"][d] = dictionary[query][a][b]["total"][d] + dictionary[query][a][b][c][d] Any help would be greatly appreciated. Many thanks!
You appear to have two levels of nested dictionaries within output["main"] and want to generate totals for each quarter and each permutation of level: for main_key, main_value in output["main"].items(): if main_key == "overall": continue for sub_key, sub_value in main_value.items(): if sub_key == "overall": continue for q in ("q1", "q2", "q3", "q4"): profit = sub_value["Profit"][q] loss = sub_value["Loss"][q] sub_value["total"][q] = profit + loss output["main"]["overall"]["overall"]["Profit"][q] += profit output["main"]["overall"]["overall"]["Loss"][q] += loss output["main"]["overall"]["overall"]["total"][q] += profit + loss output["main"]["overall"][sub_key]["Profit"][q] += profit output["main"]["overall"][sub_key]["Loss"][q] += loss output["main"]["overall"][sub_key]["total"][q] += profit + loss output["main"][main_key]["overall"]["Profit"][q] += profit output["main"][main_key]["overall"]["Loss"][q] += loss output["main"][main_key]["overall"]["total"][q] += profit + loss from pprint import pprint pprint(output, width=100) Which outputs: {'main': {'A': {'Sub A': {'Loss': {'q1': 4, 'q2': 2, 'q3': 6, 'q4': 10}, 'Profit': {'q1': 10, 'q2': 8, 'q3': 19, 'q4': 7}, 'total': {'q1': 14, 'q2': 10, 'q3': 25, 'q4': 17}}, 'Sub B': {'Loss': {'q1': 2, 'q2': 8, 'q3': 5, 'q4': 40}, 'Profit': {'q1': 50, 'q2': 70, 'q3': 54, 'q4': 77}, 'total': {'q1': 52, 'q2': 78, 'q3': 59, 'q4': 117}}, 'overall': {'Loss': {'q1': 6, 'q2': 10, 'q3': 11, 'q4': 50}, 'Profit': {'q1': 60, 'q2': 78, 'q3': 73, 'q4': 84}, 'total': {'q1': 66, 'q2': 88, 'q3': 84, 'q4': 134}}}, 'B': {'Sub A': {'Loss': {'q1': 64, 'q2': 22, 'q3': 12, 'q4': 5}, 'Profit': {'q1': 75, 'q2': 23, 'q3': 25, 'q4': 12}, 'total': {'q1': 139, 'q2': 45, 'q3': 37, 'q4': 17}}, 'Sub B': {'Loss': {'q1': 10, 'q2': 12, 'q3': 1, 'q4': 2}, 'Profit': {'q1': 65, 'q2': 53, 'q3': 3, 'q4': 5}, 'total': {'q1': 75, 'q2': 65, 'q3': 4, 'q4': 7}}, 'overall': {'Loss': {'q1': 74, 'q2': 34, 'q3': 13, 'q4': 7}, 'Profit': {'q1': 140, 'q2': 76, 'q3': 28, 'q4': 17}, 'total': {'q1': 214, 'q2': 110, 'q3': 41, 'q4': 24}}}, 'overall': {'Sub A': {'Loss': {'q1': 68, 'q2': 24, 'q3': 18, 'q4': 15}, 'Profit': {'q1': 85, 'q2': 31, 'q3': 44, 'q4': 19}, 'total': {'q1': 153, 'q2': 55, 'q3': 62, 'q4': 34}}, 'Sub B': {'Loss': {'q1': 12, 'q2': 20, 'q3': 6, 'q4': 42}, 'Profit': {'q1': 115, 'q2': 123, 'q3': 57, 'q4': 82}, 'total': {'q1': 127, 'q2': 143, 'q3': 63, 'q4': 124}}, 'overall': {'Loss': {'q1': 80, 'q2': 44, 'q3': 24, 'q4': 57}, 'Profit': {'q1': 200, 'q2': 154, 'q3': 101, 'q4': 101}, 'total': {'q1': 280, 'q2': 198, 'q3': 125, 'q4': 158}}}}} fiddle
1
1
79,169,880
2024-11-8
https://stackoverflow.com/questions/79169880/how-to-add-a-row-for-sorted-multi-index-dataframe
I have a multiindex dataframe, which comes from groupby. Here is a demo: In [54]: df = pd.DataFrame({'color': ['blue', 'grey', 'blue', 'grey', 'black'], 'name': ['pen', 'pen', 'pencil', 'pencil', 'box'],'price':[2.5, 2.3, 1.5, 1.3, 5.2],'bprice':[2.2, 2, 1.3, 1.2, 5.0]}) In [55]: df Out[55]: color name price bprice 0 blue pen 2.5 2.2 1 grey pen 2.3 2.0 2 blue pencil 1.5 1.3 3 grey pencil 1.3 1.2 4 black box 5.2 5.0 In [56]: a = df.groupby(['color', 'name'])[['price', 'bprice']].sum() In [57]: a Out[57]: price bprice color name black box 5.2 5.0 blue pen 2.5 2.2 pencil 1.5 1.3 grey pen 2.3 2.0 pencil 1.3 1.2 I want to add a row in every color index, the ideal output is: price bprice color name black * 5.2 5.0 box 5.2 5.0 blue * 4.0 3.5 pen 2.5 2.2 pencil 1.5 1.3 grey * 3.6 3.2 pen 2.3 2.0 pencil 1.3 1.2 There are two requirements: The new * row should be in the first row of each group expect for the * row, the other row should be sorted by price I tried a lot of methods, but not find a elegant method. Insert a row into a multiindex dataframe with specified position seems be hard. could you help on this?
Compute a groupby.sum on a, then append a level with * and concat, finally sort_index based on color: # compute the sum per color/name # sort by descending price a = (df.groupby(['color', 'name'])[['price', 'bprice']].sum() .sort_values(by='price', ascending=False) ) # compute the sum per color # concatenate, sort_index in a stable way out = (pd.concat([a.groupby('color').sum() .assign(name='*') .set_index('name', append=True), a]) .sort_index(level='color', kind='stable', sort_remaining=False) ) Output: price bprice color name black * 5.2 5.0 box 5.2 5.0 blue * 4.0 3.5 pen 2.5 2.2 pencil 1.5 1.3 grey * 3.6 3.2 pen 2.3 2.0 pencil 1.3 1.2
1
3
79,168,495
2024-11-8
https://stackoverflow.com/questions/79168495/interpolating-battery-capacity-data-in-logarithmic-scale-with-python
I'm working on interpolating battery capacity data based on the relationships between hour_rates, capacities and currents. Here’s a sample of my data: import numpy as np import pandas as pd from scipy.interpolate import interp1d import matplotlib.pyplot as plt # Data from Rolls S-480 flooded battery capacity_data = [ [1, 135, 135], [2, 191, 95.63], [3, 221, 73.75], [4, 244, 60.94], [5, 263, 52.5], [6, 278, 46.25], [8, 300, 37.5], [10, 319, 31.88], [12, 334, 27.81], [15, 352, 23.45], [20, 375, 18.75], [24, 386, 16.09], [50, 438, 8.76], [72, 459, 6.38], [100, 486, 4.86] ] capacity = pd.DataFrame(capacity_data, columns=['hour_rates', 'capacities_o', 'currents']) capacity['capacities'] = np.around(capacity['currents'] * capacity['hour_rates'], 3) The columns relate as follows: hour_rates (h) = capacities (Ah) / currents (A) capacities (Ah) = hour_rates (h) * currents (A) currents (A) = capacities (Ah) / hour_rates (h) Objective: I want to interpolate capacities and hour_rates for a range of currents values using logarithmic scaling for better accuracy. Code Custom interpolation class and function to achieve this. Here’s the code: from typing import Union class interpolate1d(interp1d): """Extend scipy interp1d to interpolate/extrapolate per axis in log space""" def __init__(self, x, y, *args, xspace='linear', yspace='linear', **kwargs): self.xspace = xspace self.yspace = yspace if self.xspace == 'log': x = np.log10(x) if self.yspace == 'log': y = np.log10(y) super().__init__(x, y, *args, **kwargs) def __call__(self, x, *args, **kwargs): if self.xspace == 'log': x = np.log10(x) if self.yspace == 'log': return 10**super().__call__(x, *args, **kwargs) else: return super().__call__(x, *args, **kwargs) def interpolate_cap_by_current(df: list, current_values: list, kind: Union[str, int] = 'linear', hr_limit: int = 600 ): """ Interpolate Battery Capacity Values From Current list values """ result = 0 if isinstance(np_data, np.ndarray): # Create interpolation functions for hour rates and capacities # Setting kind='cubic' for better fitting to nonlinear data hour_rate_interp_func = interpolate1d( df['currents'], df['hour_rates'], xspace='log', yspace='log', fill_value="extrapolate", kind=kind ) capacity_interp_func = interpolate1d( df['currents'], df['capacities'], xspace='log', yspace='log', fill_value="extrapolate", kind=kind ) # , kind='cubic' # Calculate interpolated values for new currents hour_rate_interpolated = hour_rate_interp_func(current_values) capacity_interpolated = capacity_interp_func(current_values) # Create a DataFrame for the results calc_cap = np.around(current_values * hour_rate_interpolated, 3) calc_hr = np.around(capacity_interpolated / current_values, 3) diff_cap = np.around(capacity_interpolated - calc_cap, 3) diff_hr = np.around(hour_rate_interpolated - calc_hr, 3) real_hr = np.around(hour_rate_interpolated - diff_hr, 3) real_cap = np.around(current_values * real_hr, 3) real_current = np.around(real_cap / real_hr, 3) result = pd.DataFrame({ 'currents': current_values, 'hour_rates': hour_rate_interpolated, 'capacities': capacity_interpolated, 'calc_cap': calc_cap, 'real_cap': real_cap, 'diff_cap': diff_cap, 'calc_hr': calc_hr, 'real_hr': real_hr, 'diff_hr': diff_hr, 'real_current': real_current, 'diff_current': np.around(current_values - real_current, 3), }) result = result[result['hour_rates'] < hr_limit] return result def plot_grid(major_ticks: list, minor_ticks: list, ): """Set X Grid ticks""" ax=plt.gca() ax.grid(True) ax.set_xticks(major_ticks) ax.set_xticks(minor_ticks, 
minor=True) ax.grid(which='minor', alpha=0.2) ax.grid(which='major', alpha=0.5) Visualisation: currents_list = np.array([ 0.1, 0.2, 0.4, 0.5, 0.6, 0.8, 1, 1.5, 1.7, 2, 2.2, 2.5, 3, 4, 5, 6, 7, 8, 9, 10, 11, 15, 17, 20, 22, 25, 27, 30, 32, 35, 37, 40, 60, 80, 120, 150, 180, 220, 250 ]) capacities = interpolate_cap_by_current( df=capacity, current_values=currents_list, kind='quadratic' ) rel_current = np.around(capacity['capacities']/capacity['hour_rates'], 3) # linear, nearest, nearest-up, zero, slinear, quadratic, cubic, previous, or next. zero, slinear, quadratic and cubic plt.figure(figsize=(18, 15)) plt.subplot(3, 1, 1) plt.plot(capacities['real_hr'], capacities['capacities'], label='Interpolated Capacitiy') plt.plot(capacities['real_hr'], capacities['calc_cap'], label='Calculated Capacitiy') plt.plot(capacities['real_hr'], capacities['real_cap'], label='Real Capacitiy') plt.plot(capacity['hour_rates'], capacity['capacities'], label='Capacitiy') plt.ylabel('Capacity (A/h)') plt.xlabel('Hour Rate (h)') plt.title('Battery Hour Rate / Capacity relationship') plt.legend() max_tick = capacities['hour_rates'].max() + 10 plot_grid( major_ticks=np.arange(0, max_tick, 20), minor_ticks=np.arange(0, max_tick, 5) ) plt.subplot(3, 1, 2) plt.plot(capacities['real_hr'], capacities['currents'], label='Interpolated Current (A)') plt.plot(capacities['real_hr'], capacities['real_current'], label='Real Current (A)') plt.plot(capacity['hour_rates'], rel_current, label='Calculated Original Current Relation (A)') plt.plot(capacity['hour_rates'], capacity['currents'], label='Current (A)') plt.ylabel('Current (A)') plt.xlabel('Hour Rate (h)') plt.title('Battery Hour Rate / Current relationship') plt.legend() plot_grid( major_ticks=np.arange(0, max_tick, 20), minor_ticks=np.arange(0, max_tick, 5) ) plt.subplot(3, 1, 3) plt.plot(capacities['currents'], capacities['capacities'], label='Interpolated capacity / current') plt.plot(capacities['currents'], capacities['calc_cap'], label='Calculated capacity / current') plt.plot(capacity['currents'], capacity['capacities'], label='capacity / current') plt.ylabel('Capacity (A/h)') plt.xlabel('Current (A)') plt.title('Battery Current / Capacity relationship') plt.xscale('linear') plt.yscale('linear') plt.legend() max_tick = capacities['currents'].max() + 10 plot_grid( major_ticks=np.arange(0, max_tick, 20), minor_ticks=np.arange(0, max_tick, 5) ) Problem Even though I've configured the interpolation in logarithmic space, the interpolated values still don’t match the calculated values when verified against the relationships provided. I’ve illustrated this discrepancy in the plots below, where I calculate the difference by applying the original relationships to the interpolated results. 
plt.figure(figsize=(18, 15)) plt.subplot(3, 1, 1) plt.plot(capacities['hour_rates'], capacities['diff_cap'], label='Diff Capacity') plt.plot(capacities['hour_rates'], capacities['diff_hr'], label='Diff Hour Rate') plt.ylabel('Diff Interpolated / Calculated') plt.xlabel('Hour Rate (h)') plt.title('Interpolation Data Relationship By Hour Rate') plt.legend() max_tick = capacities['hour_rates'].max() + 10 plot_grid( major_ticks=np.arange(0, max_tick, 20), minor_ticks=np.arange(0, max_tick, 5) ) plt.subplot(3, 1, 2) plt.plot(capacities['capacities'], capacities['diff_cap'], label='Diff Capacity') plt.plot(capacities['capacities'], capacities['diff_hr'], label='Diff Hour Rate') plt.ylabel('Diff Interpolated / Calculated') plt.xlabel('Capacity (A/h)') plt.title('Interpolation Data Relationship By Capacity') plt.legend() max_tick = capacities['capacities'].max() + 10 plot_grid( major_ticks=np.arange(0, max_tick, 20), minor_ticks=np.arange(0, max_tick, 5) ) plt.subplot(3, 1, 3) plt.plot(capacities['currents'], capacities['diff_cap'], label='Diff Capacity') plt.plot(capacities['currents'], capacities['diff_hr'], label='Diff Hour Rate') plt.ylabel('Diff Interpolated / Calculated') plt.xlabel('Current (A)') plt.title('Interpolation Data Relationship By Current') plt.legend() max_tick = capacities['currents'].max() + 10 plot_grid( major_ticks=np.arange(0, max_tick, 20), minor_ticks=np.arange(0, max_tick, 5) ) Is there a way to improve the accuracy of the interpolation on a logarithmic scale for this type of data relationship? I understand that current values outside the range of (4.86 A, 135 A) may lead to inaccurate results due to extrapolation. Edit I’ve updated the code above to improve interpolation accuracy: The original capacity values appeared to be rounded in the source data. These values are now corrected prior to interpolation to enhance precision. Added a second graph to evaluate the accuracy of the relationship for the calculated current values. 
plt.figure(figsize=(18, 15)) plt.subplot(3, 1, 1) plt.plot(capacities['real_hr'], capacities['diff_current'], label='Diff Current') plt.plot(capacity['hour_rates'], capacity['currents'] - rel_current, label='Diff Original Current Relation') plt.ylabel('Diff Interpolated / Calculated') plt.xlabel('Hour Rate (h)') plt.title('Interpolation Data Relationship By Hour Rate') plt.legend() max_tick = capacities['hour_rates'].max() + 10 plot_grid( major_ticks=np.arange(0, max_tick, 20), minor_ticks=np.arange(0, max_tick, 5) ) plt.subplot(3, 1, 2) plt.plot(capacities['real_cap'], capacities['diff_current'], label='Diff Current') plt.plot(capacity['capacities'], capacity['currents'] - rel_current, label='Diff Original Current Relation') plt.ylabel('Diff Interpolated / Calculated') plt.xlabel('Capacity (A/h)') plt.title('Interpolation Data Relationship By Capacity') plt.legend() max_tick = capacities['capacities'].max() + 10 plot_grid( major_ticks=np.arange(0, max_tick, 20), minor_ticks=np.arange(0, max_tick, 5) ) plt.subplot(3, 1, 3) plt.plot(capacities['currents'], capacities['diff_current'], label='Diff Current') plt.plot(capacity['currents'], capacity['currents'] - rel_current, label='Diff Original Current Relation') plt.ylabel('Diff Interpolated / Calculated') plt.xlabel('Current (A)') plt.title('Interpolation Data Relationship By Current') plt.legend() max_tick = capacities['currents'].max() + 10 plot_grid( major_ticks=np.arange(0, max_tick, 20), minor_ticks=np.arange(0, max_tick, 5) ) Edit 2 I’ve made additional updates to the code to further improve interpolation accuracy: - Rounded all values to 3 decimal places to minimize insignificant errors. - Observing the updated graphs, `hour_rate` interpolation values are more accurate than `capacity` interpolation values. I’ve adjusted the code to interpolate only `hour_rate` and then calculate `capacity` using the relationship `capacity = hour_rate * current`. Below are the updated graphs: Data Visualization Difference Between Interpolated and Calculated Capacity and Hour Rate Difference Between Interpolated and Calculated Current
Looking at the relations described for your current data: hour_rates (h) = capacities (Ah) / currents (A) capacities (Ah) = hour_rates (h) * currents (A) currents (A) = capacities (Ah) / hour_rates (h) These are not met exactly in the data you presented. I've created data that satisfies these relations exactly and matches the presented results: capacity_data_corr = capacity[['hour_rates', 'capacities']] capacity_data_corr['currents'] = capacity_data_corr['capacities']/capacity_data_corr['hour_rates'] Interpolation is almost ideal. This means that the interpolation obtained can be good, but the original data does not meet the assumed relations. If these relations are only approximate, an error like this over such a long horizon should not be as bad as it looks.
2
1
79,166,062
2024-11-7
https://stackoverflow.com/questions/79166062/is-pythons-cffi-an-adequate-tool-to-parse-c-definitions-from-a-header-file
From Python, I want to fetch the details of structures/arrays/enums defined in C headers: the list of defined types, the list and types of struct members, the names and values defined in enums, the size of arrays, etc. I don't plan to link a C lib in Python, but I wanted to use a battle-tested tool to "parse" C definitions so I picked CFFI and tried the following: Start with a dummy test.h file typedef struct { int a; int b[3]; float c; } other_struct_T; typedef struct { bool i; bool j; other_struct_T k; } main_struct_T; preprocess it once to be sure to resolve #includes, #defines, etc. gcc -E -P -xc test.h -o test.preprocessed.h Then load it with CFFI like this from pathlib import Path from cffi import FFI u = FFI() txt = Path("test.preprocessed.h").read_text() u.cdef(txt) k = u.typeof("main_struct_T") print(k) print(k.elements) which prints <ctype 'main_struct_T'> first. But it fails at the second one (and k seems to contain neither .length, nor .item, nor .relements, as one could expect from a ctype instance, as mentioned here) Traceback (most recent call last): File "./parse_header.py", line 14, in <module> print(k.elements) ^^^^^^^^^^^ AttributeError: elements What am I missing? How would you do it differently?
Self reply here! I made progress! The CFFI documentation didn't helped much :D using dir() on objects returned did. Two options were found, the easiest one is this snippet (more complete answer at the end) : if k.kind == 'struct' : for f in k.fields : name, obj = f print(name, obj.type, obj.offset) where k is obtained exactly as explained in the question. This gives: i <ctype '_Bool'> 0 j <ctype '_Bool'> 1 k <ctype 'other_struct_T'> 4 recursion can be used to dig for other_struct_T The other option is derived from another question (Using Python cffi for introspection) and lead to this partial snippet: for k in u._parser._declarations : v = u._parser._declarations[k][0] if isinstance(v, cffi.model.EnumType) : z[v.name] = list() print(v.enumerators, v.enumvalues, v.get_c_name(), v.get_official_name()) for name, value in zip(v.enumerators, v.enumvalues) : z[v.name].append((name, value)) elif isinstance(v, cffi.model.StructType) : print(v.fldnames, v.fldtypes, v.fldquals, v.fldbitsize) z[v.name] = list() for name, ctype, qual, size in zip(v.fldnames, v.fldtypes, v.fldquals, v.fldbitsize) : z[v.name].append((name, ctype, qual, size)) ... classes are different, methods and properties are different... the information inside should be the same, using u._parser._declarations feels ugly though Update Here is an (unperfect but functionnal) code: #!/usr/bin/env python3 import collections import inspect from cc_pathlib import Path import cffi class ExtraFace() : def __init__(self, h_pth) : self.ffi = cffi.FFI() self.ffi.cdef(h_pth.read_text()) self.e_map = dict() # map of types already parsed def parse(self, name, recurse=False) : e = self.ffi.typeof(name) e_set = {e,} # set of types to be parsed while e_set : e = e_set.pop() if e in self.e_map : continue if e.kind == 'struct' : e_set |= self.parse_struct(e) if e.kind == 'enum' : self.parse_enum(e) def parse_struct(self, e) : s_map = collections.OrderedDict() e_set = set() for f in e.fields : name, m = f if m.type.kind == 'array' : s_map[name] = (m.type.item.cname, m.type.length, m.offset) else : s_map[name] = (m.type.cname, 0, m.offset) if m.type.kind != 'primitive' : e_set.add(m.type) self.e_map[e.cname] = s_map return e_set def parse_enum(self, e) : self.e_map[e.cname] = e.relements if __name__ == '__main__' : u = ExtraFace(Path("test.preprocessed.h")) u.parse("main_struct_T") Path("e.json").save(u.e_map, verbose=True)
2
0
79,164,605
2024-11-6
https://stackoverflow.com/questions/79164605/pyqt6-dbus-signal-not-being-received
I'm trying to create a system to keep track of the currently-playing media via mpris. Adapted from this question's PyQt6 answer, I have tried the following code: from PyQt6 import QtCore, QtWidgets, QtDBus import sys class MainWindow(QtWidgets.QMainWindow): def __init__ (self): super().__init__() service = 'org.mpris.MediaPlayer2.vlc' path = '/org/mpris/MediaPlayer2' iface = 'org.mpris.MediaPlayer2' conn = QtDBus.QDBusConnection.systemBus() conn.registerObject('/', self) conn.connect(service, path, iface, 'PropertiesChanged', self.nochangeslot) @QtCore.pyqtSlot(QtDBus.QDBusMessage) def nochangeslot(self, msg): print(f'signature: {msg.signature()!r}, ' f'arguments: {msg.arguments()!r}') app = QtWidgets.QApplication(sys.argv) window = MainWindow() window.show() app.exec() This should connect any VLC Media Player instance (replace with service of your choice or find it programatically) emitting PropertiesChanged to a simple function that prints the message. PropertiesChanged should be emitted when doing things like changing the current song. However, nothing is printed when doing so. I also tried changing iface to 'org.mrpis.MediaPlayer2.Player', but it didn't improve matters. Any idea why this isn't working?
There are two problems: You won't find MPRIS players (nor any other desktop apps) on the system bus¹. They are all connected to the user's individual session bus i.e. .sessionBus(). Generally the only services you will find on the system bus are those which are global to the system (e.g. NetworkManager), whereas multiple users on the same system could have their own copies of the same player or app running, so accordingly each user also has a separate "session bus" (which is now more often really a "user bus"). Use d-spy or d-feet, or qdbusviewer6, or busctl --acquired (with the --user option for the session bus) to see which D-Bus services are active where. $ busctl --user --acquired NAME org.mpris.MediaPlayer2.quodlibet ¹ (Such apps may still connect to the system bus as clients, though – just not as services.) Since properties are a generic D-Bus concept, their signals and methods are implemented as part of the generic org.freedesktop.DBus.Properties interface, not as part of the object's custom interfaces. (The "real" interface that each property belongs to is actually provided as an argument to those signals and methods.) Use D-Spy/D-Feet/QDBusViewer or gdbus introspect or busctl introspect to see which signals exist under which interfaces. $ busctl --user introspect \ org.mpris.MediaPlayer2.quodlibet /org/mpris/MediaPlayer2 NAME TYPE SIGNATURE RESULT/VALUE org.freedesktop.DBus.Properties interface - - .PropertiesChanged signal sa{sv}as - $ gdbus introspect -e \ -d org.mpris.MediaPlayer2.quodlibet \ -o /org/mpris/MediaPlayer2 node /org/mpris/MediaPlayer2 { interface org.freedesktop.DBus.Properties { signals: PropertiesChanged(s interface_name, a{sv} changed_properties, as invalidated_properties); }; }; If in doubt, run dbus-monitor or busctl monitor (or use Bustle, or even Wireshark) to see what is being transferred over each bus. dbus-monitor --session "type=signal,member=PropertiesChanged" Note: With services that use the 'legacy' libdbus, there can sometimes be a mismatch between what the service declares via its .Introspect() and what it actually sends, as it is a very low-level library that offers no facilities for managing objects or generating correct introspection data. (It requires the service to have its own implementation of Introspect() which often returns a hand-written XML string – such as this example – which may or may not be 100% accurate to what other parts of the code actually send or accept.) This is always a bug in the service, but it means that you should always double-check with bus monitor tools.
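Putting both points together, here is a sketch of the question's code with only the bus and the interface changed (the registerObject call is dropped because it is not needed just to listen for a signal); it is meant as a starting point rather than a verified program:

from PyQt6 import QtCore, QtWidgets, QtDBus
import sys

class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        service = 'org.mpris.MediaPlayer2.vlc'
        path = '/org/mpris/MediaPlayer2'
        # Property-change signals live on the generic Properties interface...
        iface = 'org.freedesktop.DBus.Properties'
        # ...and desktop apps sit on the session bus, not the system bus.
        conn = QtDBus.QDBusConnection.sessionBus()
        conn.connect(service, path, iface, 'PropertiesChanged', self.nochangeslot)

    @QtCore.pyqtSlot(QtDBus.QDBusMessage)
    def nochangeslot(self, msg):
        print(f'signature: {msg.signature()!r}, arguments: {msg.arguments()!r}')

app = QtWidgets.QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()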
1
3
79,168,650
2024-11-8
https://stackoverflow.com/questions/79168650/why-doesnt-python-put-the-iterator-class-in-mro-when-using-map-mro-bu
I don't understand why in Python, when I write: from collections.abc import * print(map.__mro__) print(issubclass(map,Iterator)) The output I get is: (<class 'map'>, <class 'object'>) True but the Iterator class is not displayed in the return of map.__mro__. Why does that happen? Is it related to how the MRO works? I expected "class Iterator" to appear in the console when I wrote print(map.__mro__).
This is because collections.abc.Iterator implements a __subclasshook__ method that considers a given class to be its subclass as long as it has the __iter__ and __next__ methods defined: class Iterator(Iterable): __slots__ = () @abstractmethod def __next__(self): 'Return the next item from the iterator. When exhausted, raise StopIteration' raise StopIteration def __iter__(self): return self @classmethod def __subclasshook__(cls, C): if cls is Iterator: return _check_methods(C, '__iter__', '__next__') return NotImplemented
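You can see the same mechanism with an ordinary user-defined class: anything that defines __iter__ and __next__ passes the hook, even though Iterator never shows up in its MRO (CountUpTo below is just a made-up example):

from collections.abc import Iterator

class CountUpTo:
    """Not derived from Iterator, but it implements the iterator protocol."""
    def __init__(self, n):
        self.i, self.n = 0, n
    def __iter__(self):
        return self
    def __next__(self):
        if self.i >= self.n:
            raise StopIteration
        self.i += 1
        return self.i

print(CountUpTo.__mro__)                # (<class '__main__.CountUpTo'>, <class 'object'>)
print(issubclass(CountUpTo, Iterator))  # True, thanks to __subclasshook__
print(list(CountUpTo(3)))               # [1, 2, 3]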
3
4
79,168,379
2024-11-7
https://stackoverflow.com/questions/79168379/pandas-slowing-way-down-after-processing-10-000-rows
I am working on a small function to do a simple cleanup of a csv using pandas. Here is the code: def clean_charges(conn, cur): charges = pd.read_csv('csv/all_charges.csv', parse_dates=['CreatedDate', 'PostingDate', 'PrimaryInsurancePaymentPostingDate', 'SecondaryInsurancePaymentPostingDate', 'TertiaryInsurancePaymentPostingDate']) # Split charges into 10 equal sized dataframes num_splits = 10 charges_split = np.array_split(charges, num_splits) cur_month = datetime.combine(datetime.now().date().replace(day=1), datetime.min.time()) count = 0 total = 0 for cur_charge in charges_split: for index, charge in cur_charge.iterrows(): if total % 1000 == 0: print(total) total += 1 # Delete it from the dataframe if its a charge from the current month if charge['PostingDate'] >= cur_month: count += 1 charges.drop(index, inplace=True) continue # Delete the payments if they were applied in the current month if charge['PrimaryInsurancePaymentPostingDate'] >= cur_month: charge['TotalBalance'] = charge['TotalBalance'] + charge['PrimaryInsuranceInsurancePayment'] charge['PrimaryInsurancePayment'] = 0 if charge['SecondaryInsurancePaymentPostingDate'] >= cur_month: charge['TotalBalance'] = charge['TotalBalance'] + charge['SecondaryInsuranceInsurancePayment'] charge['SecondaryInsurancePayment'] = 0 if charge['TertiaryInsurancePaymentPostingDate'] >= cur_month: charge['TotalBalance'] = charge['TotalBalance'] + charge['TertiaryInsuranceInsurancePayment'] charge['TertiaryInsurancePayment'] = 0 # Delete duplicate payments if charge['AdjustedCharges'] - (charge['PrimaryInsuranceInsurancePayment'] + charge['SecondaryInsuranceInsurancePayment'] + charge['TertiaryInsuranceInsurancePayment'] + charge['PatientPaymentAmount']) != charge['TotalBalance']: charge['SecondaryInsurancePayment'] = 0 charges = pd.concat(charges_split) charges.to_csv('csv/updated_charges.csv', index=False) The total size of all_charges.csv is about 270,000 rows, but I am running into an issue where it will process the first 10,000 rows very quickly, and then slow way down. Approximate timing is 5 seconds for the first 10,000 and then about 2 minutes for every thousand after that. This was an issue when I was working on the full set as one dataframe, and when I split it out into 10 as you can see in my code now. I don't see anything that would be the cause of this, my code may not be 100% optimized but I feel like I'm not doing anything incredibly stupid. My computer is also only running at 15% CPU usage and 40% Memory usage, so I don't think its a hardware issue. I would appreciate any help I can get to figure out why this is running so slowly!
Dropping records from a dataframe is reported to be slow so it would be better to use pandas filtering features. Generating a 70000 records csv and processing only first 10000 def clean_charges(charges): flt_date = datetime(2024, 9, 1) count = 0 total = 0 # for cur_charge in charges_split: for index, charge in charges.iterrows(): if total % 1000 == 0: print(total) total += 1 # Delete it from the dataframe if its a charge from the current month if charge['PostingDate'] >= flt_date: count += 1 charges.drop(index, inplace=True) continue if total == 10000: break charges = pd.read_csv('faker_data_70000.csv', parse_dates=['PostingDate']) print(f'df length: {len(charges.index)}') clean_charges(charges) print(f'df length: {len(charges.index)}') Running it time filter.py It takes 40s to process only 10k rows df length: 70000 0 1000 2000 ... 9000 df length: 66694 real 0m40.134s user 0m40.555s sys 0m0.096s Using pandas filtering charges = pd.read_csv('faker_data_70000.csv', parse_dates=['PostingDate']) print(f'df length: {len(charges.index)}') flt_date = datetime(2024, 9, 1) charges_flt = charges[charges['PostingDate'] <= flt_date] print(f'df length: {len(charges_flt.index)}') Result df length: 70000 df length: 46908 real 0m0.542s user 0m1.023s sys 0m0.040s
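Following the same idea, the rest of the cleanup in the question can also be expressed with boolean masks instead of iterrows. This is only a sketch and assumes the column names used in the question:

import pandas as pd
from datetime import datetime

cur_month = datetime.combine(datetime.now().date().replace(day=1), datetime.min.time())
charges = pd.read_csv('csv/all_charges.csv',
                      parse_dates=['CreatedDate', 'PostingDate',
                                   'PrimaryInsurancePaymentPostingDate',
                                   'SecondaryInsurancePaymentPostingDate',
                                   'TertiaryInsurancePaymentPostingDate'])

# Drop charges posted in the current month
charges = charges[~(charges['PostingDate'] >= cur_month)].copy()

# Roll back payments that were posted in the current month
for prefix in ('Primary', 'Secondary', 'Tertiary'):
    posted = charges[f'{prefix}InsurancePaymentPostingDate'] >= cur_month
    charges.loc[posted, 'TotalBalance'] += charges.loc[posted, f'{prefix}InsuranceInsurancePayment']
    charges.loc[posted, f'{prefix}InsurancePayment'] = 0

# Zero the secondary payment where the balance does not reconcile
paid = (charges['PrimaryInsuranceInsurancePayment']
        + charges['SecondaryInsuranceInsurancePayment']
        + charges['TertiaryInsuranceInsurancePayment']
        + charges['PatientPaymentAmount'])
mismatch = (charges['AdjustedCharges'] - paid) != charges['TotalBalance']
charges.loc[mismatch, 'SecondaryInsurancePayment'] = 0

charges.to_csv('csv/updated_charges.csv', index=False)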
1
2
79,167,901
2024-11-7
https://stackoverflow.com/questions/79167901/why-do-i-need-another-pair-of-curly-braces-when-using-a-variable-in-a-format-spe
I'm learning how Python f-strings handle formatting and came across this syntax: a = 5.123 b = 2.456 width = 10 result = f"The result is {(a + b):<{width}.2f}end" print(result) This works as expected, however I don't understand why {width} needs its own curly braces within the format specification. Why can't I just use width directly just as "a" and "b"? Isn't width already inside the outer curly braces?
The braces are needed because the part starting with : is the format specification mini-language. That is parsed as its own f-string, where all is literal, except if put in braces. That format specification language would bump into ambiguities if those braces were not required: that language gives meaning to certain letters, and it would not be clear whether such letter(s) would need to be taken literally or as an expression to be evaluated dynamically. For instance, in Python 3.11+ we have the z option that can occur at the position where you specified the width: z = 5 f"The result is {(a + b):<z.2f}end" So what would this mean if braces were not needed? Would it be the z option or what the z variable represents? There are many more examples that could be constructed to bring the same conclusion home: the braces are needed for anything that needs to be evaluated inside the format specification part.
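A short illustration of the nested-brace form in practice; the values here are made up for demonstration:

value = 7.891
width = 10
precision = 2

# Width and precision can both be supplied dynamically via nested braces
print(f"[{value:<{width}.{precision}f}]")  # prints [7.89      ]

# Equivalent explicit form: the format spec is itself built like an f-string
spec = f"<{width}.{precision}f"
print(f"[{format(value, spec)}]")          # prints [7.89      ]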
14
22
79,167,346
2024-11-7
https://stackoverflow.com/questions/79167346/strange-rendering-behaviour-with-selection-interval
I'm generating a plot with the following code (in an ipython notebook): import altair as alt import pandas as pd events = pd.DataFrame( [ {"event": "Task A", "equipment": "SK-101", "start": 10.2, "finish": 11.3}, {"event": "Task B", "equipment": "SK-102", "start": 6.5, "finish": 10.2}, {"event": "Task C", "equipment": "SK-103", "start": 3.3, "finish": 11.3}, {"event": "Task D", "equipment": "SK-104", "start": 4.7, "finish": 5.5}, {"event": "Task E", "equipment": "SK-105", "start": 13.0, "finish": 13.2}, {"event": "Task F", "equipment": "SK-106", "start": 1.1, "finish": 7.9}, {"event": "Task G", "equipment": "SK-107", "start": 7.4, "finish": 10.0}, {"event": "Task H", "equipment": "SK-108", "start": 6.6, "finish": 7.6}, {"event": "Task I", "equipment": "SK-109", "start": 8.5, "finish": 16.7}, {"event": "Task J", "equipment": "SK-110", "start": 9.0, "finish": 12.2}, {"event": "Task K", "equipment": "SK-111", "start": 10.2, "finish": 17.3}, {"event": "Task L", "equipment": "SK-112", "start": 6.1, "finish": 9.5}, {"event": "Task M", "equipment": "SK-113", "start": 5.4, "finish": 5.8}, {"event": "Task N", "equipment": "SK-114", "start": 2.2, "finish": 8.3}, {"event": "Task O", "equipment": "SK-115", "start": 9.3, "finish": 10.6}, {"event": "Task P", "equipment": "SK-116", "start": 3.9, "finish": 12.5}, {"event": "Task Q", "equipment": "SK-117", "start": 11.1, "finish": 16.6}, {"event": "Task R", "equipment": "SK-118", "start": 14.4, "finish": 18.4}, {"event": "Task S", "equipment": "SK-119", "start": 19.2, "finish": 19.9}, {"event": "Task T", "equipment": "SK-120", "start": 13.8, "finish": 16.7}, {"event": "Task U", "equipment": "SK-102", "start": 12.0, "finish": 13.0}, {"event": "Task V", "equipment": "SK-106", "start": 10.2, "finish": 17.3}, {"event": "Task W", "equipment": "SK-108", "start": 12.8, "finish": 14.9}, {"event": "Task X", "equipment": "SK-110", "start": 12.6, "finish": 18.9}, {"event": "Task Y", "equipment": "SK-112", "start": 13.3, "finish": 18.3}, {"event": "Task Z", "equipment": "SK-114", "start": 8.6, "finish": 19.2}, ] ) brush = alt.selection_interval(encodings=["y"]) minimap = ( alt.Chart(events) .mark_bar() .add_params(brush) .encode( x=alt.X("start:Q", title="", axis=alt.Axis(labels=False, tickSize=0)), x2="finish", y=alt.Y("equipment:N", title="", axis=alt.Axis(labels=False, tickSize=0)), color=alt.condition(brush, "event", alt.value("lightgray")), ) .properties( width=100, height=400, title="minimap" ) ) detail = ( alt.Chart(events) .mark_bar() .encode( x=alt.X("start:Q", title="time (hr)"), x2="finish", y=alt.Y("equipment:N").scale(domain={"param": brush.name, "encoding": "y"}), color="event", ) .properties(width=600, height=400, title="Equipment Schedule") ) detail | minimap The idea is that the minimap plot on the right is used to zoom/pan the y-axis of the main plot. When I zoom or pan with the mouse in the minimap, I see strange artifacts at the top of the main plot; it looks like all bars above the selected area get rendered at the top of the plot. Is this something I'm doing wrong, or is it some sort of rendering bug? Zoomed out: Zoomed in:
I'm unsure why that happens, but another approach would be to use the brush as a filter instead of to set the y-domain. As long as you are able to set a fixed x-domain I think this can work well for what you need: detail = ( alt.Chart(events) .mark_bar() .encode( x=alt.X("start:Q", title="time (hr)").scale(domain=(0, 20)), x2="finish", y=alt.Y("equipment:N"), color="event", ) .properties(width=600, height=alt.Step(20), title="Equipment Schedule") ).transform_filter(brush) I also set the height to a step size here, which gives you more of the scrolling behavior you mentioned in your other issue.
2
2
79,167,429
2024-11-7
https://stackoverflow.com/questions/79167429/encoding-in-utf-16be-and-decoding-in-utf-8-print-the-correct-output-but-cannot-b
If I'm encoding a string using utf-16be and decoding the encoded string using utf-8, I'm not getting any error and the output seems to be correctly getting printed on the screen as well but still I'm not able to convert the decoded string into Python representation using json module. import json str = '{"foo": "bar"}' encoded_str = str.encode("utf-16be") decoded_str = encoded_str.decode('utf-8') print(decoded_str) print(json.JSONDecoder().decode(decoded_str)) I know that encoded string should be decoded using the same encoding, but why this behaviour is what I'm trying to understand? I want to know: Why encoding str with utf-16be and decoding encoded_str with utf-8 doesn't result in an error? As encoding and decoding is not resulting in an error and the decoded_str is a valid JSON (as can be seen through the print statement), why decode(decoded_str) result in an error? Why writing the output to a file and viewing the file through less command show it as binary file? file = open("data.txt", 'w') file.write(decoded_str) When using less command to view the data.txt: "data.txt" may be a binary file. See it anyway? If the decoded_str is an invalid JSON or something else, how can I view it in its original form (print() is printing it as a valid JSON ) I'm using Python 3.10.12 on Ubuntu 22.04.4 LTS
Why encoding str with utf-16be and decoding encoded_str with utf-8 doesn't result in an error? Because in this case, the resulting bytes of str.encode("utf-16be") are also valid UTF-8. This is in fact always the case with ASCII characters, you really need to go above U+007F to trigger possible errors here (eg. use the string str = '{"foo": "!"}' which uses a full-width exclamation mark, U+FF01). As encoding and decoding is not resulting in an error and the decoded_str is a valid JSON (as can be seen through the print statement), why decode(decoded_str) result in an error? Just because you can print a string does not make it valid JSON. In particular because of the encoding to UTF-16, a bunch of null bytes got added. For example, f in UTF-16BE is 0x0066. Those bytes when re-encoded in UTF-8 actually constitute two characters, f and the null character 0x00. Based on my reading of the JSON spec, null characters are not allowed and that is why decode(decoded_str) fails. Why writing the output to a file and viewing the file through less command show it as binary file? Probably those null bytes again. With a lot of null bytes, less is probably flagging it might be a binary file as this is relatively uncommon in UTF-8 (and Linux much prefers UTF-8 over UTF-16) If the decoded_str is an invalid JSON or something else, how can I view it in its original form (print() is printing it as a valid JSON ) Too many possible answers here, it really depends on what the real use case is here. The quickest one is just don't encode/decode with different encodings. The next quickest is reverse the encode/decode process, though this is not lossless with all strings or encoding possibilities, in particular the surrogate range when dealing with a UTF-16 + UTF-8 mix-up.
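A small experiment (my own hedged addition; it only round-trips cleanly because the input is pure ASCII) makes those null bytes visible:

s = '{"foo": "bar"}'
encoded = s.encode("utf-16be")
print(encoded[:8])                       # b'\x00{\x00"\x00f\x00o' - a null byte before every ASCII char
decoded = encoded.decode("utf-8")
print(len(s), len(decoded))              # 14 28 - the nulls became U+0000 characters
print(decoded.replace("\x00", "") == s)  # True - stripping the nulls recovers the JSON text
print(decoded.encode("utf-8").decode("utf-16be") == s)  # True - reversing the mix-up also works here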
1
2
79,166,985
2024-11-7
https://stackoverflow.com/questions/79166985/increment-value-based-on-condition
I want to increment a column value based on a certain condition within a polars dataframe, while considering how many times that condition was met. Example data. import polars as pl df = pl.DataFrame({ "before": [0, 0, 0, 0, 0, 0, 0, 0, 0], "cdl_type": ["REC", "REC", "GEC", None, None, "GEC", None, "REC", "GEC"], }) Current approach. df = df.with_columns( a=( pl.when(pl.col("cdl_type").is_in(["GEC", "REC"])).then( pl.int_ranges( pl.col("cdl_type") .is_in(["REC", "GEC"]) .rle() .struct.field("len") ).flatten() ) .when(pl.col('cdl_type').is_null().and_(pl.col('cdl_type').shift(1).is_not_null())) .then(pl.lit(1)) .otherwise(0) ) ) Expected output. ┌────────┬──────────┬───────┐ │ before ┆ cdl_type ┆ after │ │ --- ┆ --- ┆ --- │ │ i64 ┆ str ┆ i64 │ ╞════════╪══════════╪═══════╡ │ 0 ┆ REC ┆ 0 │ │ 0 ┆ REC ┆ 1 │ │ 0 ┆ GEC ┆ 2 │ │ 0 ┆ null ┆ 3 │ │ 0 ┆ null ┆ 0 │ │ 0 ┆ GEC ┆ 0 │ │ 0 ┆ null ┆ 1 │ │ 0 ┆ REC ┆ 0 │ │ 0 ┆ GEC ┆ 1 │ └────────┴──────────┴───────┘
Based on the current approach and the expected result, I take it that the condition is that cdl_type equals either "REC" or "GEC". The expected output can then be obtained as follows. For each contiguous block of rows satisfying the condition, we obtain a corresponding id using pl.Expr.rle_id on an expression for the condition. We use the id to create an increasing integer sequence for each such block using pl.int_range. Finally, we add 1, shift the sequence, and fill any missing values with 0.

df.with_columns(
    pl.when(
        pl.col("cdl_type").is_not_null()
    ).then(
        pl.int_range(
            pl.len()
        ).over(
            pl.col("cdl_type").is_in(["REC", "GEC"]).rle_id()
        )
    ).add(1).shift().fill_null(0)
)

shape: (9, 3)
┌────────┬──────────┬─────────┐
│ before ┆ cdl_type ┆ literal │
│ ---    ┆ ---      ┆ ---     │
│ i64    ┆ str      ┆ i64     │
╞════════╪══════════╪═════════╡
│ 0      ┆ REC      ┆ 0       │
│ 0      ┆ REC      ┆ 1       │
│ 0      ┆ GEC      ┆ 2       │
│ 0      ┆ null     ┆ 3       │
│ 0      ┆ null     ┆ 0       │
│ 0      ┆ GEC      ┆ 0       │
│ 0      ┆ null     ┆ 1       │
│ 0      ┆ REC      ┆ 0       │
│ 0      ┆ GEC      ┆ 1       │
└────────┴──────────┴─────────┘
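For intuition about what the .over() groups on, you can inspect the run-length id directly (my own quick sketch; the exact handling of nulls by is_in may vary between Polars versions):

df.with_columns(
    run_id=pl.col("cdl_type").is_in(["REC", "GEC"]).rle_id()
)
# For REC, REC, GEC, null, null, GEC, null, REC, GEC the run ids should come out
# roughly as 0, 0, 0, 1, 1, 2, 3, 4, 4 - one id per contiguous block, which is
# exactly what pl.int_range(pl.len()).over(...) restarts the counter on.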
3
2
79,166,664
2024-11-7
https://stackoverflow.com/questions/79166664/2-date-columns-comparison-to-indicate-whether-a-record-occured-after-another
I have a dataframe where I want to return the number (proportion) of patinets that have had a subsequent follow up after diagnosis of disease. Original DF (1 Patient example) | patient_id | app_date | diag_date | cancer_yn | |------------|------------|------------|-----------| | 1 | 2024-01-11 | NaT | NaN | | 1 | 2024-03-14 | 2024-03-14 | 1 | | 1 | 2024-04-09 | NaT | NaN | | 1 | 2024-09-09 | NaT | NaN | Intermediate DF (Indicates appointment record per patient was a follow up of diagnosis date or not) | patient_id | app_date | diag_date | cancer_yn | fup_yn | |------------|------------|------------|-----------|--------| | 1 | 2024-01-11 | NaT | NaN | 0 | | 1 | 2024-03-14 | 2024-03-14 | 1 | 0 | | 1 | 2024-04-09 | NaT | NaN | 1 | | 1 | 2024-09-09 | NaT | NaN | 1 | Summarised DF (Collapsed, by groupby on patient_id and value_counts() or something similiar applied | patient_with_fup | count | |------------------|-------| | 1 | 24 | | 0 | 67 | Original DF (2nd Example) - Hoping to implement what solution above had done at patient level but for departments, say a patient can have diag_yn across multiple departments but I wnat to check has the patient had follow up like before but for each department. So, Patient 1 would be recorded twice in example, 1 for Radiology and again for Respiratory as it had a follow up appt for both departments. dept | patient_id | app_date | diag_date | diag_yn | |------------|------------|------------|------------|-----------| Radiology | 1 | 2024-01-11 | NaT | NaN | Radiology | 1 | 2024-03-14 | 2024-03-14 | 1 | Radiology | 1 | 2024-04-09 | NaT | NaN | Radiology | 1 | 2024-09-09 | NaT | NaN | Respiratory | 1 | 2024-02-11 | NaT | NaN | Respiratory | 1 | 2024-04-14 | 2024-04-14 | 1 | Respiratory | 1 | 2024-06-09 | NaT | NaN | Respiratory | 1 | 2024-09-09 | NaT | NaN | Respiratory | 2 | 2024-01-11 | NaT | NaN | Respiratory | 2 | 2024-03-14 | 2024-03-14 | 1 | Respiratory | 2 | 2024-04-09 | NaT | NaN | Respiratory | 2 | 2024-09-09 | NaT | NaN | Output (2nd Example) dept | patient_with_fup | count | Radiology |------------------|-------| | 1 | 1 | | 0 | 0 | Respiratory |------------------|-------| | 1 | 2 | | 0 | 0 | You can see the 2nd record indicates the appointment where diagnosis was made (diag_date is available and the same as app_date), this patient has had subsequent appointments, I want to flag that this is the case (say follow_ups == 1. I'm finding it hard to understand how I can groupby different patients and apply value_counts() on a flag indicating a patient has had follow ups after a diagnosis appointment. Suggestions around based way to reshape the data and generate the flag would be great.
Assuming you have a typo in your data and that 2022-03-14 is 2024-03-14, you can identify the subsequent appointment with groupby.transform: # ensure datetime df[['app_date', 'diag_date']] = df[['app_date', 'diag_date'] ].apply(pd.to_datetime) df['fup_yn'] = (df.groupby('patient_id')['diag_date'] .transform('first').lt(df['app_date']) .astype(int) ) Output: patient_id app_date diag_date cancer_yn fup_yn 0 1 2024-01-11 NaT NaN 0 1 1 2024-03-14 2024-03-14 1.0 0 2 1 2024-04-09 NaT NaN 1 3 1 2024-09-09 NaT NaN 1 For the final output, you don't really need this intermediate, you could use directly: (df.groupby('patient_id') .apply(lambda g: g['app_date'].gt(next(iter(g['diag_date'].dropna()), pd.NaT)).any(), include_groups=False) .astype(int).value_counts() .reindex([0, 1], fill_value=0).rename_axis('fup_yn') .reset_index() ) Output: fup_yn count 0 0 0 1 1 1 Here is a more complete example for clarity: # part 1 patient_id app_date diag_date cancer_yn fup_yn 0 1 2024-01-11 NaT NaN 0 1 1 2024-03-14 2024-03-14 1.0 0 2 1 2024-04-09 NaT NaN 1 3 1 2024-09-09 NaT NaN 1 4 2 2023-01-11 NaT NaN 0 5 2 2023-03-14 2023-03-14 1.0 0 6 3 2022-04-09 2022-05-14 NaN 0 7 3 2022-09-09 NaT NaN 1 # part 2 fup_yn count 0 0 1 1 1 2
2
2
79,164,734
2024-11-7
https://stackoverflow.com/questions/79164734/scraping-data-from-website-with-multiple-steps-to-get-to-the-data-and-that-preve
I'm trying to scrape data from the following website: https://nfeweb.sefaz.go.gov.br/nfeweb/sites/nfe/consulta-completa STEP 1 STEP 2 When the Access Key is inserted I need to press the "Pesquisar" button: In this case I've used the following access key: 52241061585865236600650040001896941930530252 and it returns the following page: https://nfeweb.sefaz.go.gov.br/nfeweb/sites/nfce/render/xml-consulta-completa?g-recaptcha-response=03AFcWeA7_oqqL4KubId8rW_TapI_NSJDOGBzrx_JB2XAtJitNaBl23zLKbjbj45m9eUZam3xp6R57BI47AI0lp_K3KS-CbtpPiTNAHqcxLV-Gnp2Vf778i3NeLMCKNoHpk7IitkwPHvHJjkg1sWRqdTZrHkhVHiMwFbTC4qFw6436ddwu9rRERxOiY532lIoijoHzDga85l7RvbHkyGUdWD7QVlTUNUU-2ztx21cQ_pDDQrxreDFEL8eCR0ijYAMrOtKEXMwqGSuHFTOSkZ83DCJ4S610YWujUukTXbOSdaAuGpeHljf4CsswFLWTKN8UoKTjlEia_I0cO17zgSnY9Z9rQDEZR1Xeq00CDmpbB73m95EOo0prSrL2RcsRnWkPytDIwJUIfsEAcEQ77vuacbNflj_yFpj2GSWVnGQnKXUrY4DsyRhNU6T6usZaYH5kTRb85qvrfm2FqOlgBfLDcvuwB_Q2JqRxyF6-oJlw64Sx2MZzUQC2gZjPtAIRwGCqOS80OkDkTmHZl9x3fM6tOr4fYM6BouHWrnjfyNz99O9bFcQv_bbdyREr1MVgJ6fujSZM6C7WoRJjwTv29kIuGc2l4nMkkilUU6rzK-apAYtgzSim_5T6N_zkvVQfOAo0mlKwjfVLVxCaWQYsGe5MfBe65ZmLVP_lIHnsJe_z0G9CMclmpaKTiynNEMtu_n8d6utw5ot6BHGp9OALHQq2_62hE_TTYMqVlrzugaPxMrTMKnGWd4W_kVPh-VqgqsKxdDW8xFXYtE8OM_WZNRg4m0ESnl4xW5NLZeZGu7onPt3jkw3vCt57YmdAgcHPpIhg0zPA7lNdBrY1zCeCM3edWoatnFng6irasc5R8fheSL2IS0lSUqCfN_cIuC6rYlPUGlU7pREqYe5ZTxHNkyI6GBvWM_pZSO4glw&chaveAcesso=52241061585865236600650040001896941930530252 STEP 3 In this stage I need to click in the "Visualizar NFC-e Detalhada" to finally get to the page with the data I want to scrape. The new path becomes: https://nfeweb.sefaz.go.gov.br/nfeweb/sites/nfce/render/NFCe?chNFe=52241061585865236600650040001896941930530252 The final step is to click in "Produtos e Serviços" Which will bring me to the following screen where is the data I want to scrape: Errors If i try to access the webpage directly from the link: https://nfeweb.sefaz.go.gov.br/nfeweb/sites/nfce/render/NFCe?chNFe=52241061585865236600650040001896941930530252 it returns session expired: If I try to scrape via python, it blocks me and i can't make any more search, even if it's via web browser. I need help on trying to scrape the data in that specific page, following all the steps and bypassing the recaptcha and bot block from the web site security.
The website may be blocking you because it's trying to stop scraping. However, the following code works for me even after multiple attempts: from selenium import webdriver from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By from selenium.webdriver import ChromeOptions from selenium.webdriver.common.action_chains import ActionChains from functools import cache options = ChromeOptions() options.add_argument("--headless=true") url = "https://nfeweb.sefaz.go.gov.br/nfeweb/sites/nfe/consulta-completa" access_key = "52241061585865236600650040001896941930530252" @cache def wait(driver): return WebDriverWait(driver, 10) def click(driver, selector): button = wait(driver).until(EC.presence_of_element_located(selector)) ActionChains(driver).click(button).perform() def send(driver, selector, data): wait(driver).until(EC.presence_of_element_located(selector)).send_keys(data) def text(e): return e.text if e.text else e.get_attribute("textContent") with webdriver.Chrome(options) as driver: driver.get(url) send(driver, (By.ID, "chaveAcesso"), access_key) click(driver, (By.ID, "btnPesquisar")) click(driver, (By.CSS_SELECTOR, "button.btn-view-det")) click(driver, (By.ID, "tab_3")) # Now scrape the page. For example, print the text for a selection of labels selector = By.CSS_SELECTOR, "tr.col-4 td label" for label in wait(driver).until(EC.presence_of_all_elements_located(selector)): print(text(label)) Output: Base de Cálculo ICMS Valor do ICMS Valor do ICMS Desonerado Valor Total do FCP Código do Produto Código NCM Código CEST
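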
1
2
79,162,280
2024-11-6
https://stackoverflow.com/questions/79162280/python-3-13-generic-classes-with-type-parameters-and-inheritance
I'm exploring types in Python 3.13 and can't get the generic typing hints as strict as I would like. The code below defines a generic Predicate class, two concrete subclasses, and a generic negation predicate. class Predicate[T](Callable[[T], bool]): """ Base class for predicates: a function that takes a 'T' and evaluates to True or False. """ def __init__(self, _eval: Callable[[T], bool]): self.eval = _eval def __call__(self, *args, **kwargs) -> bool: return self.eval(*args, **kwargs) class StartsWith(Predicate[str]): def __init__(self, prefix: str): super().__init__(lambda s: s.startswith(prefix)) class GreaterThan(Predicate[float]): def __init__(self, y: float): super().__init__(lambda x: x > y) class Not[T](Predicate[T]): def __init__(self, p: Predicate[T]): super().__init__(lambda x: not p(x)) if __name__ == '__main__': assert StartsWith("F")("Foo") assert GreaterThan(10)(42) assert Not(StartsWith("A"))("Foo") assert Not(GreaterThan(10))(3) This results in an error: Traceback (most recent call last): File "[...]/generics_demo.py", line 36, in <module> class StartsWith(Predicate[str]): ~~~~~~~~~^^^^^ File "<frozen _collections_abc>", line 475, in __new__ TypeError: Callable must be used as Callable[[arg, ...], result]. When using class StartsWith(Predicate): (i.e. any predicate) it works, but that is too loosely defined to my taste. Any hints on how to go about this?
If you change your Predicate definition to:

class Predicate[T]:

it works. I think this is because the "new style" generics inherit from the old Generic class for backward compatibility, which means that internally your class looks something like this:

class Predicate(Callable[[T], bool], Generic[T]):

And thanks to the multiple inheritance logic, the typing.Callable generic logic overrides your custom Generic logic. This means that if you do not change your Predicate class definition, you have to change the inheritance from Predicate[str] to Predicate[[str], bool] and so on, because Callable expects a list of argument types followed by a single return type.
And just as a side note: you do not have to inherit from Callable, it is only useful for duck-typed type hints. That is, you need Callable only for declaring which kind of function you expect, not for declaring which kind of function you are. So if you want your Predicate class to be usable where a Callable[[T], bool] is expected, you type-hint the callable you accept in that way instead of inheriting from Callable. You would change your class to this:

class Predicate[T]:
    """
    Base class for predicates: a function that takes a 'T' and evaluates to True or False.
    """

    def __init__(self, _eval: Callable[[T], bool]):
        self.eval = _eval

    def __call__(self, t: T) -> bool:
        return self.eval(t)

And another "improvement" would be a more generic way for the Not class:

class Not[T](Predicate[T]):
    def __init__(self, p: Callable[[T], bool]):
        super().__init__(lambda x: not p(x))

if __name__ == '__main__':
    assert Not(StartsWith("A"))("Foo")
    assert Not(GreaterThan(10))(3)
    assert Not(lambda x: False)(4)

Because Not now expects any callable that behaves like a predicate rather than a Predicate instance, the plain lambda function is also accepted. But this is only a side note; ignore it if you have a special use case.
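As an additional hedged sketch (my own, not from the answer above; the names PredicateLike and negate are made up): if the point of the Callable base class was only to say "this can be called like a T -> bool function", a callback Protocol can express that without inheriting from Callable at all:

from typing import Protocol

class PredicateLike[T](Protocol):
    """Anything callable as (T) -> bool: Predicate instances, plain functions, lambdas."""
    def __call__(self, value: T, /) -> bool: ...

def negate[T](p: PredicateLike[T]) -> PredicateLike[T]:
    # a lambda satisfies the protocol structurally; worth confirming with your type checker
    return lambda value: not p(value)

assert negate(lambda s: s.startswith("A"))("Foo")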
2
2
79,164,771
2024-11-7
https://stackoverflow.com/questions/79164771/extract-multiple-sparsely-packed-responses-to-yes-no-identifiers-while-preservin
I have some data from Google Sheets that has a multi-response question, like so: Q1 Q2 ... Multi-Response 0 ... ... ... "A; B" 1 ... ... ... "B; C" 2 ... ... ... "D; F" 3 ... ... ... "A; B; F" (Note the whitespace, the separator is '; ' for weird workaround reasons with the way the survey writer wrote the questions and how Google Sheets chose to output the response table) I'm trying to expand this, so I can do some k-modes clustering on it: Q1 Q2 ... A B C D F 0 ... ... ... 1 1 0 0 0 1 ... ... ... 0 1 1 0 0 2 ... ... ... 0 0 0 1 1 3 ... ... ... 1 1 0 0 1 The idea is more or less mapping each response list to a series of "do you agree? yes/no" questions. But I can't quite figure out how to transform the dataframe to that format. I tried to use pivot_table and get_dummies, but if it can do this, it's not clear to me exactly how it works. I can get a table of responses with multi_selection_question = data.keys()[-1] expanded = data[multi_selection_question].str.split('; ', expand=True) which yields something like 0 1 2 0 A B None 1 B C None 2 D F None 3 A B F And a list of questions that would be the proper column names with: questions = pandas.Series(expanded.values.flatten()).unique() But the examples for pivot_table or get_dummies that I've seen seem to require data in a different format with a more consistent column structure than what this outputs. Using get_dummies for instance makes a separate category for each (column,question) pair, so for the example table above - 2_F, 3_F, 1_B, 2_B etc. Of course I could just resort to a couple loops and build up a new dataframe row-by-row and concat it, but usually there's a better way in pandas.
Use str.get_dummies with sep='; ': out = (df.drop(columns='Multi-Response') .join(df['Multi-Response'].str.get_dummies(sep='; ')) ) Since the separator must be a fixed string, if you have a variable number of spaces in the input, you should pre-process with str.replace: out = (df.drop(columns='Multi-Response') .join(df['Multi-Response'] .str.replace('; *', '|', regex=True) .str.get_dummies()) ) Output: Q1 Q2 A B C D F 0 ... ... 1 1 0 0 0 1 ... ... 0 1 1 0 0 2 ... ... 0 0 0 1 1 3 ... ... 1 1 0 0 1 Multiple answers If you have multiple answer columns like: Q1 Q2 A1 A2 0 ... ... A; B A; E 1 ... ... B; C B 2 ... ... D; F D; E; F 3 ... ... A; B; F C Then, process all the columns: cols = df.columns[df.columns.str.startswith('A')] # ['A1', 'A2'] out = (df.drop(columns=cols) .join(pd.concat([df[c].str.get_dummies('; ') .add_prefix(f'{c}_') for c in cols], axis=1) ) ) Output: Q1 Q2 A1_A A1_B A1_C A1_D A1_F A2_A A2_B A2_C A2_D A2_E A2_F 0 ... ... 1 1 0 0 0 1 0 0 0 1 0 1 ... ... 0 1 1 0 0 0 1 0 0 0 0 2 ... ... 0 0 0 1 1 0 0 0 1 1 1 3 ... ... 1 1 0 0 1 0 0 1 0 0 0 Variant with enumerate to auto-increment the prefix: cols = df.columns[df.columns.str.startswith('A')] # ['A1', 'A2'] out = (df.drop(columns=cols) .join(pd.concat([df[c].str.get_dummies('; ') .add_prefix(f'{i}_') for i, c in enumerate(cols, start=1)], axis=1) ) ) Output: Q1 Q2 1_A 1_B 1_C 1_D 1_F 2_A 2_B 2_C 2_D 2_E 2_F 0 ... ... 1 1 0 0 0 1 0 0 0 1 0 1 ... ... 0 1 1 0 0 0 1 0 0 0 0 2 ... ... 0 0 0 1 1 0 0 0 1 1 1 3 ... ... 1 1 0 0 1 0 0 1 0 0 0
2
2
79,165,399
2024-11-7
https://stackoverflow.com/questions/79165399/reading-c-struct-dumped-into-a-file-into-python-dataclass
Is there a consistent way of reading a C struct from a file back into a Python dataclass? E.g. I have this C struct

struct boh {
  uint32_t a;
  int8_t b;
  boolean c;
};

And I want to read its data into its own Python dataclass

@dataclass
class Boh:
    a: int
    b: int
    c: bool

Is there a way to decorate this class to make this somehow possible?
Note: so far I've been reading bytes and offsetting them manually into the class when I'm creating it; I would like to know if there is a better option.
Since you already have the data read as bytes, you can use struct.unpack to unpack the bytes into a tuple, which can then be unpacked as arguments to your data class contructor. The formatting characters for struct.unpack can be found here, where L denotes an unsigned long, b denotes a signed char, and ? denotes a boolean: import struct from dataclasses import dataclass @dataclass class Boh: a: int b: int c: bool data = bytes((0xff, 0xff, 0xff, 0xff, 0x7f, 1)) print(Boh(*struct.unpack('Lb?', data))) This outputs: Boh(a=4294967295, b=127, c=True)
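One portability note as a hedged follow-up (my own addition): with the native format characters, 'L' is the platform's unsigned long, which is 8 bytes on most 64-bit Unix builds, so the 6-byte example above may not unpack everywhere. Using standard sizes pins the layout to the C types in the question:

import struct

fmt = "<Ib?"                      # standard sizes: I = uint32, b = int8, ? = bool; '<' assumes little-endian data
print(struct.calcsize(fmt))       # 6
a, b, c = struct.unpack(fmt, bytes((0xff, 0xff, 0xff, 0xff, 0x7f, 1)))
print(Boh(a, b, c))               # assumes the Boh dataclass from above
# Note: a real C compiler will usually pad struct boh to 8 bytes, so bytes dumped
# straight from C may need explicit pad bytes in the format, e.g. "<Ib?2x".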
2
1
79,165,506
2024-11-7
https://stackoverflow.com/questions/79165506/expand-list-of-struct-column-in-polars
I have a pl.DataFrame with a column that is a list of struct entries. The lengths of the lists might differ: pl.DataFrame( { "id": [1, 2, 3], "s": [ [ {"a": 1, "b": 1}, {"a": 2, "b": 2}, {"a": 3, "b": 3}, ], [ {"a": 10, "b": 10}, {"a": 20, "b": 20}, {"a": 30, "b": 30}, {"a": 40, "b": 40}, ], [ {"a": 100, "b": 100}, {"a": 200, "b": 200}, {"a": 300, "b": 300}, {"a": 400, "b": 400}, {"a": 500, "b": 500}, ], ], } ) This looks like this: shape: (3, 2) ┌─────┬─────────────────────────────────┐ │ id ┆ s │ │ --- ┆ --- │ │ i64 ┆ list[struct[2]] │ ╞═════╪═════════════════════════════════╡ │ 1 ┆ [{1,1}, {2,2}, {3,3}] │ │ 2 ┆ [{10,10}, {20,20}, … {40,40}] │ │ 3 ┆ [{100,100}, {200,200}, … {500,… │ └─────┴─────────────────────────────────┘ I've tried various versions of unnest and explode, but I am failing to turn this into a long pl.DataFrame where the list is turned into rows and the struct entries into columns. This is what I want to see: pl.DataFrame( { "id": [1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3], "a": [1, 2, 3, 10, 20, 30, 40, 100, 200, 300, 400, 500], "b": [1, 2, 3, 10, 20, 30, 40, 100, 200, 300, 400, 500], } ) Which looks like this: shape: (12, 3) ┌─────┬─────┬─────┐ │ id ┆ a ┆ b │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╡ │ 1 ┆ 1 ┆ 1 │ │ 1 ┆ 2 ┆ 2 │ │ 1 ┆ 3 ┆ 3 │ │ 2 ┆ 10 ┆ 10 │ │ 2 ┆ 20 ┆ 20 │ │ … ┆ … ┆ … │ │ 3 ┆ 100 ┆ 100 │ │ 3 ┆ 200 ┆ 200 │ │ 3 ┆ 300 ┆ 300 │ │ 3 ┆ 400 ┆ 400 │ │ 3 ┆ 500 ┆ 500 │ └─────┴─────┴─────┘ Is there a way to manipulate the first pl.DataFrame into the second pl.DataFrame?
First explode, then unnest: df.explode('s').unnest('s') Output: ┌─────┬─────┬─────┐ │ id ┆ a ┆ b │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╡ │ 1 ┆ 1 ┆ 1 │ │ 1 ┆ 2 ┆ 2 │ │ 1 ┆ 3 ┆ 3 │ │ 2 ┆ 10 ┆ 10 │ │ 2 ┆ 20 ┆ 20 │ │ … ┆ … ┆ … │ │ 3 ┆ 100 ┆ 100 │ │ 3 ┆ 200 ┆ 200 │ │ 3 ┆ 300 ┆ 300 │ │ 3 ┆ 400 ┆ 400 │ │ 3 ┆ 500 ┆ 500 │ └─────┴─────┴─────┘
4
2
79,163,896
2024-11-6
https://stackoverflow.com/questions/79163896/how-to-detect-task-cancellation-by-task-group
Given a taskgroup and number of running tasks, per taskgroup docs if any of the tasks raises an error, rest of the tasks in group will be cancelled. If some of these tasks need to perform cleanup upon cancellation, then how would one go about detecting within the task it's being cancelled? Was hoping some exception is raised in the task, but that's not the case: script.py: import asyncio class TerminateTaskGroup(Exception): """Exception raised to terminate a task group.""" async def task_that_needs_to_cleanup_on_cancellation(): try: await asyncio.sleep(10) except Exception: print('exception caught, performing cleanup...') async def err_producing_task(): await asyncio.sleep(1) raise TerminateTaskGroup() async def main(): try: async with asyncio.TaskGroup() as tg: tg.create_task(task_that_needs_to_cleanup_on_cancellation()) tg.create_task(err_producing_task()) except* TerminateTaskGroup: print('main() termination handled') asyncio.run(main()) Executing, we can see no exception is raised in task_that_needs_to_cleanup_on_cancellation(): $ python3 script.py main() termination handled
Casually, I might avoid a design where a task group is intentionally cancelled in this way:

- the order and timing of the Exception may be difficult to predict
- potentially-awkward cleanup
- new work created during cancellation

However, you can except asyncio.CancelledError or use a finally block

async def task_that_needs_to_cleanup_on_cancellation():
    try:
        await asyncio.sleep(10)  # placeholder for work
    except asyncio.CancelledError:
        print('task cancelled, performing cleanup...')
        raise  # end without running later logic

In order to avoid this cleanup-during-cancellation, you could pack potential cleanup work into a collection for the caller to handle in its Exception handler or later.
Both designs feel a little ugly, but they are also very specific to the individual cleanup case, as I believe the Exception could occur during any await within them

async def task_1(task_id, cleanups):
    cleanups[task_id] = "args for later work"
    await asyncio.sleep(10)  # placeholder for work
    del cleanups[task_id]

async def task_2(task_id, cleanups):
    try:
        await asyncio.sleep(10)  # placeholder for work
    except asyncio.CancelledError:
        # consider list instead
        cleanups[task_id] = "args for later work"
        raise

...
cleanups = {}
try:
    # task group logic
except* TerminateTaskGroup:
    print('main() termination handled')
if cleanups:
    # probably create and asyncio.gather()

Note that the docs also mention that CancelledError should be re-raised, but it may not matter if there is no further logic!
Additionally, when asyncio was added in Python 3.7, asyncio.CancelledError inherited from Exception - this was changed to BaseException in 3.8 (and is the case in all later versions)
https://docs.python.org/3/library/asyncio-exceptions.html
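A minimal runnable variant of the finally approach (my own hedged sketch, Python 3.11+, using RuntimeError instead of the question's TerminateTaskGroup):

import asyncio

async def task_with_cleanup():
    try:
        await asyncio.sleep(10)   # placeholder for work
    finally:
        # runs on normal completion, on errors, and when the TaskGroup cancels this task
        print('cleanup ran')

async def err_producing_task():
    await asyncio.sleep(1)
    raise RuntimeError('boom')

async def main():
    try:
        async with asyncio.TaskGroup() as tg:
            tg.create_task(task_with_cleanup())
            tg.create_task(err_producing_task())
    except* RuntimeError:
        print('main() termination handled')

asyncio.run(main())   # prints 'cleanup ran' then 'main() termination handled'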
3
1
79,164,737
2024-11-7
https://stackoverflow.com/questions/79164737/how-to-make-a-django-url-case-insensitive
For example, if I visit http://localhost:8000/detail/PayPal I get a Page not found error 404 with the following message: Using the URLconf ... Django tried these URL patterns, in this order: ... detail/<slug:slug> [name='processor_detail'] The current path, detail/PayPal, matched the last one. Here is my code: views.py: class ProcessorDetailView(DetailView): model = Processor template_name = 'finder/processor_detail.html' slug_field = 'slug' # Tell DetailView to use the `slug` model field as the DetailView slug slug_url_kwarg = 'slug' # Match the URL parameter name models.py: class Processor(models.Model): #the newly created database model and below are the fields name = models.CharField(max_length=250, blank=True, null=True) #textField used for larger strings, CharField, smaller slug = models.SlugField(max_length=250, blank=True) ... def __str__(self): #displays some of the template information instead of 'Processot object' if self.name: return self.name[0:20] else: return '--no processor name listed--' def get_absolute_url(self): # new return reverse("processor_detail", args=[str(self.slug)]) def save(self, *args, **kwargs): #`save` model a certain way(detailed in rest of function below) if not self.slug: #if there is no value in `slug` field then... self.slug = slugify(self.name) #...save a slugified `name` field value as the value in `slug` field super().save(*args, **kwargs) urls.py: path("detail/<slug:slug>", views.ProcessorDetailView.as_view(), name='processor_detail') I want that if I follow a link it either 1. doesn't matter what case I use or 2. the case in the browser url window changes to all lowercase.
You're getting 404, not because urls.py couldn't find a match, but because ProcessorDetailView couldn't find a slug named "PayPal", even if "paypal" was in the database. So the problem isn't with urls.py, it's with the view trying to look for the slug you've specified. After some research, it turned out that you could make the model lookup case insensitive by using __iexact Here's the modified views.py: from django.views.generic.detail import DetailView from .models import Processor from django.http import Http404 class ProcessorDetailView(DetailView): def get_object(self, queryset=None): if queryset is None: queryset = self.get_queryset() slug = self.kwargs.get(self.slug_url_kwarg) if slug: slug_field = self.get_slug_field() queryset = queryset.filter(**{slug_field + '__iexact': slug}) try: obj = queryset.get() except queryset.model.DoesNotExist: raise Http404("No %(verbose_name)s found matching the query" % {'verbose_name': queryset.model._meta.verbose_name}) return obj model = Processor template_name = 'finder/processor_detail.html' slug_field = 'slug' # Tell DetailView to use the `slug` model field as the DetailView slug slug_url_kwarg = 'slug' # Match the URL parameter name Here we just overridden the get_object function, which results in modifying the queryset to be case insensitive. Check this out for more info on overriding get_object: Django docs manually setting PK for get_object
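A shorter variant of the same idea (my own hedged sketch, assuming the model and URL pattern from the question) is to let get_object_or_404 do the case-insensitive lookup:

from django.shortcuts import get_object_or_404
from django.views.generic import DetailView
from .models import Processor

class ProcessorDetailView(DetailView):
    model = Processor
    template_name = 'finder/processor_detail.html'

    def get_object(self, queryset=None):
        # case-insensitive match on the slug URL kwarg; raises Http404 if nothing matches
        return get_object_or_404(Processor, slug__iexact=self.kwargs['slug'])

If you also want the browser URL to end up lowercase (your option 2), you could instead compare self.kwargs['slug'] to the stored slug and return a redirect to the canonical URL when they differ.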
2
2
79,164,756
2024-11-7
https://stackoverflow.com/questions/79164756/remove-specific-indices-in-each-row-of-a-numpy-ndarray
I have integer arrays of the type: import numpy as np seed_idx = np.asarray([[0, 1], [1, 2], [2, 3], [3, 4]], dtype=np.int_) target_idx = np.asarray([[2,9,4,1,8], [9,7,6,2,4], [1,0,0,4,9], [7,1,2,3,8]], dtype=np.int_) For each row of target_idx, I want to select the elements whose indices are not the ones in seed_idx. The resulting array should thus be: [[4,1,8], [9,2,4], [1,0,9], [7,1,2]] In other words, I want to do something similar to np.take_along_axis(target_idx, seed_idx, axis=1), but excluding the indices instead of keeping them. What is the most elegant way to do this? I find it surprisingly annoying to find something neat.
You can mask out the values you don't want with np.put_along_axis and then index the others: >>> np.put_along_axis(target_idx, seed_idx, -1, axis=1) >>> target_idx[np.where(target_idx != -1)].reshape(len(target_idx), -1) array([[4, 1, 8], [9, 2, 4], [1, 0, 9], [7, 1, 2]]) If -1 is a valid value, use target_idx.min() - 1.
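A variation on the same put_along_axis idea (my own addition) that needs no sentinel value and leaves target_idx unmodified is to build a boolean keep-mask instead:

import numpy as np

seed_idx = np.asarray([[0, 1], [1, 2], [2, 3], [3, 4]])
target_idx = np.asarray([[2, 9, 4, 1, 8], [9, 7, 6, 2, 4], [1, 0, 0, 4, 9], [7, 1, 2, 3, 8]])

keep = np.ones(target_idx.shape, dtype=bool)
np.put_along_axis(keep, seed_idx, False, axis=1)   # mark the excluded positions
out = target_idx[keep].reshape(target_idx.shape[0], -1)
# [[4 1 8]
#  [9 2 4]
#  [1 0 9]
#  [7 1 2]]

Like the answer above, this assumes every row excludes the same number of indices, so the result stays rectangular.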
1
1
79,164,352
2024-11-6
https://stackoverflow.com/questions/79164352/polars-arg-unique-for-list-column
How can I obtain the (first occurence) indices of unique elements for a column of type list in polars dataframe? I am looking for something similar to arg_unique, but that only exists for pl.Series, such as to be performed over a whole column. I need this to work one level below that, so on every list that is inside the column. Given the dataframe df = pl.DataFrame({ "fruits": [["apple", "banana", "apple", "orange"], ["grape", "apple", "grape"], ["kiwi", "mango", "kiwi"]] }) I expect the output to be df = pl.DataFrame({ "fruits": [[0, 1, 3], [0, 1], [0, 1]] })
.list.eval() can be used as a fallback when there is no specific .list.* method currently implemented. df.with_columns( pl.col("fruits").list.eval(pl.element().arg_unique()).alias("idxs") ) shape: (3, 2) ┌────────────────────────────────────────┬───────────┐ │ fruits ┆ idxs │ │ --- ┆ --- │ │ list[str] ┆ list[u32] │ ╞════════════════════════════════════════╪═══════════╡ │ ["apple", "banana", "apple", "orange"] ┆ [0, 1, 3] │ │ ["grape", "apple", "grape"] ┆ [0, 1] │ │ ["kiwi", "mango", "kiwi"] ┆ [0, 1] │ └────────────────────────────────────────┴───────────┘
2
2
79,163,783
2024-11-6
https://stackoverflow.com/questions/79163783/failed-to-find-out-the-source-of-a-certain-portion-of-a-link
I've created a script in python to scrape certain fields from a webpage. When I use this link in the script, it produces all the data in json format and I can parse it accordingly. import requests link = 'https://api-emis.kemenag.go.id/v1/institutions/pontren/public/identity/K1hOenRreVRmaWYwSGVzcERWVFpjZz09' headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36' } with requests.Session() as s: s.headers.update(headers) res = s.get(link) container = res.json()['results'] account_bank_holder = container['account_bank_holder'] organizer_name = container['organizer_name'] print((account_bank_holder,organizer_name)) However, the problem is that I can't figure out where this portion K1hOenRreVRmaWYwSGVzcERWVFpjZz09 at the end of the link is coming from. This is how I found the link: Navigate to this website Select 2023/2024 Genap Click the first item in the table: ACEH Then, click the first item in the new table: ACEH SELATAN Finally, click the first item in the new table: 500311010057 You should have the link by now, ending with the portion I mentioned above. Question: Where is the portion similar to this K1hOenRreVRmaWYwSGVzcERWVFpjZz09 at the end of the final link coming from?
So, developer has used aes encryption along with base64 encoding. import base64 from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes from cryptography.hazmat.backends import default_backend from cryptography.hazmat.primitives import padding def aes_encrypt_cbc(plaintext: str) -> str: secret_key = b'a2c36eb2w1em50d6665dc5d61a68b400' base64_iv = "ZW1pc0Jhc2U2NElWa2V5cw==" iv = base64.b64decode(base64_iv) cipher = Cipher(algorithms.AES(secret_key), modes.CBC(iv), backend=default_backend()) encryptor = cipher.encryptor() padder = padding.PKCS7(128).padder() padded_plaintext = padder.update(plaintext.encode()) + padder.finalize() ciphertext = encryptor.update(padded_plaintext) + encryptor.finalize() return base64.b64encode(base64.b64encode(ciphertext).decode().encode()).decode() institution_id = "175974" encrypted_text = aes_encrypt_cbc(institution_id) print("Encrypted: ", encrypted_text) Note: Initialization Vector and secret key may change in future.!
2
4
79,163,041
2024-11-6
https://stackoverflow.com/questions/79163041/text-inside-a-pygame-gui-textbox-disappears-after-changing-it-pygame-gui-libra
I am using the pygame_gui library. I am trying to make a textbox that when pressed enter it will print the text inside the box to the console and reset it (so the textbox will be empty). It does indeed resets the textbox but for some reason the textbox doesnt get any new input until I click somewhere in the background and click again on the textbox and only then it shows the text it gets. The code: import pygame_gui, pygame pygame.init() screen = pygame.display.set_mode((500,500)) clock = pygame.time.Clock() manager = pygame_gui.UIManager((500,500)) manager.set_window_resolution((500,500)) text_box = pygame_gui.elements.UITextEntryLine(relative_rect=pygame.Rect(200,200,100,50), manager=manager, object_id='#text') run=True while run: UI_refresh = clock.tick(30)/1000 for event in pygame.event.get(): if event.type==pygame.QUIT: run=False elif event.type==pygame_gui.UI_TEXT_ENTRY_FINISHED and event.ui_object_id == '#text': print(text_box.text) text_box.set_text('') #here the text of the texbox is reset. manager.process_events(event) manager.update(UI_refresh) screen.fill('white') manager.draw_ui(screen) pygame.display.update() I tried instead of reseting the textbox to set the text of it to some other value like 'A' (I changed the line text_box.set_text('') to text_box.set_text('A') ) but the textbox was still shown to be empty for some reason and I had the same problem. Plus I should mention that when setting the text it does show the changes for a bit less than a second before it disappears until I click somewhere on the screen and again on the textbox to show the changes.
I have gone through the source and I found out that if we add the line text_box.redraw() right after the set_text it fixes the problem, so the updated code is: import pygame_gui, pygame pygame.init() screen = pygame.display.set_mode((500,500)) clock = pygame.time.Clock() manager = pygame_gui.UIManager((500,500)) manager.set_window_resolution((500,500)) text_box = pygame_gui.elements.UITextEntryLine(relative_rect=pygame.Rect(200,200,100,50), manager=manager, object_id='#text') run=True while run: UI_refresh = clock.tick(30)/1000 for event in pygame.event.get(): if event.type==pygame.QUIT: run=False elif event.type==pygame_gui.UI_TEXT_ENTRY_FINISHED and event.ui_object_id == '#text': print(text_box.text) text_box.set_text('') #here the text of the texbox is reset. text_box.redraw() #this line fixes the problem. manager.process_events(event) manager.update(UI_refresh) screen.fill('white') manager.draw_ui(screen) pygame.display.update()
3
2
79,163,372
2024-11-6
https://stackoverflow.com/questions/79163372/python-equivalent-of-the-perl-flip-flop-operator
What is the Python equivalent of the Perl ".." (range, or flip-flop) operator? for ( qw( foo bar barbar baz bazbaz bletch ) ) { print "$_\n" if /ar.a/ .. /az\w/; } Output: barbar baz bazbaz The Python workaround that I am aware of includes generator expression and indexing with the help of enumerate, but this seems cumbersome: import re lst = 'foo bar barbar baz bazbaz bletch'.split() idx_from = list(i for i, el in enumerate(lst) if re.search(r'ar.a', el))[0] idx_to = list(i for i, el in enumerate(lst) if re.search(r'az\w', el))[0] lst_subset = lst[ idx_from : (idx_to+1)] print(lst_subset) # ['barbar', 'baz', 'bazbaz'] Note: I am looking for just one range. There is currently no need to have multiple ranges.
When the operands aren't simple numbers, EXPR1 .. EXPR2 in scalar context is equivalent to following (except for the scope created by do { }): do { state $hidden_state = 0; if ( $hidden_state ) { ++$hidden_state; } else { $hidden_state = 1 if EXPR1; } my $rv = $hidden_state; # Or `$hidden_state > 1 && EXPR2` for `...`. if ( $hidden_state && EXPR2 ) { $rv .= "E0"; $hidden_state = 0; } $rv } Since you only care whether the flip-flop returns true or false, the above simplifies to the following: do { state $hidden_state = false; $hidden_state ||= EXPR1; my $rv = $hidden_state; $hidden_state &&= EXPR2; $rv } Now we have to translate this. Since the flip-flip is usually used as a generator, that's what I'll create. def flipflop( enumerable, start_cond, end_cond ): state = False for val in enumerable: if not state: state = start_cond( val ) if state: yield val if state: state = end_cond( val ) import re lst = 'foo bar barbar baz bazbaz bletch'.split() for x in flipflop( lst, lambda v: re.search( r'ar.a', v ), lambda v: re.search( r'az\w', v ) ): print( x )
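Since the question only needs a single range, a simpler one-shot generator is enough (my own hedged sketch; the name flipflop_once is made up):

import re

def flipflop_once(iterable, start_cond, end_cond):
    it = iter(iterable)
    for val in it:                 # scan for the first element matching the start condition
        if start_cond(val):
            yield val
            if end_cond(val):
                return             # start and end matched on the same element
            break
    for val in it:                 # emit until the end condition matches (inclusive)
        yield val
        if end_cond(val):
            return

lst = 'foo bar barbar baz bazbaz bletch'.split()
print(list(flipflop_once(lst,
                         lambda v: re.search(r'ar.a', v),
                         lambda v: re.search(r'az\w', v))))
# ['barbar', 'baz', 'bazbaz']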
2
2
79,162,500
2024-11-6
https://stackoverflow.com/questions/79162500/gaps-in-a-matplotlib-plot-of-categorical-data
When I have numerical data, say index by some kind of time, it is straightforward to plot gaps in the data. For instance, if I have values at times 1, 2, 3, 5, 6, 7, I can set an np.nan at time 4 to break up the plot. import numpy as np import matplotlib.pyplot as plt x = [1, 2, 3, 4, 5, 6, 7] y = [10, 20, 30, np.nan, 10, 20, 30] plt.plot(x, y) plt.show() plt.close() That sure beats the alternative of just skipping time 4! import numpy as np import matplotlib.pyplot as plt x = [1, 2, 3, 5, 6, 7] y = [10, 20, 30, 10, 20, 30] plt.plot(x, y) plt.show() plt.close() However, I now have a y variable that is categorical. Mostly, the plotting is straightforward: just use the categories as the y. import numpy as np import matplotlib.pyplot as plt x = [1, 2, 3, 5, 6, 7] y = ["cat", "cat", "dog", "dog", "cat", "cat"] plt.plot(x, y) plt.show() plt.close() This puts the categories on the y-axis, just as I want. However, when I do my np.nan trick to get the gap, I get a point plotted at np.nan. import numpy as np import matplotlib.pyplot as plt x = [1, 2, 3, 4, 5, 6, 7] y = ["cat", "cat", "dog", np.nan, "dog", "cat", "cat"] plt.plot(x, y) plt.show() plt.close() How can I get my plots to go cat cat dog on 1, 2, 3, and then dog cat cat on 5, 6, 7, leaving a gap at 4?
To create a gap in a categorical plot handle np.nan differently asmatplotlib doesn’t natively interpret np.nan in categorical contexts (Gets converted to 'nan' string.). Solution: Splitting the data and then plot each segment separately. x = [1, 2, 3, 5, 6, 7] y = ["cat", "cat", "dog", "dog", "cat", "cat"] # Plot each segment separately to introduce a gap plt.plot(x[:3], y[:3]) plt.plot(x[3:], y[3:]) Alternative using Masks: x = [1, 2, 3, 4, 5, 6, 7] y = ["cat", "cat", "dog", "gap", "dog", "cat", "cat"] # Convert categorical data to numbers using a mapping categories = ["cat", "dog"] y_numeric = [categories.index(val) if val in categories else np.nan for val in y] y_masked = np.ma.masked_where(np.isnan(y_numeric), y_numeric) plt.plot(x, y_masked) plt.yticks(range(len(categories)), categories) This allows gaps without splitting anything.
1
3
79,162,993
2024-11-6
https://stackoverflow.com/questions/79162993/how-to-select-column-range-based-on-partial-column-names-in-pandas
I have pandas dataframe and I am trying to select multiple columns (column range starting from Test to Bio Ref). Selection has to start from column Test to any column whose name starts with Bio. Below is the sample dataframe. In reality it can contain: any number of columns before Test column, any number of columns between Test & Bio Ref like 2,3,4,5 etc. any number of columns after Bio Ref. Bio Ref column can contain suffix in it but Bio Ref will be there as start of column name always. df_chunk = pd.DataFrame({ 'Waste':[None,None], 'Test':['something', 'something'], '2':[None,None], '3':[None,None], 'Bio Ref':['2-50','15-100'], 'None':[None,None]}) df_chunk Waste Test 2 3 Bio Ref None 0 None something None None 2-50 None 1 None something None None 15-100 None I have tried below codes that work: df_chunk.columns.str.startswith('Bio') df_chunk[df_chunk.columns[pd.Series(df_chunk.columns).str.startswith('Bio')==1]] Issue: But when I try to use them for multiple column Selection then it doesn't work: df_chunk.loc[:, 'Test':df_chunk.columns.str.startswith('Bio')]
You can create masks for boolean indexing:

m1 = np.maximum.accumulate(df_chunk.columns=='Test')
# array([False,  True,  True,  True,  True,  True])
m2 = np.maximum.accumulate(df_chunk.columns.str.startswith('Bio')[::-1])[::-1]
# array([ True,  True,  True,  True,  True, False])

# m1 & m2
# array([False,  True,  True,  True,  True, False])

out = df_chunk.loc[:, (m1&m2)]

Or identify the correct names to build a slice:

start = 'Test'
end = next(iter(df_chunk.columns[df_chunk.columns.str.startswith('Bio')]), None)

out = df_chunk.loc[:, slice(start, end)]

Output:

        Test     2     3 Bio Ref
0  something  None  None    2-50
1  something  None  None  15-100
2
2
79,162,666
2024-11-6
https://stackoverflow.com/questions/79162666/how-to-type-polars-altair-plots-in-python
I use polars dataframes (new to this module) and I'm using some static typing, to keep my code tidy and clean for debugging purposes, and to allow auto-completion of methods and attributes on my editor. Everything goes well. However, when plotting things from dataframes with the altair API, as shown in the doc, I am unable to find the type of the returned object in polars. import polars as pl import typing as tp data = {"a": [0,0,0,0,1,1,1,2,2,3]} df = pl.DataFrame(data) def my_plot(df: pl.DataFrame, col: str) -> tp.Any: """Plot an histogram of the distribution of values of df[col]""" return df[col].value_counts( ).plot.bar( y="count", x=col ).properties( width=400, ) u = my_plot(df, "a") u.show() How do I type the output of this function? The doc states the output of (...).plot is a DataFramePlot object but there is no info on this type, and anyway I'm using the output of (...).plot.bar(...) which has a different type. If I run type(u), I get altair.vegalite.v5.api.Chart but it seems sketchy to use this for static typing, and I don't want to import altair in my code as the altair methods I need are already included in polars. I couldn't find any info about this so any help is welcome Thanks!
You can do from __future__ import annotations from typing import TYPE_CHECKING if TYPE_CHECKING: import altair as alt def my_plot(df: pl.DataFrame, col: str) -> alt.Chart:
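Filled out with the body from the question (a hedged sketch of how the pieces fit together; with from __future__ import annotations the alt.Chart annotation is only evaluated by the type checker, so this module never imports altair directly at runtime - Polars pulls it in internally when .plot is used):

from __future__ import annotations

from typing import TYPE_CHECKING

import polars as pl

if TYPE_CHECKING:
    import altair as alt

def my_plot(df: pl.DataFrame, col: str) -> alt.Chart:
    """Plot a histogram of the distribution of values of df[col]."""
    return df[col].value_counts().plot.bar(y="count", x=col).properties(width=400)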
4
4
79,162,352
2024-11-6
https://stackoverflow.com/questions/79162352/how-to-filter-a-dataframe-with-different-conditions-for-each-column
I have a DataFrame where for each column I only want to show specific values based on the index, but these conditions are different for each column. This is what it looks like: data = {'a': [1,2,3,4,5], 'b': [10,20,30,40,50], 'c': [1,1,1,1,1]} df = pd.DataFrame(data) df: a b c 0 1 10 1 1 2 20 1 2 3 30 1 3 4 40 1 4 5 50 1 I now want to take values where index <3 for 'a', index <2 for 'b' and index = 4 for 'c'. Resulting in: a b c 0 1.0 10.0 NaN 1 2.0 20.0 NaN 2 3.0 NaN NaN 4 NaN NaN 1.0 I tried the following: import pandas as pd df_a = df.loc[df.index<3, 'a'] df_b = df.loc[df.index<2, 'b'] df_c = df.loc[df.index==4, 'c'] df_result = pd.concat([df_a, df_b, df_c], axis=1)``` Which gives the desired result, but is there a more efficient way to do this? So if I would have a list for the "<" condition and a list for the "=" condition, then could I create the resulting filter in one go? It is fine if the NaNs become zeros, because that is what I want in the end anyway.
Another possible solution, whose steps are:

- The np.stack function stacks these boolean conds, resulting in a 2D array, which is then transposed to correctly align the mask with the original df.
- The np.where function then evaluates these conds, returning the element of df if True, and NaN otherwise.

conds = [df.index < 3, df.index < 2, df.index == 4]

pd.DataFrame(
    np.where(np.stack(conds).T, df, np.nan),
    columns=df.columns).dropna(how='all')

Or, even simpler, using pandas where as @mozway suggests in a comment below (thanks!):

conds = [df.index < 3, df.index < 2, df.index == 4]

df.where(np.stack(conds).T).dropna(how='all')

Output:

     a     b    c
0  1.0  10.0  NaN
1  2.0  20.0  NaN
2  3.0   NaN  NaN
4  NaN   NaN  1.0
3
3
79,161,804
2024-11-6
https://stackoverflow.com/questions/79161804/what-is-the-best-way-to-filter-the-groups-that-have-at-least-n-rows-that-meets-t
This is my DataFrame: import pandas as pd df = pd.DataFrame({ 'a': [10, 20, 30, 50, 50, 50, 4, 100], 'b': [30, 3, 200, 25, 24, 31, 29, 2], 'd': list('aaabbbcc') }) Expected output: a b d 0 10 30 a 1 20 3 a 2 30 200 a The grouping is by column d. I want to return the groups that have at least two instances of this mask m = (df.b.gt(df.a)) This is what I have tried. It works but I wonder if there is a better/more efficient way to do it. out = df.groupby('d').filter(lambda x: len(x.loc[x.b.gt(x.a)]) >= 2)
With pandas You could use a groupby.transform on the mask with sum to produce a boolean Series: m = df['b'].gt(df['a']) out = df[m.groupby(df['d']).transform('sum').ge(2)] Output: a b d 0 10 30 a 1 20 3 a 2 30 200 a Intermediates: a b d m transform('sum') ge(2) 0 10 30 a True 2 True 1 20 3 a False 2 True 2 30 200 a True 2 True 3 50 25 b False 0 False 4 50 24 b False 0 False 5 50 31 b False 0 False 6 4 29 c True 1 False 7 100 2 c False 1 False Alternative: counts = m.groupby(df['d']).sum() out = df[df['d'].isin(counts.index[counts>=2])] With numpy Alternatively, one could avoid the costly groupby with pure numpy. This first approach with add.reduceat requires the groups to be consecutive: groups = df['d'].ne(df['d'].shift()).values m = df['b'].gt(df['a']).values idx = np.nonzero(groups)[0] out = df[df['d'].isin(df['d'].iloc[idx[np.add.reduceat(m, idx)>=2]])] This second one with pandas.factorize and numpy.bincount would work even with shuffled groups: a, idx = pd.factorize(df['d']) out = df[df['d'].isin(idx[np.bincount(a, weights=m) >= 2])] Intermediates: ## reduceat approach groups = df['d'].ne(df['d'].shift()).values # array([ True, False, False, True, False, False, True, False]) m = df['b'].gt(df['a']).values # array([ True, False, True, False, False, False, True, False]) idx = np.nonzero(groups)[0] # array([0, 3, 6]) np.add.reduceat(m, idx)>=2 # array([ True, False, False]) idx[np.add.reduceat(m, idx)>=2] # array([0]) df['d'].iloc[idx[np.add.reduceat(m, idx)>=2]] # ['a'] df['d'].isin(df['d'].iloc[idx[np.add.reduceat(m, idx)>=2]]) # array([ True, True, True, False, False, False, False, False]) ## bincount approach a, idx = pd.factorize(df['d']) a # array([0, 0, 0, 1, 1, 1, 2, 2]) idx # Index(['a', 'b', 'c'], dtype='object') np.bincount(a, weights=m) # array([2., 0., 1.]) np.bincount(a, weights=m) >= 2 # array([ True, False, False]) idx[np.bincount(a, weights=m) >= 2] # Index(['a'], dtype='object') df['d'].isin(idx[np.bincount(a, weights=m) >= 2]) # array([ True, True, True, False, False, False, False, False]) Timings With groups of 3 rows (sorted members): With groups of 3 rows (shuffled members; NB. excluding reduceat.): With a fixed number of 20 groups (of about equal size) with consecutive members: With a fixed number of 20 groups (of about equal size) with shuffled members (NB. excluding reduceat.):
2
3
79,161,450
2024-11-6
https://stackoverflow.com/questions/79161450/join-where-with-starts-with-in-polars
I have two data frames, df = pl.DataFrame({'url': ['https//abc.com', 'https//abcd.com', 'https//abcd.com/aaa', 'https//abc.com/abcd']}) conditions_df = pl.DataFrame({'url': ['https//abc.com', 'https//abcd.com', 'https//abcd.com/aaa', 'https//abc.com/aaa'], 'category': [['a'], ['b'], ['c'], ['d']]}) now I want a df, for assigning categories to the first df based on first match for the url starts with in second df, that is the output should be, url category https//abc.com ['a'] https//abcd.com ['b'] https//abcd.com/aaa ['b'] - this one starts with https//abcd.com, that is the first match https//abc.com/abcd ['a'] - this one starts with https//abc.com, that is the first match current code which works is like this, def add_category_column(df: pl.DataFrame, conditions_df) -> pl.DataFrame: # Initialize the category column with empty lists df = df.with_columns(pl.Series("category", [[] for _ in range(len(df))], dtype=pl.List(pl.String))) # Apply the conditions to populate the category column for row in conditions_df.iter_rows(): url_start, category = row df = df.with_columns( pl.when( (pl.col("url").str.starts_with(url_start)) & (pl.col("category").list.len() == 0) ) .then(pl.lit(category)) .otherwise(pl.col("category")) .alias("category") ) return df but is there a way to achieve the same without using for loops, could we use join_where here, but in my attempts join_where does not work for starts_with
I would expect pl.DataFrame.join_where() to work, but apparently it doesn't allow pl.Expr.str.starts_with() condition yet - I get only 1 binary comparison allowed as join condition error. So you can use pl.DataFrame.join() and pl.DataFrame.filter() instead: ( df .join(conditions_df, how="cross") .filter(pl.col("url").str.starts_with(pl.col("url_right"))) .sort("url") .group_by("url", maintain_order=True) .agg(pl.col.category.first()) ) shape: (4, 2) ┌─────────────────────┬───────────┐ │ url ┆ category │ │ --- ┆ --- │ │ str ┆ list[str] │ ╞═════════════════════╪═══════════╡ │ https//abc.com ┆ ["a"] │ │ https//abc.com/abcd ┆ ["a"] │ │ https//abcd.com ┆ ["b"] │ │ https//abcd.com/aaa ┆ ["b"] │ └─────────────────────┴───────────┘ You can also use DuckDB integration with Polars and use lateral join: import duckdb duckdb.sql(""" select d.url, c.category from df as d, lateral ( select c.category from conditions_df as c where starts_with(d.url, c.url) limit 1 ) as c """) ┌─────────────────────┬───────────┐ │ url │ category │ │ varchar │ varchar[] │ ├─────────────────────┼───────────┤ │ https//abc.com │ [a] │ │ https//abc.com/abcd │ [a] │ │ https//abcd.com/aaa │ [b] │ │ https//abcd.com │ [b] │ └─────────────────────┴───────────┘ however, you have to be careful cause in standard SQL specification row collections are unordered, so I'd not do that in production without adding explicit order by clause into lateral part.
2
1
79,160,696
2024-11-5
https://stackoverflow.com/questions/79160696/converting-python-logic-to-sql-query-pairing-two-status-from-one-column
I need help with converting my python code to SQL: req_id_mem = "" req_workflow_mem = "" collect_state_main = [] collect_state_temp = [] for req_id, req_datetime, req_workflow in zip(df["TICKET_ID"], df["DATETIMESTANDARD"], df["STATUS"]): if req_id_mem == "" or req_id_mem != req_id: req_id_mem = req_id req_workflow_mem = "" collect_state_temp = [] if req_workflow_mem == "" and req_workflow == "Open" and req_id_mem == req_id: req_workflow_mem = req_workflow collect_state_temp.append(req_id) collect_state_temp.append(req_workflow) collect_state_temp.append(req_datetime) if req_workflow_mem == "Open" and req_workflow == "Closed" and req_id_mem == req_id: req_workflow_mem = req_workflow collect_state_temp.append(req_workflow) collect_state_temp.append(req_datetime) collect_state_main.append(collect_state_temp) collect_state_temp = [] DataFrame: TICKET_ID DATETIMESTANDARD STATUS 79355138 9/3/2024 11:54:18 AM Open 79355138 9/3/2024 9:01:12 PM Open 79355138 9/6/2024 4:52:10 PM Closed 79355138 9/6/2024 4:52:12 PM Open 79355138 9/10/2024 4:01:24 PM Closed 79446344 8/27/2024 1:32:54 PM Open 79446344 9/11/2024 9:40:17 AM Closed 79446344 9/11/2024 9:40:24 AM Closed 79446344 9/11/2024 9:42:14 AM Open Result: It will Identify the first Open State of a TICKET_ID and look for the closest Closed Status It will reiterate for each case to look for an Open and Closed pair (first open and first close will only be considered) My problem is I'm stuck since the pairings can happen more than twice. I tried Rank in sql but it only return the first instance of pairing but not the other pairs Adding also my solution to this one as I migrated to snowflake recently: SELECT FOD.TICKET_ID, FOD.FIRSTOPENDATETIME AS OPEN_DATETIME, MIN(NC.DATETIMESTANDARD) AS CLOSED_DATETIME FROM ( SELECT TICKET_ID, MIN(DATETIMESTANDARD) AS FIRSTOPENDATETIME, STATUS FROM DB.TABLE WHERE ( (STATUS IN ('Open') AND EVENT_TYPE IN ('Ticket Open')) OR STATUS IN ('Closed') ) GROUP BY TICKET_ID, STATUS ) AS FOD LEFT JOIN DB.TABLE AS NC ON FOD.TICKET_ID = NC.TICKET_ID AND NC.STATUS = 'Closed' AND NC.DATETIMESTANDARD > FOD.FIRSTOPENDATETIME WHERE FOD.STATUS = 'Open' GROUP BY FOD.TICKET_ID, FOD.FIRSTOPENDATETIME ORDER BY FOD.TICKET_ID ASC, FOD.FIRSTOPENDATETIME ASC
One way of doing it: SELECT FOD.TICKET_ID , FOD.FIRSTOPENDATETIME , (SELECT NC.DATETIMESTANDARD from MyTbl NC -- Nearest future close date where NC.TICKET_ID=FOD.TICKET_ID and NC.STATUS='Closed' and NC.DATETIMESTANDARD>FOD.FIRSTOPENDATETIME and DATETIMESTANDARD=(select min(DATETIMESTANDARD) from MyTbl NCb where NCb.TICKET_ID=NC.TICKET_ID and NCb.STATUS='Closed' and NCb.DATETIMESTANDARD > FOD.FIRSTOPENDATETIME) ) as NearestFutureClosedDate from (select TICKET_ID , MIN(DATETIMESTANDARD) as FIRSTOPENDATETIME from MyTbl group by TICKET_ID) as FOD The main query selects the earliest Open rows for each ticket, and the subquery finds the earliest Closed rows following those. Update: I just realised that you only want the date (and no other columns) from the nearest close date row, so it can be simplified to: SELECT FOD.TICKET_ID , FOD.FIRSTOPENDATETIME , (select min(DATETIMESTANDARD) from MyTbl NC where FOD.TICKET_ID=NC.TICKET_ID and NC.STATUS='Closed' and NC.DATETIMESTANDARD > FOD.FIRSTOPENDATETIME ) as NearestFutureClosedDate from (select TICKET_ID , MIN(DATETIMESTANDARD) as FIRSTOPENDATETIME from MyTbl group by TICKET_ID) as FOD ; You can also use CTEs but it doesn't mean that it will change the execution plan. Just to prove the point, I moved the subquery obtaining 'first open date' into a CTE, and compared the execution plans: with FOD as ( select TICKET_ID , MIN(DATETIMESTANDARD) as FIRSTOPENDATETIME from MyTbl group by TICKET_ID ) SELECT FOD.TICKET_ID , FOD.FIRSTOPENDATETIME , (select min(DATETIMESTANDARD) from MyTbl NC where FOD.TICKET_ID=NC.TICKET_ID and NC.STATUS='Closed' and NC.DATETIMESTANDARD > FOD.FIRSTOPENDATETIME ) as NearestFutureClosedDate from FOD These are exactly the same for this example; we can have different outcomes depending on the data statistics, RDBMS engine, system resources, etc., but it proves that defining some part of your query as a 'subquery' doesn't instruct the engine to 'create a table'.
2
2
79,152,912
2024-11-3
https://stackoverflow.com/questions/79152912/how-to-select-a-particular-div-or-pragraph-tag-from-html-content-using-beautiful
I'm using beautiful soup to extract some text content from HTML data. I have a div and several paragraph tags and the last paragraph is the copyright information with copyright logo , the year and some more info. the year is different based on what year the content was from , so i can't look for exact text but rest is always the same besides a variable year . is there a way i can delete/ignore the last paragraph? from bs4 import BeautifulSoup text_content = '<div><p>here is the header information </p><p> some text content </p> <p> another block of text</p> .....<p> 2024 copyright , all rights reserved </p>' bs = BeautifulSoup(text_content, "html.parser") only_text = " ".join([p.text for p in soup.find_all("p")]) I used beautiful soup to get all the text content , now i want to remove a particular paragraph.
Just store elements in a list and then pop the last item. soup = BeautifulSoup(text_content, "html.parser") # Find all the paragraph tags paragraphs = soup.find_all("p") # Check if the last paragraph contains a match if "copyright" in paragraphs[-1].text.lower(): # Remove the last paragraph paragraphs.pop() # Join the remaining paragraphs only_text = " ".join([p.text for p in paragraphs]) paragraphs = soup.find_all("p") - finds all the <p> tags and stores them in a list paragraphs.pop() - removes the last element from the list
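If the copyright line can carry any year, a hedged variant is to match it with a regular expression instead of a fixed string (assumes text_content from the question):
import re
from bs4 import BeautifulSoup

soup = BeautifulSoup(text_content, "html.parser")
paragraphs = soup.find_all("p")

# "copyright" preceded by any 4-digit year, so 2023/2024/... all match
footer_pattern = re.compile(r"\b\d{4}\b.*copyright", re.IGNORECASE)
if paragraphs and footer_pattern.search(paragraphs[-1].get_text()):
    paragraphs[-1].decompose()   # remove it from the soup as well
    paragraphs.pop()

only_text = " ".join(p.get_text() for p in paragraphs)
print(only_text)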
1
1
79,157,298
2024-11-4
https://stackoverflow.com/questions/79157298/python-gekko-step-wise-variable-in-a-time-domain-problem-with-differential-equat
I am coding an MPC problem in Python Gekko to heat a building on a winter day (for now I am working with a simple building with 2 zones only). The building is expressed with a Resistor-Capacitor (RC) model and the objective is to minimize a combination of maximum power and the total energy used. The RC model is expressed through differential equations. It is already working correctly by using the temperature of the zones as manipulated variables (MV) to minimize the objective. But to increase the accuracy and integrate it better in the building I have now implemented a PID inside the MPC problem (because the building is using PID thermostats and I am using MPC only to feed better setpoints to the PID controllers). This means that the MPC will cahnge the PID setpoints to minimize the objective function. In this case the MVs are not the zones' temperatures but the PID setpoints. The zones temperatures will try to follow the setpoints through the PID control. This is working well so far but I need the PID setpoints to change only every 2 hours instead of 15 minutes (which is the timestep I am using). The MPC must use 15 minutes timesteps as 2 hours would be way too large. This means that the MV should have step-wise shape with values that can change only every 8 timesteps. I am trying to address this with m.Array variables of the Setpoints (SP) (length of 97 as the m.time) to try to impose 12 different Fixed Values (FV) every 8 timesteps (SP[0:8] = FV_1 , SP[8:16] = FV_2 etc.) but I am not managing to solve my issue. I searched within multiple posts here in Stack Overflow but I am not finding a similar case. The working code with the PID in the MPC is: m = GEKKO(remote=False) # create GEKKO model n_days_mpc = 1 starting_datetime_mpc = pd.Timestamp("2015-02-02 00:00", tz="US/Eastern") finishing_datetime_mpc = starting_datetime_mpc + timedelta(hours = (24*n_days_mpc)) dates_mpc = pd.date_range(starting_datetime_mpc, finishing_datetime_mpc,freq='15min',tz="US/Eastern") time_array = np.arange(0,len(dates_mpc)*timestep,timestep) m.time = time_array # import input data df, location, solar_position, Q_solar_df = weather_data (starting_datetime_mpc, finishing_datetime_mpc) U_test = Power_delivered[starting_datetime_mpc:finishing_datetime_mpc].values X_test = Temp[starting_datetime_mpc:finishing_datetime_mpc].values W_cols = ["T_ext", "Tground_1", "Tground_2", "Q_sol_1", "Q_sol_2", "Q_sol_roof", "Q_int_1", "Q_int_2", "wind_sq"] # exogenous_columns W_test = df[W_cols].values # reference performances from TRNSYS power_trnsys = U_test[:,0] + U_test[:,1] Q_int_trnsys = W_test[:,7] + W_test[:,8] Q_sol_direct_trnsys = W_test[:,3] + W_test[:,4] # comfort schedule df["is_occupied"] = (7. 
<= df.index.hour) & (df.index.hour < 18) # comfort schedules heating_SP_unoccupied = 19.0 # Setpoints when occupied/unoccupied heating_SP_occupied = 21.0 df["heating_SP"] = heating_SP_unoccupied df.loc[df["is_occupied"], "heating_SP"] = heating_SP_occupied # Update for occupied setpoint heating_SP = df["heating_SP"].values T_comfort_21 = m.Param(value=heating_SP) # PID parameters Kc = 30000/3600 # controller gain tauI = 1000 # controller reset time tauD = 0 # derivative constant Q1_0 = m.Const(value=0.0) # OP bias Zone 1 Q2_0 = m.Const(value=0.0) # OP bias Zone 2 # change solver options m.options.IMODE = 6 # MPC # parameters from building training for i in params_list: locals()[i] = m.Const(value = locals()[i+'_results'][-1]) # initial conditions T_wall_int_initial = np.ones(len(dates_mpc))*20 initial_thermal_power = U_test[:,0][0] + U_test[:,1][0] # state variables (temperature) T_1 = m.SV(value=X_test[:,0][0]); T_2 = m.SV(value=X_test[:,1][0]); T_wall_int = m.SV(value=T_wall_int_typique); # manipulated variables (PID thermostat setpoints) SP_1 = m.MV(value=heating_SP[0],fixed_initial=False); SP_1.STATUS = 1 # set point SP_2 = m.MV(value=heating_SP[0],fixed_initial=False); SP_2.STATUS = 1 # set point Intgl_1 = m.Var((heating_SP[0]-X_test[:,0][0])*tauI*Kc) # integral of the error Zone 1 Intgl_2 = m.Var((heating_SP[0]-X_test[:,1][0])*tauI*Kc) # integral of the error Zone 2 # controlled variables (heating) Q_1 = m.Var(value=U_test[:,0][0],lb=0); Q_2= m.Var(value=U_test[:,1][0],lb=0); thermal_power = m.Var(value=initial_thermal_power) max_power = m.FV(value=initial_thermal_power); max_power.STATUS = 1 # uncontrolled variables (weather and internal heat gains) T_ext = m.Param(value=W_test[:,0]) Tground_1 = m.Param(value=W_test[:,1]) Tground_2 = m.Param(value=W_test[:,2]) Q_sol_1 = m.Param(value=W_test[:,3]) Q_sol_2 = m.Param(value=W_test[:,4]) Q_sol_roof = m.Param(value=W_test[:,5]) Q_int_1 = m.Param(value=W_test[:,6]) Q_int_2 = m.Param(value=W_test[:,7]) wind_sq = m.Param(value=W_test[:,8]) # building equations (RC grey box) m.Equation(C_1 *1e6* T_1.dt() == (Q_1 + lambda_1 * Q_int_1 + sigma_1 * Q_sol_1 + sigma_roof * Q_sol_roof + alpha_1 * wind_sq * (T_ext - T_1) \ + Uext_1*(T_ext - T_1) + Uroof_1*(T_ext - T_1) + Ug_1*(Tground_1 - T_1) \ + U_wall_int*(T_wall_int - T_1) + U_12*(T_2- T_1) )) m.Equation(C_2 *1e6* T_2.dt() == (Q_2 + lambda_2 * Q_int_2 + sigma_2 * Q_sol_2 + sigma_roof * Q_sol_roof + alpha_2 * wind_sq * (T_ext - T_2) \ + Uext_2*(T_ext - T_2) + Uroof_2*(T_ext - T_2) + Ug_2*(Tground_2 - T_2) \ + U_wall_int*(T_wall_int - T_2) + U_12*(T_1- T_2) )) m.Equation(C_wall_int *1e6* T_wall_int.dt() == U_wall_int*(T_1 - T_wall_int) + U_wall_int*(T_2 - T_wall_int)) # PID thermostats err_1 = m.Intermediate(SP_1-T_1) # set point error err_2 = m.Intermediate(SP_2-T_2) # set point error m.Equation(Intgl_1.dt()==err_1) # integral of the error m.Equation(Q_1 == Q1_0 + Kc*err_1 + (Kc/tauI)*Intgl_1 - (Kc*tauD)*T_1.dt()) m.Equation(Intgl_2.dt()==err_2) # integral of the error m.Equation(Q_2 == Q2_0 + Kc*err_2 + (Kc/tauI)*Intgl_2 - (Kc*tauD)*T_2.dt()) m.Equation(thermal_power == Q_1+Q_2) m.Equation(max_power >= thermal_power) m.Equation(SP_1 >= T_comfort_21) m.Equation(SP_2 >= T_comfort_21) m.Minimize(14.58*max_power + 5.03/100*thermal_power/ (60/minutes_per_timestep)) m.solve() What I was trying to do is (i report here only the parts i modified): # manipulated variables (PID thermostat setpoints) SP_1 = m.Array(m.Var, len(m.time)) SP_2 = m.Array(m.Var, len(m.time)) SP_1_values = m.Array(m.FV,12) 
SP_2_values = m.Array(m.FV,12) for SP in SP_1_values: SP.STATUS = 1 for SP in SP_2_values: SP.STATUS = 1 Intgl_1 = m.Var((heating_SP[0]-X_test[:,0][0])*tauI*Kc) # integral of the error Zone 1 Intgl_2 = m.Var((heating_SP[0]-X_test[:,1][0])*tauI*Kc) # integral of the error Zone 2 # PID thermostats for i in (0,1,2,3,4,5,6,7,8): m.Equation(SP_1[i] == SP_1_values[0]) m.Equation(SP_2[i] == SP_2_values[0]) for j in range(1, 12): start_index = 8 * j + 1 indices = tuple(range(start_index, start_index + 8)) for i in indices: m.Equation(SP_1[i] == SP_1_values[j]) m.Equation(SP_2[i] == SP_2_values[j]) err_1 = m.Intermediate(SP_1-T_1) # set point error err_2 = m.Intermediate(SP_2-T_2) # set point error m.Equation(Intgl_1.dt()==err_1) # integral of the error m.Equation(Q_1 == Q1_0 + Kc*err_1 + (Kc/tauI)*Intgl_1 - (Kc*tauD)*T_1.dt()) m.Equation(Intgl_2.dt()==err_2) # integral of the error m.Equation(Q_2 == Q2_0 + Kc*err_2 + (Kc/tauI)*Intgl_2 - (Kc*tauD)*T_2.dt()) m.Equation(thermal_power == Q_1+Q_2) m.Equation(max_power >= thermal_power) for i in range(len(m.time)): m.Equation(SP_1[i] >= T_comfort_21) m.Equation(SP_2[i] >= T_comfort_21) I first tried to define the PID Setpoints as only m.Var but then i could not use SP_1[i] as it was saying "invalid index to scalar variable" so that's why I am using an Array and then I am trying to inpose something like SP_1[0:9] = an FV parameter that can change then SP_1[9:17] = another FV parameter that can change and so on, so that SP_1 is a step-wise variable. But now the m.Intermediate line err_1 = m.Intermediate(SP_1-T_1) gives this error: @error: Intermediate Definition Error: Intermediate variable with no equality (=) expression . I guess that it's because now SP_1 needs to be indexed there (since it's an m.Array) but then I cannot do the same with T_1 because that one needs to stay as State Variable (SV) because it is subject to differential equations and then also the integral error of the PID is subject to a differential equation. So I am basically stuck and don't know what to do. What I wanted to ask is: Is it possible to implement the step-wise variable I would like to implement? and how could I do it? Thanks to everyone that will answer to this post. If the question is not clear I can try to clarify it. Also, if it would be useful to have the input data to compute the code I could provide it.
Try using an option for the MV type that specifies the number of time steps between each allowable movement of the Manipulated Variable. SP_1.MV_STEP_HOR=8 SP_2.MV_STEP_HOR=8 There is more information on MPC tuning options in the Dynamic Optimization Course and in the documentation. A global option can be set for all MVs (e.g. m.options.MV_STEP_HOR=8) or they can be set individually (e.g. SP_1.MV_STEP_HOR=6, SP_2.MV_STEP_HOR=10). By default, the individual MVs are set to 0 when they use the global option.
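A minimal, self-contained sketch of the option on a toy first-order model (hypothetical tuning values, not the building model from the question):
import numpy as np
from gekko import GEKKO

m = GEKKO(remote=False)
m.time = np.linspace(0, 96, 97)      # e.g. 97 points = 15-minute steps over one day

sp = m.MV(value=20)                  # setpoint the optimizer may adjust
sp.STATUS = 1
sp.MV_STEP_HOR = 8                   # allow a move only every 8 time steps (2 h)

T = m.Var(value=20)                  # toy zone temperature
m.Equation(5 * T.dt() == -(T - sp))  # first-order response toward the setpoint

m.Minimize((T - 21)**2)
m.options.IMODE = 6                  # MPC mode
m.solve(disp=False)
Plotting sp.value after the solve should show the characteristic staircase with at most one step change every 8 intervals.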
2
0
79,155,108
2024-11-4
https://stackoverflow.com/questions/79155108/one-progress-bar-for-a-parallel-job-python
The loop runs over some number of models (n_mod) and is distributed among the n_cpus. As you will note, running this code as mpirun -np 4 python test_mpi.py produces 4 progress bars. This is understandable. But is there a way to use tqdm to get one progress bar which tells me how many models have been completed? from tqdm import tqdm from mpi4py import MPI import time comm = MPI.COMM_WORLD cpu_ind = comm.Get_rank() n_cpu = comm.Get_size() n_mod=100 for i in tqdm(range(n_mod)): if (cpu_ind == int(i/int(n_mod/n_cpu))%n_cpu): #some task here which is a function of i time.sleep(0.02)
This can be achieved with a main node and worker node(s) setup. Essentially only the rank == 0 node will be updating a progress bar whilst the worker nodes will simply be informing the main node that they have completed the task. Worker: (Defined similarly to your above code) def worker(n_mod, size, rank): comm = MPI.COMM_WORLD step = size - 1 # Number of worker processes start = rank - 1 for i in range(start, n_mod, step): time.sleep(0.02) # Work # Notify the master that a task is done comm.send('done', dest=0, tag=1) Main Node: def pbar_node(n_mod, size): comm = MPI.COMM_WORLD pbar = tqdm(total=n_mod, desc="Processing Models") completed = 0 while completed < n_mod: # Receive a message from any worker status = MPI.Status() comm.recv(source=MPI.ANY_SOURCE, tag=1, status=status) completed += 1 pbar.update(1) pbar.close() I have tested this with mpirun -np 4 python test.py and get a single progress bar covering all 100 tasks shared across the 4 processes. Ensure that you take into account the fact there will be the number of workers plus a main node when dealing with distributing tasks so you do not get off-by-one errors. You can use these like so: comm = MPI.COMM_WORLD rank = comm.Get_rank() size = comm.Get_size() n_mod = N_TASKS if rank == 0: # pbar process pbar_node(n_mod, size) else: # Worker processes worker(n_mod, size, rank)
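Put together as one runnable file (a sketch assuming the pieces above; the script name and the sleep standing in for the real task are placeholders):
# test.py -- run with: mpirun -np 4 python test.py
import time
from mpi4py import MPI
from tqdm import tqdm

def worker(n_mod, size, rank):
    comm = MPI.COMM_WORLD
    step = size - 1                       # number of worker processes
    for i in range(rank - 1, n_mod, step):
        time.sleep(0.02)                  # the real model run goes here
        comm.send('done', dest=0, tag=1)  # tell rank 0 one model finished

def pbar_node(n_mod, size):
    comm = MPI.COMM_WORLD
    with tqdm(total=n_mod, desc="Processing Models") as pbar:
        for _ in range(n_mod):
            comm.recv(source=MPI.ANY_SOURCE, tag=1)
            pbar.update(1)

if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    n_mod = 100
    if rank == 0:
        pbar_node(n_mod, size)
    else:
        worker(n_mod, size, rank)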
1
2
79,154,980
2024-11-4
https://stackoverflow.com/questions/79154980/any-way-to-check-if-a-number-is-already-within-a-3x3-grid-in-a-9x9-grid-of-neste
Programming a sudoku game and I've got a nested list acting as my board and the logic written out however I am stumped on how to check if a number is specifically within a 3x3 grid within the 9x9 nested list board I've got #board initilization board = [ [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 2, 0, 0, 0, 0, 0, 0, 2], [0, 0, 0, 3, 0, 3, 0, 0, 0], [0, 0, 5, 0, 0, 0, 0, 0, 0], [0, 8, 0, 0, 0, 6, 0, 5, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 2, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 9, 0], [0, 0, 0, 1, 0, 0, 0, 0, 0] ] #main game logic #list printing logic and collecting user inputs while True: print("Welcome to the Python Sudoku") print("Here's your board") for row in board: print(row) print("You will have a choice to which row you choose and which column you choose") print("You will also have a choice of number you wish to input") rowchoice = int(input("Please pick the row you wish to add the number to 0-8 where 0 is the first row")) #error handling to handle invalid inputs nested in to check after each input if rowchoice > 8: print("Invalid number, try again") continue if rowchoice < 0: print("Invalid number, try that again") continue columnchoice = int(input("Please pick the column of choice 0-8 where 0 is the first column")) if columnchoice > 8: print("Invalid number try again") continue if columnchoice < 0: print("Invalid number try again") continue guess = int(input("Please pick the sudoku guess number 1-9")) if guess > 9: print("Invalid number, try again.") continue if guess < 0: print("Invalid number, try again") continue #adding user guess to user choice board[rowchoice][columnchoice] = guess #printing new updated board for row in board: print(board) Thought of trying a check if its in one of the lists but I don't think that will work as each list is not a 3x3 grid Also would be greatly appreciated if functions are not used as I am required not to use them for this sudoku game, thanks!
First you just need a given cell (row,col) which is the top left corner (the start) of the 3x3 subgrid you are trying to check: start_row = (row // 3) * 3 start_col = (col // 3) * 3 Then simply iterate through the 3x3 subgrid. Function implementation: def is_valid_move(board, row, col, num): # Check row and column if num in board[row] or any(board[i][col] == num for i in range(9)): return False # Determine the top-left corner of the 3x3 subgrid start_row, start_col = 3 * (row // 3), 3 * (col // 3) # Check the 3x3 subgrid for r in range(start_row, start_row + 3): if num in board[r][start_col:start_col + 3]: return False return True Add this: # Main Game Loop while True: ... if is_valid_move(board, row, col, num): board[row][col] = num print(f"Placed {num} at ({row}, {col}).") else: print("Invalid move. Number conflicts with existing ones.") ... Non Function implementation (requested in comment as additional requirement) Instead of adding a function you can just add the same logic into the main loop itself. # Subgrid Check start_row = (rowchoice // 3) * 3 start_col = (columnchoice // 3) * 3 subgrid_valid = True for r in range(start_row, start_row + 3): for c in range(start_col, start_col + 3): if board[r][c] == guess: subgrid_valid = False break if not subgrid_valid: break if not subgrid_valid: print("Number already exists in the 3x3 subgrid.") continue
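A quick sanity check of is_valid_move against the board from the question (both assumed to be defined as above):
# Row 1 already contains a 2 (and so does its 3x3 box), so this is rejected
print(is_valid_move(board, 1, 0, 2))   # False

# Nothing conflicts with a 7 at the top-left corner yet
print(is_valid_move(board, 0, 0, 7))   # True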
1
3
79,157,061
2024-11-4
https://stackoverflow.com/questions/79157061/python3-venv-how-to-sync-ansible-python-interpreter-for-playbooks-that-mix-con
I'm running ansible playbooks in python venv My playbooks often involve a mix of cloud infrastructure (AWS) and system engineering. I have configured them to run cloud infrastructure tasks with connection: local - this is to minimize access rights required on the target system. However since using venv I have a conflict in regards to the ansible_python_interpreter location: on the target system they tend to be in a "default" location /usr/bin/python3 - I am not 100% sure if this is hard coded in ansible, or stored in a PATH variable on my local system I assume they are defined by home = /opt/homebrew/opt/[email protected]/bin include-system-site-packages = false version = 3.12.5 executable = /opt/homebrew/Cellar/[email protected]/3.12.5/Frameworks/Python.framework/Versions/3.12/bin/python3.12 command = /opt/homebrew/opt/[email protected]/bin/python3.12 -m venv /Users/jd/projects/mgr2/ansible Because of this, I cannot run a mixed playbook, either I need add vars: ansible_python_interpreter: /Users/jd/projects/mgr2/ansible/bin/python3 to my playbook to run local tasks or remove this line to run target system tasks. I'm looking for a way to have python3 in the PATH variable, depending on which venv I am sourcing.
I had a similar problem, and I resolved it with interpreter fallback which can use a list of locations (attempted in order) unlike the interpreter config which is just a single path. In your inventory set this variable: ... ... ansible_python_interpreter_fallback: - /Users/jd/projects/mgr2/ansible/bin/python3 - /usr/bin/python3 Note that you can not do this in the config file, unfortunately, it has to be set in inventory (or by inventory plugin).
1
2
79,159,747
2024-11-5
https://stackoverflow.com/questions/79159747/is-there-a-way-to-render-gt-tables-as-pngs-with-a-no-browser-and-b-without-w
I have a really pretty gt table that we'd like to automate production of. Running this on our remote server has some limitations: enterprise policy is that no browsers, headless or otherwise, may be installed on the server; the admin has been unwilling to install wkhtmltopdf. So I can either run this locally, which I'd rather avoid, or I can schedule it to crank out an HTML table, which is a pain for the person who actually uses these images. Rendering the gt_tbl using ggplot destroys a lot of the formatting that was done. The R script that generates the table is set up now to output an html table. I'm open to solutions in R or Python that can run after the table is generated. Thanks!
Figured it out! foo <- <a gt_tbl object> library(ggplot2) library(ggplotify) library(rsvg) #required to render SVG cells foo_1 <- as_gtable(foo) foo_2 <- as.ggplot(foo_1) ggsave('path/to/file.png', foo_2)
2
1
79,157,011
2024-11-4
https://stackoverflow.com/questions/79157011/python-3-0-apply-conditional-formatting-to-a-cell-based-off-the-calculated-valu
I am trying to automate processing of a form with; (1) two calculated fields in columns F and G, and (2) one manually-entered field in column H. If both these values in a row are calculated to be >=30, I would like to highlight the corresponding cell in column A. Alternatively, if the value in column H is "Warranty PM", I want to highlight the corresponding cell in column A in yellow. At this time, I have not started on function (2) as I want it to run after function (1), such that it is the priority function. I apologize if I am messing the naming convention up with Excel! In case it may help, the following is an example of my data: Case No Status Current Date Date Created Date Modified Days Since Creation Days Since Modification Type 3051 New [Today] 01-Nov-2024 01-Nov-2024 =DATEDIF(D2,C2,"d") =DATEDIF(E2,C2,"d") Warranty PM 3048 Service Scheduled [Today] 31-Oct-2024 01-Nov-2024 =DATEDIF(D3,C3,"d") =DATEDIF(E3,C3,"d") Hardware ... 2832 Service Scheduled [Today] 20-Aug-2024 27-Aug-2024 =DATEDIF(D16,C16,"d") =DATEDIF(E16,C16,"d") Customer Request (Move) In my current code, I am able to get both the F and G columns to properly highlight based on their calculated values. These should be light red if the cell is over 30. This is completed using conditional formatting: from openpyxl import load_workbook from openpyxl.styles import PatternFill, NamedStyle from openpyxl.formatting.rule import CellIsRule def format_xlsx(xlsx_file): #Load the file and workbook file_path = xlsx_file workbook = load_workbook(file_path) sheet = workbook.active #Define the fills light_red_fill = PatternFill(start_color="FFC7CE", end_color="FFC7CE", fill_type="solid") dark_red_fill = PatternFill(start_color="8B0000", end_color="8B0000", fill_type="solid") yellow_fill = PatternFill(start_color="FFFF00", end_color="FFFF00", fill_type="solid") #Create conditional format rules for F and G: sheet.conditional_formatting.add( "F2:F1048576", # Apply to column F from row 2 onwards CellIsRule(operator="greaterThanOrEqual", formula=["30"], fill=light_red_fill)) sheet.conditional_formatting.add( "G2:G1048576", # Apply to column G from row 2 onwards CellIsRule(operator="greaterThanOrEqual", formula=["30"], fill=light_red_fill)) workbook.save(file_path) return xlsx_file However, if I try to include a function that will highlight cells in column A: from openpyxl import load_workbook from openpyxl.styles import PatternFill, NamedStyle from openpyxl.formatting.rule import CellIsRule def format_xlsx(xlsx_file): #Load the file and workbook file_path = xlsx_file workbook = load_workbook(file_path) sheet = workbook.active #Define the fills light_red_fill = PatternFill(start_color="FFC7CE", end_color="FFC7CE", fill_type="solid") dark_red_fill = PatternFill(start_color="8B0000", end_color="8B0000", fill_type="solid") yellow_fill = PatternFill(start_color="FFFF00", end_color="FFFF00", fill_type="solid") #Create conditional format rules for F and G: sheet.conditional_formatting.add( "F2:F1048576", # Apply to column F from row 2 onwards CellIsRule(operator="greaterThanOrEqual", formula=["30"], fill=light_red_fill)) sheet.conditional_formatting.add( "G2:G1048576", # Apply to column G from row 2 onwards CellIsRule(operator="greaterThanOrEqual", formula=["30"], fill=light_red_fill)) #Loop through each cell in columns F and G, ignoring the first row for headers. 
red = "AND($F2>=30, $G2>=30)" sheet.conditional_formatting.add( "A2:A1048576", # Apply to column F from row 2 onwards CellIsRule(formula=[red], fill=dark_red_fill)) workbook.save(file_path) return xlsx_file I get the following error when opening the workbook in excel: "We found a problem with some content in 'testfile_04-Nov-2024.xlsx'. Do you want us to try to recover as much as we can? If you trust the source of this workbook, click Yes." I am then presented with the option to view Excel was able to open the file by repairing or removing the unreadable content. and this error message in a word document: Repair Result to testfile_04-Nov-2024.xlsx Errors were detected in file '/Users/llindgren/Downloads/testfile_04-Nov-2024.xlsx' Removed Records: Conditional formatting from /xl/worksheets/sheet1.xml part I also tried using a different function for all the above, but this resulted in no highlighted values: import openpyxl from openpyxl.styles import PatternFill # Load the Excel file file_path = "your_file.xlsx" workbook = openpyxl.load_workbook(file_path) sheet = workbook.active # Define the fill styles for highlighting light_red_fill = PatternFill(start_color="FFC7CE", end_color="FFC7CE", fill_type="solid") # Light red for columns F and G dark_red_fill = PatternFill(start_color="8B0000", end_color="8B0000", fill_type="solid") # Dark red for column A # Loop through each row and check values in columns F and G for row in range(2, sheet.max_row + 1): # Starting from row 2 to skip headers if they exist cell_f = sheet[f"F{row}"] cell_g = sheet[f"G{row}"] # Check and highlight cells in F and G if their value >= 30 if cell_f.value is not None and isinstance(cell_f.value, (int, float)): if cell_f.value >= 30: cell_f.fill = light_red_fill # Highlight F print(f"Highlighted F{row} with light red.") # Debug statement if cell_g.value is not None and isinstance(cell_g.value, (int, float)): if cell_g.value >= 30: cell_g.fill = light_red_fill # Highlight G print(f"Highlighted G{row} with light red.") # Debug statement # Check if both F and G are >= 30, then apply dark red fill to column A if ( cell_f.value is not None and cell_g.value is not None and isinstance(cell_f.value, (int, float)) and isinstance(cell_g.value, (int, float)) and cell_f.value >= 30 and cell_g.value >= 30 ): sheet[f"A{row}"].fill = dark_red_fill print(f"Highlighted A{row} with dark red.") # Debug statement # Save changes to the same file workbook.save(file_path) print("File saved successfully.")
PART 1 To fix your first issue; If both these values in a row are calculated to be >=30, I would like to highlight the corresponding cell in column A You just need to add another CF rule for column A using 'FormulaRule' Using a Sheet that contains your example data which I'll just use to fill the first 3 rows so the data covers the range A1:H4, add another CF Rule that checks the value of the same row F and G columns. If both are greater or equal to 30 then highlight A. I have updated your first code sample with the additional rule. The rule uses this formula; Thanks to @Rachel for pointing out the formula can be simplified to... AND($F2>=30,$G2>=30) which basically looks at the value in Columns F and G and if both are greater or equal to 30 then return TRUE, otherwise return FALSE. Note: The extra import for the formula Rule 'FormulaRule ' Doesn't seem likely that you're going to have a Sheet that uses all 1048576 rows but will stick with that max for the example from openpyxl import load_workbook from openpyxl.styles import PatternFill, NamedStyle from openpyxl.formatting.rule import CellIsRule, FormulaRule # <--- Add import def format_xlsx(xlsx_file): # Load the file and workbook file_path = xlsx_file workbook = load_workbook(file_path) sheet = workbook.active # Define the fills light_red_fill = PatternFill(start_color="FFC7CE", end_color="FFC7CE", fill_type="solid") dark_red_fill = PatternFill(start_color="8B0000", end_color="8B0000", fill_type="solid") yellow_fill = PatternFill(start_color="FFFF00", end_color="FFFF00", fill_type="solid") # Create conditional format rules for F and G: ### Note the range used here. This covers both Columns F and G, dont need a second rule for Column G sheet.conditional_formatting.add( "F2:G1048576", # Apply to column F from row 2 onwards CellIsRule(operator="greaterThanOrEqual", formula=["30"], fill=light_red_fill)) ### Dont need this ### # sheet.conditional_formatting.add( # "G2:G1048576", # Apply to column G from row 2 onwards # CellIsRule(operator="greaterThanOrEqual", formula=["30"], fill=light_red_fill)) ### Additional Rule for Column A sheet.conditional_formatting.add( "A2:A1048576", # Apply to column A from row 2 onwards FormulaRule(formula=["AND($F2>=30,$G2>=30)"], fill=light_red_fill)) workbook.save(f"new_{file_path}") return xlsx_file format_xlsx('cf1.xlsx') Example Sheet I have manually changed the value in cell G3 to 32 for display In the example Sheet below; In Row 2 in F and G values are below 30 so no cell is highlighted In Row 3 in F is less than 30 and G is greater or equal to 30, so G is highlighted and A and F are not In Row 4 in F and G are greater or equal to 30 so F, G and A are highlighted. The rule on column F & G is your original, however, there is no need to apply to the ranges F2:F1048576 and then the same for G2:... Just set the range to F2:G1048576 on one rule. PART 2 For your Second requirement; Alternatively, if the value in column H is "Warranty PM", I want to highlight the corresponding cell in column A in yellow. This is the same as in Part 1 with a different formula checking the value of Column H. I have modified your code below slightly to include the setting of priorities on the rules. Priority 1 obviously should be applied over any other rules for the same range by Excel and appear at the the top of the list of rules in CF. I have also added the stopIfTrue= field which is used to determine if the next priority rule for the same range is applied if a higher priority rule on the range was True. 
I don't think you want to apply it, but I have added it, set to its default of False, just in case. Note that stopIfTrue belongs to the rule rather than to the fill, so it is passed to the CellIsRule/FormulaRule constructors below (PatternFill does not accept it). I think the changes to the code are pretty clear on what and how so I won't bother with including additional comments on what each line does. I have arbitrarily applied priorities, I'll leave it to you to set as you need. from openpyxl import load_workbook from openpyxl.styles import PatternFill, NamedStyle from openpyxl.formatting.rule import CellIsRule, FormulaRule # <--- Add import def format_xlsx(xlsx_file): # Load the file and workbook file_path = xlsx_file workbook = load_workbook(file_path) sheet = workbook.active # Define the fills light_red_fill = PatternFill(start_color="FFC7CE", end_color="FFC7CE", fill_type="solid") dark_red_fill = PatternFill(start_color="8B0000", end_color="8B0000", fill_type="solid") yellow_fill = PatternFill(start_color="FFFF00", end_color="FFFF00", fill_type="solid") # Create conditional format rules for A, F and G: rule2 = CellIsRule(operator="greaterThanOrEqual", formula=["30"], stopIfTrue=False, fill=light_red_fill) rule3 = FormulaRule(formula=["AND($F2>=30,$G2>=30)"], stopIfTrue=False, fill=light_red_fill) rule1 = FormulaRule(formula=["=$H2=\"Warranty PM\""], stopIfTrue=False, fill=yellow_fill) # Add the rules to the Sheet sheet.conditional_formatting.add("F2:G1048576", rule2) sheet.conditional_formatting.add("A2:A1048576", rule3) sheet.conditional_formatting.add("A2:A1048576", rule1) # Set the rule's Priority rule2.priority = 2 rule3.priority = 3 rule1.priority = 1 workbook.save(f"new_{file_path}") return xlsx_file format_xlsx('cf1.xlsx')
2
3
79,158,826
2024-11-5
https://stackoverflow.com/questions/79158826/printing-nested-html-tables-in-pyqt6
I have an issue when trying to print the contents of a QTableWidget in a PyQt6 application. It actually works, but there is a small problem: I have tables embedded in the main table and I'd like those tables to completely fill the parent cells (100% of their widths), but the child tables don't expand as expected. This is my code: import sys from PyQt6 import QtWidgets, QtPrintSupport from PyQt6.QtGui import QTextDocument class MyWidget(QtWidgets.QWidget): def __init__(self): super().__init__() self.table_widget = QtWidgets.QTableWidget() self.button = QtWidgets.QPushButton('Print TableWidget') self.layout = QtWidgets.QVBoxLayout(self) self.layout.addWidget(self.table_widget) self.layout.addWidget(self.button) self.button.clicked.connect(self.print_table) def print_table(self): html_table = ''' <table cellpadding="0"> <tr><th>header1</th><th>header2</th><th>header3</th></tr> <tr> <td>data1</td> <td>data2</td> <td><table> <tr> <th>header1</th><th>header2</th><th>header3</th> </tr> <tr> <td>data3</td><td>data3</td><td>data3</td> </tr> </table></td> </tr> <tr> <td>data1</td> <td>data2</td> <td><table> <tr> <th>hr1</th><th>hr2</th><th>hr3</th><th>hr4</th> </tr> <tr> <td>d3</td><td>d3</td><td>d3</td><td>d3</td> </tr> <tr> <td>d3</td><td>d3</td><td>d3</td><td>d3</td> </tr> </table></td> </tr> </table> ''' style_sheet = ''' table { border-collapse: collapse; width: 100%; } th { background-color: lightblue; border: 1px solid gray; height: 1em; } td { border: 1px solid gray; padding: 0; vertical-align: top; } ''' text_doc = QTextDocument() text_doc.setDefaultStyleSheet(style_sheet) text_doc.setHtml(html_table) prev_dialog = QtPrintSupport.QPrintPreviewDialog() prev_dialog.paintRequested.connect(text_doc.print) prev_dialog.exec() if __name__ == '__main__': app = QtWidgets.QApplication([]) widget = MyWidget() widget.resize(640,480) widget.show() sys.exit(app.exec()) And this is what i get: But this is what i want: I would appreciate any suggestions about this problem, as I have no idea about how to fix it.
The rich text in Qt only provides a limited subset of HTML and CSS. More specifically, it only provides what QTextDocument allows, and the HTML parsing is therefore completely based on the QTextDocument capabilities. Complex layouts that may be easier to achieve with standard HTML and CSS in a common web browser, may be difficult if not impossible to get in Qt, and the documentation has to be carefully checked. Specifically, the width property is not listed in the CSS Properties list. Nonetheless, it is supported as an HTML attribute for many tags, including table and their cells. Note: there may be occurrences of tags/CSS behavior that works "as expected" even if not documented, but you shall not rely on those, unless it's been found consistent in multiple and previous Qt versions, and possibly upon careful checking of consistency in the sources, both for the parser, and the document layout structure. Just remove that property from the CSS, and use it as an attribute, for example: <table width="100%"> ... <tr><td width="75%"> <table width="100%"> ... </table> </td></tr> </table> The above will make the parent table occupy the full viewport (or page) width, with the column containing the nested table being 75% of that width, and that table occupying the cell in full, horizontally. UPDATE As properly noted by ekhumoro's comment, though, there is a bug that actually prevents setting any numerical value of width, with the result that, no matter what, the table will always occupy almost the full width of the cell. In case you want the nested table to occupy just a percentage of the "parent cell", the work around for that is by setting percentages for each of its columns. Consider the following header syntax for the second nested table: <tr> <th width=20%>hr1</th> <th width=20%>hr2</th> <th width=20%>hr3</th> <th width=20%>hr4</th> </tr> Which, along with the first snippet above, will result in a table that properly occupies 80% of the cell width: You'll probably notice a small margin on the left and top of the tables, which is probably caused by the border-collapse property, and another demonstration of the limitations of Qt's rich text engine, which was not intended for pixel-perfect rendering and initially implemented in a more "flexible" (not design-oriented) intention of hyper text display. A possible work around may be to completely avoid border collapse, and use the table attribute cellspacing=0 instead; it may not be always appropriate, but it works in your specific case, and can be eventually fixed with a more carefully written CSS based on class selectors. For instance by setting a class for the table and explicitly declaring the rules for cells (eg: table.nested td { ... }). As already noted, if you want more control on the layout/appearance and more compliant support for modern HTML and CSS standards, the only option is to use the QtWebEngine related classes. Be aware, though, that that module alone is more than 150MB in size, so consider that requirement carefully.
1
1
79,160,774
2024-11-5
https://stackoverflow.com/questions/79160774/how-to-disable-the-caret-characters-in-the-python-stacktrace
I have a script for doing test readouts that worked fine with Python 3.9 but now we upgraded to Python 3.12 we have those carets that break the script. So the easiest way would be disabling it. Is there a way to disable the carets (^^^^^^^^^^^^^^^^^^^^) in the Python stacktrace? ERROR: test_email (tests.test_emails.EmailTestCase.test_email) ---------------------------------------------------------------------- Traceback (most recent call last): File "/my-project/email/tests/test_emails.py", line 72, in test_email self.assertNotEquals(self.email.id, self.email_id) ^^^^^^^^^^^^^^^^^^^^
You can run Python with the -X no_debug_ranges command-line option, or set the PYTHONNODEBUGRANGES environment variable (to any nonempty string) before running your program, to disable these indicators.
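If the tests are launched from another Python process, the variable can also be set programmatically — a sketch assuming a unittest-style run:
import os
import subprocess
import sys

env = dict(os.environ, PYTHONNODEBUGRANGES="1")   # child interpreter omits the ^^^^ markers
subprocess.run([sys.executable, "-m", "unittest", "discover"], env=env, check=False)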
2
5
79,154,880
2024-11-4
https://stackoverflow.com/questions/79154880/why-do-i-lose-doc-on-a-parameterized-generic
Issue The docstring is lost when setting type parameters on a generic. Minimal example: from typing import TypeVar, Generic T = TypeVar("T") class Test(Generic[T]): """My docstring""" assert Test.__doc__ == "My docstring" This works fine as expected. However, this fails as __doc__ is now None: assert Test[int].__doc__ == "My docstring" Expectation I would expect the docstring to still be the same. Afterall, it's still the "same" class. Is there something I just don't understand about how Python's typing system making this intendet behavior? Background Using parameterized types in FastAPI, I'm losing the description (coming from the docstring) when generating OpenAPI specs. I can fix this with a decorator, but that leads to more problems in my case with Pydantic's model creation. But that's besides the point. I'd like to understand why this is happening in the first place regardless of non-builtin tools.
Parameterising user-defined generic types with type arguments results in instances of typing._GenericAlias, which inherits from typing._BaseGenericAlias. >>> type(Test[int]) <class 'typing._GenericAlias'> >>> type(Test[int]).mro() [<class 'typing._GenericAlias'>, <class 'typing._BaseGenericAlias'>, <class 'typing._Final'>, <class 'object'>] If you look at the source code, it shows you that attributes are relayed from Test to Test[int] via a __getattr__ defined on _BaseGenericAlias. This reveals 2 issues preventing __doc__ from being relayed from Test to Test[int]: Since Test[int] is an instance of typing._GenericAlias, when you're accessing Test[int].__doc__, you are actually accessing typing._GenericAlias.__doc__; since all classes (including typing._GenericAlias) have a .__doc__ = None if no docstring is provided, __getattr__ is never called and you get None. If you add a dummy docstring to your Python standard library's typing.py, # typing.py ... class _GenericAlias(_BaseGenericAlias, _root=True): """ HELLO """ ... you can verify the behaviour: >>> Test[int].__doc__ 'HELLO' Even if __getattr__ were called, the implementation deliberately avoids relaying dunder attributes (like __doc__): def __getattr__(self, attr): ... # We are careful for copy and pickle. # Also for simplicity we don't relay any dunder names if '__origin__' in self.__dict__ and not _is_dunder(attr): return getattr(self.__origin__, attr) raise AttributeError(attr)
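If you need the docstring at runtime anyway, one workaround is to go back to the unsubscripted class behind the alias — a sketch using typing.get_origin (or the __origin__ attribute the relay code relies on):
from typing import get_origin

alias = Test[int]
assert get_origin(alias) is Test                   # the original generic class
assert get_origin(alias).__doc__ == "My docstring"
assert alias.__origin__.__doc__ == "My docstring"  # same thing via __origin__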
3
2
79,159,200
2024-11-5
https://stackoverflow.com/questions/79159200/how-to-fill-spaces-between-subplots-with-a-color-in-matplotlib
With the following code : nb_vars=4 fig, axs = plt.subplots(4,4,figsize=(8,8), gridspec_kw = {'wspace':0.20, 'hspace':0.20}, dpi= 100) for i_ax in axs: for ii_ax in i_ax: ii_ax.set_yticklabels([]) for i_ax in axs: for ii_ax in i_ax: ii_ax.set_xticklabels([]) The space between the subplots is white. How is it possible to colour them ? And with different colors ? See for example this figure :
You could add patches in between the axes: from matplotlib import patches nb_vars=4 # colors between two axes in a row r_colors = [['#CC0000', '#CC0000', '#CC0000'], ['#0293D8', '#0293D8', '#0293D8'], ['#FF8E00', '#FF8E00', '#FF8E00'], ['#ABB402', '#ABB402', '#ABB402'], ] # colors between two axes in a column c_colors = [['#CC0000', '#0293D8', '#FF8E00', '#ABB402'], ['#CC0000', '#0293D8', '#FF8E00', '#ABB402'], ['#CC0000', '#0293D8', '#FF8E00', '#ABB402'], ] fig, axs = plt.subplots(4, 4, figsize=(4, 4), gridspec_kw = {'wspace':0.20, 'hspace':0.20}, dpi= 100) h, w = axs.shape for r, i_ax in enumerate(axs): for c, ii_ax in enumerate(i_ax): ii_ax.set_yticklabels([]) ii_ax.set_xticklabels([]) ii_ax.plot([-r, r], [-c, c]) # plot dummy line for demo bbox = ii_ax.get_position() if r+1 < h: ii_ax.add_patch(patches.Rectangle((bbox.x0, bbox.y0), bbox.width, -0.2, facecolor=c_colors[r][c], zorder=-1, clip_on=False, transform=fig.transFigure, figure=fig )) if c+1 < w: ii_ax.add_patch(patches.Rectangle((bbox.x1, bbox.y0), 0.2, bbox.height, facecolor=r_colors[r][c], zorder=-1, clip_on=False, transform=fig.transFigure, figure=fig )) Output:
3
3
79,158,742
2024-11-5
https://stackoverflow.com/questions/79158742/i-want-to-change-values-of-a-string-to-upper-case-if-its-even-what-am-i-doing-w
def func(string): stringlist= string.split() stringlist =[] for pos in string: stringlist.append(pos) for pan in stringlist: if pan %2 ==0: pan.upper() print(stringlist) func("Hello") Tried everything I know. I just get an error about TypeError: not all arguments converted during string formatting
You could make use of enumerate, which returns index and value within the string: def func(string): return ''.join(char.upper() if index % 2 == 0 else char for index, char in enumerate(string)) print(func("Hello")) HeLlO
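For comparison, a loop-based version closer to the original attempt — the TypeError in the question comes from pan % 2, which applies %-string-formatting to a character instead of testing its position:
def func(string):
    chars = []
    for pos, char in enumerate(string):   # pos is the index, char the letter
        chars.append(char.upper() if pos % 2 == 0 else char)
    return "".join(chars)

print(func("Hello"))   # HeLlO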
1
2
79,157,939
2024-11-5
https://stackoverflow.com/questions/79157939/how-can-i-rewrite-the-complex-number-z-5i-into-standard-form-z-coslog5
I would like to write complex numbers z into the standard form z = a + i b with a and b real numbers. For most of my cases, the sympy construct z.expand(complex=True) does what I am expecting but not in all cases. For instance, I fail to rewrite z = 5**sp.I and SymPy just gives back the input: In [1]: import sympy as sp In [2]: c1 = 2 * sp.sqrt(2) * sp.exp(-3 * sp.pi * sp.I / 4) In [3]: c1.expand(complex=True) # works as expected Out[3]: -2 - 2*I In [4]: c2 = 5**(sp.I) # SymPy fails here In [5]: c2.expand(complex=True) Out[5]: re(5**I) + I*im(5**I) In [6]: sp.__version__ Out[6]: '1.13.2' For c2, I would expect the conversion to give me cos(log(5)) + i * sin(log(5)). Is there a way to obtain this result?
First convert it to exponential form, then convert to trig. from sympy import I, cos, exp expr = 5 ** I expr = expr.rewrite(exp).rewrite(cos) print( expr ) Output: cos(log(5)) + I*sin(log(5))
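A quick numeric check that the trigonometric form agrees with the original power (re/im pull out the real and imaginary parts):
import sympy as sp

expr = (5 ** sp.I).rewrite(sp.exp).rewrite(sp.cos)
print(sp.re(expr), sp.im(expr))   # cos(log(5)) sin(log(5))
print(sp.N(expr))                 # approx. -0.0386 + 0.9993*I
print(sp.N(5 ** sp.I))            # same value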
1
4
79,158,209
2024-11-5
https://stackoverflow.com/questions/79158209/groupby-a-df-column-based-on-2-other-columns
I have an df which has 3 columns lets say Region, Country and AREA_CODE. Region Country AREA_CODE =================================== AMER US A1 AMER CANADA A1 AMER US B1 AMER US A1 I want to get the output like list of AREA_CODE for each country under each Region with 'ALL' as list value as well. something like { "AMER": { "US": ["ALL", "A1", "B1"], "CANADA": ["ALL", "A1"] } } So far i have tried to groupby both region and country column and then tried to group & agg it by AREA_CODE, it is throwing error df.drop_duplicates().groupby(["Region", "Country"]).groupby("Country")['AREA_CODE'].agg(lambda x: ["ALL"]+sorted(x.unique().tolist())).to_dict() Could someone kindly help me with this. Thanks,
In order to have an extra level of dictionary nesting, you need to perform an additional groupby. This is most easily done in a dictionary comprehension: out = {k: {k2: ['ALL']+sorted(v2.unique().tolist()) for k2, v2 in v.groupby('Country')['AREA_CODE'] } for k, v in df.drop_duplicates().groupby('Region') } Or using a single groupby and a for loop: out = {} for (k1, k2), v in df.groupby(['Region', 'Country'])['AREA_CODE']: out.setdefault(k1, {})[k2] = ['ALL']+sorted(v.unique()) Output: {'AMER': {'CANADA': ['ALL', 'A1'], 'US': ['ALL', 'A1', 'B1'], }, } Example with more regions: # input Region Country AREA_CODE 0 AMER US A1 1 AMER CANADA A1 2 AMER US B1 3 AMER US A1 4 ASIA INDIA C1 # output {'AMER': {'CANADA': ['ALL', 'A1'], 'US': ['ALL', 'A1', 'B1']}, 'ASIA': {'INDIA': ['ALL', 'C1']}}
2
1
79,157,442
2024-11-5
https://stackoverflow.com/questions/79157442/how-to-create-multi-channel-tif-image-with-python
I have set of microscopy images, each subset has two channels which I need to merge into one .tif image (Each merged image should contain two channels) import cv2 as cv im1 = cv.imread('/img1.tif', cv.IMREAD_UNCHANGED) im2 = cv.imread('/img1.tif', cv.IMREAD_UNCHANGED) im3 = im1 + im2 cv.imwrite('img3.tif', im3) That way it creates a file with one channel, but I'd like to get 16-bit .tif image with two channels that can be viewed separately in ImageJ, for example. Which way it can be corrected?
If you wanted to generate a TIFF file containing a "stack" of images, you don't need tifffile. You can use OpenCV's own function cv.imwritemulti(): import cv2 as cv im1 = cv.imread('img1.tif', cv.IMREAD_UNCHANGED) im2 = cv.imread('img1.tif', cv.IMREAD_UNCHANGED) im3 = [im1, im2] # a stack of two cv.imwritemulti('img3.tif', im3)
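To confirm the two channels round-trip, the pages can be read back with the matching cv.imreadmulti call:
import cv2 as cv

ok, pages = cv.imreadmulti('img3.tif', flags=cv.IMREAD_UNCHANGED)
print(ok, len(pages), pages[0].dtype)   # expect: True 2, with the dtype of the source images (e.g. uint16)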
2
3
79,157,914
2024-11-5
https://stackoverflow.com/questions/79157914/why-io-bytesio-is-not-a-subclass-of-typing-binaryio-and-io-stringio-is-neither
When use match-case pattern, I found that case typing.BinaryIO(): can not match object with type io.BytesIO. So I try this: import io import typing assert issubclass(list, typing.Sequence) assert issubclass(list, typing.List) assert issubclass(dict, typing.Mapping) assert issubclass(dict, typing.Dict) # assert issubclass(io.StringIO, typing.TextIO) # failed! # assert issubclass(io.BytesIO, typing.BinaryIO) # failed! a = [1, 2, 3] b = {"a": 1, "b": 2, "c": 3} c = io.BytesIO(b"123123123") d = io.StringIO("123123123") assert isinstance(a, typing.List) assert isinstance(a, typing.Sequence) assert isinstance(b, typing.Dict) assert isinstance(b, typing.Mapping) # assert isinstance(c, typing.BinaryIO) # failed! # assert isinstance(d, typing.TextIO) # failed! It shows that io.BytesIO and io.StringIO are not subclass of typing.BinaryIO and typing.TextIO, which in my opinion is strange since official documents never hint me to be carefule about this behaviour (or at least I never found it). What is more strange is a .pyi stub file i found, its path is /data/users/XXXXXX/.vscode-server/extensions/ms-python.vscode-pylance-2024.11.1/dist/typeshed-fallback/stdlib/_io.pyi. I found this line in the stub file: class BytesIO(BufferedIOBase, _BufferedIOBase, BinaryIO):, which implies that things should be as they should be for my intuition. Is this intentional design or a bug? If is a design, why? I'm using Python 3.12.3 with mypy 1.13.0, under VSCode 1.93.1 with pylance 2024.11.1 extension.
Classes in the typing module are not meant to be used for instance / subclass checks at runtime. Their purpose is type hinting. If you want to use instance / subclass checks like that, check out collections.abc, especially this overview. For io.StringIO and io.BytesIO, their abstract base classes are io.TextIOBase and io.BufferedIOBase. Their common base class is io.IOBase.
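Since the original problem was a match statement, a sketch of the same check with the io base classes (or the concrete classes themselves) as class patterns:
import io

def describe(stream):
    match stream:
        case io.TextIOBase():                       # io.StringIO, open(..., "r"), ...
            return "text stream"
        case io.BufferedIOBase() | io.RawIOBase():  # io.BytesIO, open(..., "rb"), ...
            return "binary stream"
        case _:
            return "not a stream"

print(describe(io.StringIO("abc")))    # text stream
print(describe(io.BytesIO(b"abc")))    # binary stream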
1
3
79,157,504
2024-11-5
https://stackoverflow.com/questions/79157504/how-to-append-an-executable-sum-formula-to-google-sheets-using-python
I'm writing a Python script that uploads a Polars DataFrame to Google Sheets and formats it accordingly. One of my goals is to create a summary row at the bottom of the table that sums the numerical values for each column. Currently, I have the following code snippet that successfully constructs a summary row: # Add a summary row at the end of the data num_rows = len(data) total_row = ['Grand Total', ""] for col in range(2, len(header)): total_formula = f'=SUM({chr(65 + col)}2:{chr(65 + col)}{num_rows})' total_row.append(total_formula) new_sheet.append_row(total_row) The problem is, I've encountered an issue where this code creates a string in the grand total row that includes a leading single quote, like this: '=SUM(F2:F47) This prevents Google Sheets from executing it as a formula. My Question: How can I modify this code to ensure that it writes an executable formula to Google Sheets without the leading single quote? Any insights or suggestions would be greatly appreciated! Additional Context: Packages use include gspread oauth2client polars gspread-formatting Data Structure: I’m using a Polars DataFrame to manage my data before uploading it to Google Sheets. Thank you!
The default value_input_option is set to RAW in gspread's append_row method. The input data won't be parsed, if set to RAW according to sheets API. Set it to USER_ENTERED: new_sheet.append_row(total_row, value_input_option='USER_ENTERED') Or new_sheet.append_row(total_row, value_input_option=gspread.utils.ValueInputOption.user_entered)
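Applied to the summary row from the question (header, data and new_sheet assumed to exist as defined there):
num_rows = len(data)
total_row = ['Grand Total', '']
for col in range(2, len(header)):
    total_row.append(f'=SUM({chr(65 + col)}2:{chr(65 + col)}{num_rows})')

new_sheet.append_row(total_row, value_input_option='USER_ENTERED')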
1
2
79,157,015
2024-11-4
https://stackoverflow.com/questions/79157015/how-to-place-dataframe-data-in-one-unique-index
I've got the next code: data = [{'TpoMoneda': 'UYU'}, {'MntNetoIvaTasaMin': '3825.44'}, {'IVATasaMin': '10.000'}, {'IVATasaBasica': '22.000'}, {'MntIVATasaMin': '382.54'}, {'MntTotal': '4207.98'}, {'MntTotRetenido': '133.90'}, {'CantLinDet': '2'}, {'RetencPercep': None}, {'RetencPercep': None}, {'MontoNF': '0.12'}, {'MntPagar': '4342.00'}] df = pd.DataFrame.from_dict(data, orient = 'columns') My dataframe looks like this: # TpoMoneda MntNetoIvaTasaMin IVATasaMin ... RetencPercep MontoNF MntPagar # 0 UYU NaN NaN ... NaN NaN NaN # 1 NaN 3825.44 NaN ... NaN NaN NaN # 2 NaN NaN 10.000 ... NaN NaN NaN # 3 NaN NaN NaN ... NaN NaN NaN # 4 NaN NaN NaN ... NaN NaN NaN # 5 NaN NaN NaN ... NaN NaN NaN # 6 NaN NaN NaN ... NaN NaN NaN # 7 NaN NaN NaN ... NaN NaN NaN # 8 NaN NaN NaN ... NaN NaN NaN # 9 NaN NaN NaN ... NaN NaN NaN # 10 NaN NaN NaN ... NaN 0.12 NaN # 11 NaN NaN NaN ... NaN NaN 4342.00 # # [12 rows x 11 columns] I want to have all the data in one unique index (1 row x 11 columns), for example: # TpoMoneda MntNetoIvaTasaMin IvaTasaMin ... RetencPercep MontoNF MntPagar # 0 UYU 3825.44 10.000 Nan 0.12 4342.00 # # [1 rows x 11 columns] Despite I don´t speak english very well, I´m open to trying understand all the solutions someone can give me. Thank all!
You can use map + pd.transpose like below pd.DataFrame( map(lambda x: df[x].dropna().values, df.columns), index=df.columns ).transpose() which gives TpoMoneda MntNetoIvaTasaMin IVATasaMin IVATasaBasica MntIVATasaMin MntTotal \ 0 UYU 3825.44 10.000 22.000 382.54 4207.98 MntTotRetenido CantLinDet RetencPercep MontoNF MntPagar 0 133.90 2 None 0.12 4342.00 Or, you can try pd.aggregate like below pd.DataFrame( df.agg(lambda x: x.dropna().values[0] if any(~x.isna()) else np.nan, axis=0) ).transpose() which gives TpoMoneda MntNetoIvaTasaMin IVATasaMin IVATasaBasica MntIVATasaMin MntTotal \ 0 UYU 3825.44 10.000 22.000 382.54 4207.98 MntTotRetenido CantLinDet RetencPercep MontoNF MntPagar 0 133.90 2 NaN 0.12 4342.00
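A shorter alternative, assuming (as in the example) each column holds at most one non-null value: back-fill and keep only the first row.
import pandas as pd

df = pd.DataFrame.from_dict(data, orient='columns')
out = df.bfill().iloc[[0]]   # first non-null per column, all-None columns stay None
print(out)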
2
0
79,148,243
2024-11-1
https://stackoverflow.com/questions/79148243/transposing-within-a-polars-df-lead-to-typeerror-not-yet-implemented-nested-o
I have this data: ┌────────────┬─────────────────────────────────────┐ │ col1 ┆ col2 │ │ --- ┆ --- │ │ list[str] ┆ list[list[str]] │ ╞════════════╪═════════════════════════════════════╡ │ ["a"] ┆ [["a"]] │ │ ["b", "c"] ┆ [["b", "c"], ["b", "c"], ["b", "c"] │ │ ["d"] ┆ [["d"]] │ └────────────┴─────────────────────────────────────┘ I want to have all b's and all c's in the same list in in row 2, but as you can see the associations of b to b and c to c are not maintained in row 2. With pandas I used: import pandas as pd pddf = pd.DataFrame({"col1": [["a"], ["b", "c"], ["d"]], "col2": [[["a"]], [["b", "c"], ["b", "c"], ["b", "c"]], [["d"]]]}) pddf["col2"] = pddf["col2"].apply(lambda listed: pd.DataFrame(listed).transpose().values.tolist()) print(pddf) # col1 col2 # 0 [a] [[a]] # 1 [b, c] [[b, b, b], [c, c, c]] # 2 [d] [[d]] This is the desired result. I am trying to do the same with polars, by replacing pddf.transpose().values.tolist() with pldf.transpose().to_numpy().tolist(), but I always get and TypeError: not yet implemented: Nested object types. Are there any workarounds? Here is the complete polars code: import polars as pl pldf = pl.DataFrame({"col1": [["a"], ["b", "c"], ["d"]], "col2": [[["a"]], [["b", "c"], ["b", "c"], ["b", "c"]], [["d"]]]}) pldf = pldf.with_columns(pl.col("col2").map_elements(lambda listed: pl.DataFrame(listed).transpose().to_numpy().tolist())) print(pldf) # TypeError: not yet implemented: Nested object types # Hint: Try setting `strict=False` to allow passing data with mixed types. Where would I need to apply the mentioned strict=False? On an easier df pddf.transpose().values.tolist() and pldf.transpose().to_numpy().tolist() are the same: import pandas as pd import polars as pl pd.DataFrame( {"col1": ["a", "b", "c"], "col2": ["d", "e", "f"]} ).transpose().values.tolist() == pl.DataFrame( {"col1": ["a", "b", "c"], "col2": ["d", "e", "f"]} ).transpose().to_numpy().tolist() # True Please keep as close as possible to the code, even though it's not ideal using .apply() or .map_elements(), but this is in a far greater project and I don't want to break anything else :). (EDIT: I simplified the code a little since the second lambda wasn't really necessary for the question.)
As you've mentioned performance in the comments - this may also be of interest. Building upon @Herick's answer - when you add an index and explode, you are guaranteed to have sorted data. With sorted data, you can add a "row number per group" using rle() - which can be significantly faster. (some discussion here: https://github.com/pola-rs/polars/issues/19089) (df.with_row_index("row") .select("row", "col2") .explode("col2") .with_row_index() .explode("col2") .with_columns( pl.int_ranges(pl.col("index").rle().struct.field("len")) .flatten() .alias("list_idx") ) ) shape: (8, 4) ┌───────┬─────┬──────┬──────────┐ │ index ┆ row ┆ col2 ┆ list_idx │ │ --- ┆ --- ┆ --- ┆ --- │ │ u32 ┆ u32 ┆ str ┆ i64 │ ╞═══════╪═════╪══════╪══════════╡ │ 0 ┆ 0 ┆ a ┆ 0 │ │ 1 ┆ 1 ┆ b ┆ 0 │ │ 1 ┆ 1 ┆ c ┆ 1 │ │ 2 ┆ 1 ┆ b ┆ 0 │ │ 2 ┆ 1 ┆ c ┆ 1 │ │ 3 ┆ 1 ┆ b ┆ 0 │ │ 3 ┆ 1 ┆ c ┆ 1 │ │ 4 ┆ 2 ┆ d ┆ 0 │ └───────┴─────┴──────┴──────────┘ .map_batches() can be used to wrap the frame exploding/group_by logic into an expression context. df = pl.DataFrame({ "col1": [["a"], ["b", "c"], ["d"]], "col2": [[["a"]], [["b", "c"], ["b", "c"], ["b", "c"]], [["d"]]], "col3": [[[1]], [[2, 3], [4, 5], [6, 7]], [[8]]] }) df.with_columns( pl.col("col2", "col3").map_batches(lambda s: s.to_frame() .with_row_index("row") .explode(pl.last()) .with_row_index() .explode(pl.last()) .with_columns( pl.int_ranges(pl.col("index").rle().struct.field("len")) .flatten() .alias("index") ) .group_by("index", "row", maintain_order=True) # order required for "horizontal concat" .agg(pl.last()) .group_by("row", maintain_order=True) # order required for "horizontal concat" .agg(pl.last()) .select(pl.last()) .to_series() # must return a series ) ) shape: (3, 3) ┌────────────┬─────────────────────────────────┬────────────────────────┐ │ col1 ┆ col2 ┆ col3 │ │ --- ┆ --- ┆ --- │ │ list[str] ┆ list[list[str]] ┆ list[list[i64]] │ ╞════════════╪═════════════════════════════════╪════════════════════════╡ │ ["a"] ┆ [["a"]] ┆ [[1]] │ │ ["b", "c"] ┆ [["b", "b", "b"], ["c", "c", "… ┆ [[2, 4, 6], [3, 5, 7]] │ │ ["d"] ┆ [["d"]] ┆ [[8]] │ └────────────┴─────────────────────────────────┴────────────────────────┘ As a basic comparison, if I sample 5_000_000 rows, I get: name time map_elements 61.19s map_batches 7.14s
3
0
79,155,921
2024-11-4
https://stackoverflow.com/questions/79155921/wagtail-cms-programmatically-enabling-user-access-to-manage-pages-within-the-a
Context

Wagtail CMS has a permission system that builds on Django's. However, customizing it for users that are neither admins nor members of the pre-made groups Moderator or Editor is unclear. Presently, I have:

A custom user class, StudentUser
Pages arranged in the below hierarchy:

        Program
           |
         Course
        /  |  \
  Report Labs Events

I'd like to add a group, Student, which can add/submit pages of type Report. As stated in the title, I require a programmatic solution. Having an admin go through and personally assign permissions is not acceptable.

Problem

Wagtail provides only one programmatic code example, which is for adding a custom permission here in their documentation:

from django.contrib.auth.models import Permission
from django.contrib.contenttypes.models import ContentType
from wagtail.admin.models import Admin

content_type = ContentType.objects.get_for_model(Admin)
permission = Permission.objects.create(
    content_type=content_type,
    codename="can_do_something",
    name="Can do something",
)

Technically, I don't need a custom permission. I need to grant a Group a custom set of existing permissions. To accomplish this, I've attempted the following:

from django.contrib.auth.models import Permission, Group
from example.website.models import StudentUser

group, created = Group.objects.get_or_create(name="Student")
add_report = Permission.objects.get(codename="add_reportpage")
change_report = Permission.objects.get(codename="change_reportpage")
group.permissions.add(add_report, change_report)

user = StudentUser.objects.get(email="[email protected]")
user.groups.add(group)

Unfortunately, this hasn't worked. When I log into the backend with a user of type StudentUser (the account to which I granted the permissions), there's nothing but the Documents tab (which is there by default) in the navigation menu. Nowhere can I see a place to modify Report. If I try to copy the exact Report URL path used in the admin login, it doesn't allow access when logged in as the StudentUser, despite the added permissions.

Further debugging

To figure out if there are some other types of permissions I'm missing, I listed the permissions for all groups. Then, I copied them for my new group. Below are the permissions of the built-in Moderators group and the ones now assigned to my Student group:

Moderators
<Permission: Wagtail admin | admin | Can access Wagtail admin>,
<Permission: Wagtail documents | document | Can add document>,
<Permission: Wagtail documents | document | Can change document>,
<Permission: Wagtail documents | document | Can choose document>,
<Permission: Wagtail documents | document | Can delete document>,
<Permission: Wagtail images | image | Can add image>,
<Permission: Wagtail images | image | Can change image>,
<Permission: Wagtail images | image | Can choose image>,
<Permission: Wagtail images | image | Can delete image>

Student
<Permission: Wagtail admin | admin | Can access Wagtail admin>,
<Permission: Website | report index | Can view report index>,
<Permission: Website | report | Can add report>,
<Permission: Website | report | Can change report>,
<Permission: Website | report | Can delete report>,
<Permission: Website | report | Can view report>

As you can see, it would appear that admin access is the only prerequisite for seeing content in the admin interface. Despite this, logging in as a Student still shows no changes in the Wagtail admin panel. And attempts to access any report in the admin interface (as you would as admin) continue to get a permission denied return code.
What step am I missing here to ensure custom groups can access content in the admin interface?
Page permissions in Wagtail are determined by position within the page hierarchy rather than by page type. So, while it's not possible to directly give students edit permission over the Report page model - if all your Report pages exist under a single ReportIndex (something that can be enforced through the use of parent_page_types / subpage_types rules), you can give them add/change permission over that ReportIndex page, which will thus allow them to create Report pages there. Page permissions are defined through the wagtail.models.GroupPagePermission model, which is distinct from Django's built-in group/permission relation. To set such a permission rule up, you can do the following: from wagtail.models import GroupPagePermission from django.contrib.auth.models import Permission, Group from myapp.models import ReportIndex students, created = Group.objects.get_or_create(name="Student") add_page = Permission.objects.get(codename="add_reportpage") report_index_page = ReportIndex.objects.first() GroupPagePermission.objects.create( group=students, page=report_index_page, permission=add_page, )
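If the group and its page permission need to be created with no manual steps at all, the same calls can be wrapped in a Django data migration (or a post_migrate hook) so they run automatically on deploy. A rough sketch, reusing the codenames from above; the app label, file name and migration dependency are placeholders you would adjust to your project:

# myapp/migrations/0002_create_student_group.py  (name and dependency are placeholders)
from django.db import migrations


def create_student_group(apps, schema_editor):
    # plain imports for brevity; for strict historical correctness you may
    # prefer apps.get_model() for the Django models
    from django.contrib.auth.models import Group, Permission
    from wagtail.models import GroupPagePermission
    from myapp.models import ReportIndex

    students, _ = Group.objects.get_or_create(name="Student")
    # "Can access Wagtail admin" is what makes the admin UI visible at all
    students.permissions.add(Permission.objects.get(codename="access_admin"))

    report_index = ReportIndex.objects.first()
    if report_index is None:
        return  # no index page in the tree yet, nothing to attach the rule to

    GroupPagePermission.objects.create(
        group=students,
        page=report_index,
        permission=Permission.objects.get(codename="add_reportpage"),
    )


class Migration(migrations.Migration):
    dependencies = [("myapp", "0001_initial")]
    operations = [migrations.RunPython(create_student_group, migrations.RunPython.noop)]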
2
2
79,152,821
2024-11-3
https://stackoverflow.com/questions/79152821/python-selenium-script-not-retrieving-product-price-from-a-webpage
I'm trying to scrape product prices from the website Ultra Liquors using Python and Selenium, but I'm unable to retrieve the price despite the HTML containing the expected elements. My goal is to compare prices from several shops to find the best deals or any ongoing specials for our venue. Here's the code I'm using: from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By # Set up Chrome options options = Options() options.add_argument('--headless') # Run in headless mode options.add_argument('--no-sandbox') options.add_argument('--disable-dev-shm-usage') # Initialize Chrome driver service = Service('path_to_chromedriver') # Replace with your path to chromedriver driver = webdriver.Chrome(service=service, options=options) # Open the product page driver.get('https://george.ultraliquors.co.za/olof-bergh-olof-bergh-brandy-750ml') try: # Attempt to retrieve product name product_name = driver.find_element(By.XPATH, '//h1[@class="product-title"]').text print(f"Product Name: {product_name}") except Exception as e: print(f"Could not locate product name: {e}") try: # Attempt to retrieve price price_element = driver.find_element(By.CLASS_NAME, 'price-value-10677') price = price_element.text print(f"Price: {price}") except Exception as e: print(f"Could not locate price: {e}") # Close the driver driver.quit() I expect to get the price value 'R169.99', but the script is not finding it and returns an error message. I've tried using different element locators and checking if the element is dynamically loaded. I'm using Python 3.12, Selenium 4.8, and ChromeDriver. Any help would be greatly appreciated!
You should use bs4 because bs4 is faster than selenium (if you don't have to deal with bot protection), I used this endpoint https://george.ultraliquors.co.za/getFilteredProducts which accepts a POST request with JSON body, you can get all the product prices under the category of SPIRITS --> BRANDY and your catagoryId is 4 there are lots of category in your target apps out of them your target category number is 4, here is the sample code with bs4 and requests library, Sample Code: import requests from bs4 import BeautifulSoup import json # https://www.makro.co.za Result as you said url = 'https://www.makro.co.za/makhybris/v2/makro/category/JG/search?channelType=WEB&fields=LIGHT&query=:relevance&userType=B2C&pos=M10' def getPrice_two(url): resp = requests.get(url) for code in resp.json()['contentSlots']['contentSlot'][0]['components']['component'][0]['facetTileDataList']: data = {"variables":{"categoryId":f"{code['code']}","keyword":"","filterQuery":{},"offset":0,"sortBy":"relevance","sortOrder":"desc","storeId":"M10","dynamicPriceRange":True,"customerDetails":{"customerType":"B2C","targetGroups":[]}}} n, data_next = 0, data while True: url = 'https://www.makro.co.za/wmapi/bff/graphql/CategoryListing/49bf7b2507b2c0ad40dc253614b8d5fb9b1834cb677a6c558aba06f4f399ff9f?channelType=WEB' header = { "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:131.0) Gecko/20100101 Firefox/131.0", "Wm_tenant_id": "30" } resp = requests.post(url, json=data_next, headers=header) for i in resp.json()['data']['categoryListing']['data']['results']['items']: values = [i['itemDetails']['itemInfo']['genericName'],i['itemDetails']['price']['basePrice']] print(values) try: pagination = resp.json()['data']['categoryListing']['data']['results']['pagination']['nextPageOffset'] data_next = {"variables":{"categoryId":f"{code['code']}","keyword":"","filterQuery":{},"offset":pagination,"sortBy":"relevance","sortOrder":"desc","storeId":"M10","dynamicPriceRange":True,"customerDetails":{"customerType":"B2C","targetGroups":[]}}} except TypeError: break getPrice_two(url) #https://shop.liquorcity.co.za Result as you said def getUserId(store_name): url = 'https://shop.liquorcity.co.za/api/marketplace/marketplace_get_city_storefronts_v3?domain_name=shop.liquorcity.co.za&post_to_get=1&marketplace_reference_id=01c838877c7b7c7d9f15b8f40d3d2980&marketplace_user_id=742314&latitude=-26.1192269&longitude=28.0264195&filters=undefined&skip=0&limit=250&self_pickup=1&source=0&dual_user_key=0&language=en' resp = requests.get(url).json() for i in resp['data']: if store_name in i['storepage_slug']: return i['storefront_user_id'] def getCatagory(store_name): user_id = getUserId(store_name) url = 'https://shop.liquorcity.co.za/api/catalogue/get' data = {"marketplace_user_id":742314,"user_id":user_id,"date_time":"2024-11-04T10:52:56.815Z","show_all_sub_categories":1,"domain_name":"shop.liquorcity.co.za","dual_user_key":0,"language":"en"} resp = requests.post(url, json=data).json() catagory_ids = [] for i in resp['data']['result']: catagory_ids.append(i['catalogue_id']) for sub in i['sub_categories']: catagory_ids.append(sub['catalogue_id']) for id in catagory_ids: url = 'https://shop.liquorcity.co.za/api/get_products_for_category' data_ctgry = data = {"parent_category_id":id,"page_no":1,"offset":0,"limit":500,"marketplace_user_id":742314,"user_id":user_id,"date_time":"2024-11-04T11:46:05.410Z","domain_name":"shop.liquorcity.co.za","dual_user_key":0,"language":"en"} resp = requests.post(url, json=data).json() for i in resp['data']: values = 
[i['layout_data']['lines'][0]['data'], i['layout_data']['lines'][1]['data'], i['layout_data']['lines'][2]['data']] print(values) getCatagory('Mosselbay') #https://www.checkers.co.za Please add a loop for all pages i just included page 1 for the demo def getPrice(searchItem, page): url = f'https://www.checkers.co.za/c-2256/All-Departments?q=%3Arelevance%3AallCategories%3A{searchItem}%3AbrowseAllStoresFacetOff%3AbrowseAllStoresFacetOff&page={page}' header = { "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:131.0) Gecko/20100101 Firefox/131.0" } resp = requests.get(url, headers=header).text soup = BeautifulSoup(resp, 'lxml') for i in soup.findAll(class_='product-frame'): value = json.loads(i['data-product-ga']) name = value['name'] price = value['price'] img = value['product_image_url'] data = [name, price] print(data) getPrice('drinks', '1') #https://www.ngf.co.za Result as you said page = -1 while True: page = page + 1 url = f'https://www.ngf.co.za/product-category/spirits/page/{page}/' resp = requests.get(url).text soup = BeautifulSoup(resp, 'lxml') get_details = soup.findAll(class_='box-text box-text-products text-center grid-style-2') if get_details: for i in get_details: details = [i.find('a').text, i.find('bdi').text] print(details) else: break #https://george.ultraliquors.co.za Previous One url = "https://george.ultraliquors.co.za/getFilteredProducts" page = 0 while True: page = page + 1 data = {"categoryId":"3","manufacturerId":"0","vendorId":"0","pageNumber":page,"orderby":"5","viewmode":None,"pagesize":0,"queryString":"","shouldNotStartFromFirstPage":True,"keyword":"","searchCategoryId":"0","searchManufacturerId":"0","searchVendorId":"0","priceFrom":"","priceTo":"","includeSubcategories":"False","searchInProductDescriptions":"False","advancedSearch":"False","isOnSearchPage":"False","inStockFilterModel":None} header = { "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:131.0) Gecko/20100101 Firefox/131.0" } r = requests.post(url, json=data, headers=header).text soup = BeautifulSoup(r, 'lxml') html_page = soup.findAll('div', class_='product-item product-box-product-item product-box-grid') if html_page: for i in html_page: product = i.find(class_="product-title-product-box").text.strip() price = f"{i.find(class_='price actual-price').text.strip()}.{i.find(class_='price actual-price-cents').text.strip()}" size = i.find(class_='desktop-product-box-pack-size').text.strip() product_url = f"https://george.ultraliquors.co.za{i.find(class_='product-title-product-box')['href']}" all_details = [product, price, size, product_url] print(all_details) else: break Sample output: ['1000 POUNDER RUM', 'R249.99', '1 x 750ML', 'https://george.ultraliquors.co.za/1000-pounder-1000-pounder-rum-750ml-2'] ['1000 POUNDER RUM', 'R1479.00', '6 x 750ML', 'https://george.ultraliquors.co.za/1000-pounder-1000-pounder-rum-750ml-x-6'] ['ABERLOUR 12YR MALT TIN', 'R809.99', '1 x 750ML', 'https://george.ultraliquors.co.za/aberlour-aberlour-12yr-malt-tin-750ml-3'] ['ABERLOUR 12YR MALT TIN', 'R4779.00', '6 x 750ML', 'https://george.ultraliquors.co.za/aberlour-aberlour-12yr-malt-tin-750ml-x-6'] ['ABERLOUR 16YR MALT TIN', 'R1539.00', '1 x 750ML', 'https://george.ultraliquors.co.za/aberlour-aberlour-16yr-malt-tin-750ml'] ['ABERLOUR 16YR MALT TIN', 'R8419.00', '6 x 750ML', 'https://george.ultraliquors.co.za/aberlour-aberlour-16yr-malt-tin-750ml-x-6'] ['ABSOLUT VODKA BLUE', 'R249.99', '1 x 750ML', 'https://george.ultraliquors.co.za/absolut-absolut-vodka-blue-750ml'] ['ABSOLUT VODKA BLUE', 'R2949.00', '12 x 
750ML', 'https://george.ultraliquors.co.za/absolut-absolut-vodka-blue-750ml-x-12'] ['ABSOLUT VODKA GRAPEFRUIT', 'R249.99', '1 x 750ML', 'https://george.ultraliquors.co.za/absolut-absolut-vodka-grapefruit-750ml'] ['ABSOLUT VODKA GRAPEFRUIT', 'R2949.00', '12 x 750ML', 'https://george.ultraliquors.co.za/absolut-absolut-vodka-grapefruit-750ml-x-12'] ['ABSOLUT VODKA LIME', 'R249.99', '1 x 750ML', 'https://george.ultraliquors.co.za/absolut-absolut-vodka-lime-750ml'] ['ABSOLUT VODKA LIME', 'R2949.00', '12 x 750ML', 'https://george.ultraliquors.co.za/absolut-absolut-vodka-lime-750ml-x-12'] ['ABSOLUT VODKA RASPBERRI', 'R249.99', '1 x 750ML', 'https://george.ultraliquors.co.za/absolut-absolut-vodka-raspberri-750ml-2'] ['ABSOLUT VODKA RASPBERRI', 'R2949.00', '12 x 750ML', 'https://george.ultraliquors.co.za/absolut-absolut-vodka-raspberri-750ml-x-12-2'] ['ABSOLUT VODKA WATERMELON', 'R249.99', '1 x 750ML', 'https://george.ultraliquors.co.za/absolut-absolut-vodka-watermelon-750ml'] ['ABSOLUT VODKA WATERMELON', 'R2949.00', '12 x 750ML', 'https://george.ultraliquors.co.za/absolut-absolut-vodka-watermelon-750ml-x-12'] ['AERSTONE SINGLE MALT LAND CASK', 'R444.99', '1 x 750ML', 'https://george.ultraliquors.co.za/aerstone-aerstone-single-malt-land-cask-750ml-2'] ['AERSTONE SINGLE MALT LAND CASK', 'R2619.00', '6 x 750ML', 'https://george.ultraliquors.co.za/aerstone-aerstone-single-malt-land-cask-750ml-x-6'] Let me know if this is ok for your
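As a side note, since the end goal is comparing prices across shops, you could have each of these scrapers append to one shared CSV instead of printing. A small sketch; the write_rows helper and the column layout are just one possible way to do it:

import csv

def write_rows(rows, shop, path="prices.csv"):
    # each row is a list like [product, price, size, url]; "shop" records the source
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for row in rows:
            writer.writerow([shop] + row)

# e.g. collect the all_details lists from the ultraliquors loop into a list first,
# then: write_rows(collected_rows, "ultraliquors-george")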
2
1
79,155,737
2024-11-4
https://stackoverflow.com/questions/79155737/join-differently-nested-lists-in-polars-columns
As you might have recognized from my other questions I am transitioning from pandas to polars right now. I have a polars df with differently nested lists like this: ┌────────────────────────────────────┬────────────────────────────────────┬─────────────────┬──────┐ │ col1 ┆ col2 ┆ col3 ┆ col4 │ │ --- ┆ --- ┆ --- ┆ --- │ │ list[list[str]] ┆ list[list[str]] ┆ list[str] ┆ str │ ╞════════════════════════════════════╪════════════════════════════════════╪═════════════════╪══════╡ │ [["a", "a"], ["b", "b"], ["c", "c"]┆ [["a", "a"], ["b", "b"], ["c", "c"]┆ ["A", "B", "C"] ┆ 1 │ │ [["a", "a"]] ┆ [["a", "a"]] ┆ ["A"] ┆ 2 │ │ [["b", "b"], ["c", "c"]] ┆ [["b", "b"], ["c", "c"]] ┆ ["B", "C"] ┆ 3 │ └────────────────────────────────────┴────────────────────────────────────┴─────────────────┴──────┘ Now I want to join the lists inside out using different separators to reach this: ┌─────────────┬─────────────┬───────┬──────┐ │ col1 ┆ col2 ┆ col3 ┆ col4 │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ str ┆ str │ ╞═════════════╪═════════════╪═══════╪══════╡ │ a+a-b+b-c+c ┆ a+a-b+b-c+c ┆ A-B-C ┆ 1 │ │ a+a ┆ a+a ┆ A ┆ 2 │ │ b+b-c+c ┆ b+b-c+c ┆ B-C ┆ 3 │ └─────────────┴─────────────┴───────┴──────┘ I do this by using map_elements and a for loop, but I guess that is highly inefficient. Is there a polars native way to manage this? Here is my code: import polars as pl df = pl.DataFrame({"col1": [[["a", "a"], ["b", "b"], ["c", "c"]], [["a", "a"]], [["b", "b"], ["c", "c"]]], "col2": [[["a", "a"], ["b", "b"], ["c", "c"]], [["a", "a"]], [["b", "b"], ["c", "c"]]], "col3": [["A", "B", "C"], ["A"], ["B", "C"]], "col4": ["1", "2", "3"]}) nested_list_cols = ["col1", "col2"] list_cols = ["col3"] for col in nested_list_cols: df = df.with_columns(pl.lit(df[col].map_elements(lambda listed: ['+'.join(element) for element in listed], return_dtype=pl.List(pl.String))).alias(col)) # is the return_dtype always pl.List(pl.String)? for col in list_cols + nested_list_cols: df = df.with_columns(pl.lit(df[col].list.join(separator='-')).alias(col))
You could use list.eval() and .list.join() df.with_columns( pl.col(nested_list_cols).list.eval(pl.element().list.join("+")).list.join("-"), pl.col(list_cols).list.join("-") ) shape: (3, 4) ┌─────────────┬─────────────┬───────┬──────┐ │ col1 ┆ col2 ┆ col3 ┆ col4 │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ str ┆ str │ ╞═════════════╪═════════════╪═══════╪══════╡ │ a+a-b+b-c+c ┆ a+a-b+b-c+c ┆ A-B-C ┆ 1 │ │ a+a ┆ a+a ┆ A ┆ 2 │ │ b+b-c+c ┆ b+b-c+c ┆ B-C ┆ 3 │ └─────────────┴─────────────┴───────┴──────┘
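If you'd rather not maintain the nested_list_cols / list_cols name lists by hand, pl.col() also accepts dtypes (as far as I'm aware), so the selection can be driven by the column types instead. A small self-contained sketch using the frame from the question:

import polars as pl

df = pl.DataFrame({
    "col1": [[["a", "a"], ["b", "b"], ["c", "c"]], [["a", "a"]], [["b", "b"], ["c", "c"]]],
    "col2": [[["a", "a"], ["b", "b"], ["c", "c"]], [["a", "a"]], [["b", "b"], ["c", "c"]]],
    "col3": [["A", "B", "C"], ["A"], ["B", "C"]],
    "col4": ["1", "2", "3"],
})

out = df.with_columns(
    # doubly nested string lists: join inner lists with "+", outer with "-"
    pl.col(pl.List(pl.List(pl.String))).list.eval(pl.element().list.join("+")).list.join("-"),
    # plain string lists: join with "-"
    pl.col(pl.List(pl.String)).list.join("-"),
)
print(out)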
2
1
79,153,112
2024-11-3
https://stackoverflow.com/questions/79153112/keyerror-default-when-attempting-to-create-a-table-using-magic-line-sql-in-j
I am attempting to create a new database and create a table using magic line %sql in Jupyter Notebook but I've been getting a KeyError and I'm struggling to work out why, my code is as follows. import sqlite3 as sql import pandas as pd %load_ext sql %sql sqlite:///dataProgramming.db %%sql sqlite:// CREATE TABLE users ( FirstName VARCHAR(30) NOT NULL, LastName VARCHAR(30) NOT NULL, USERID INT NOT NULL UNIQUE, PRIMARY KEY (USERID) ); When I run this I get the following traceback. --------------------------------------------------------------------------- KeyError Traceback (most recent call last) Cell In[3], line 1 ----> 1 get_ipython().run_cell_magic('sql', 'sqlite://', 'CREATE TABLE\n users (\n FirstName VARCHAR(30) NOT NULL,\n LastName VARCHAR(30) NOT NULL,\n USERID INT NOT NULL UNIQUE,\n PRIMARY KEY (USERID)\n );\n') File ~\anaconda3\Lib\site-packages\IPython\core\interactiveshell.py:2541, in InteractiveShell.run_cell_magic(self, magic_name, line, cell) 2539 with self.builtin_trap: 2540 args = (magic_arg_s, cell) -> 2541 result = fn(*args, **kwargs) 2543 # The code below prevents the output from being displayed 2544 # when using magics with decorator @output_can_be_silenced 2545 # when the last Python token in the expression is a ';'. 2546 if getattr(fn, magic.MAGIC_OUTPUT_CAN_BE_SILENCED, False): File ~\anaconda3\Lib\site-packages\sql\magic.py:365, in SqlMagic.execute(self, line, cell, local_ns) 257 @no_var_expand 258 @needs_local_scope 259 @line_magic("sql") (...) 337 ) 338 def execute(self, line="", cell="", local_ns=None): 339 """ 340 Runs SQL statement against a database, specified by 341 SQLAlchemy connect string. (...) 363 364 """ --> 365 return self._execute( 366 line=line, cell=cell, local_ns=local_ns, is_interactive_mode=False 367 ) File ~\anaconda3\Lib\site-packages\ploomber_core\exceptions.py:128, in modify_exceptions.<locals>.wrapper(*args, **kwargs) 125 @wraps(fn) 126 def wrapper(*args, **kwargs): 127 try: --> 128 return fn(*args, **kwargs) 129 except (ValueError, TypeError) as e: 130 _add_community_link(e) File ~\anaconda3\Lib\site-packages\sql\magic.py:624, in SqlMagic._execute(self, line, cell, local_ns, is_interactive_mode) 621 handle_exception(e, command.sql, self.short_errors) 622 except Exception as e: 623 # Handle non SQLAlchemy errors --> 624 handle_exception(e, command.sql, self.short_errors) File ~\anaconda3\Lib\site-packages\sql\error_handler.py:115, in handle_exception(error, query, short_error) 113 _display_error_msg_with_trace(error, detailed_message) 114 else: --> 115 raise error File ~\anaconda3\Lib\site-packages\sql\magic.py:578, in SqlMagic._execute(self, line, cell, local_ns, is_interactive_mode) 575 parameters = user_ns 577 try: --> 578 result = run_statements(conn, command.sql, self, parameters=parameters) 580 if ( 581 result is not None 582 and not isinstance(result, str) (...) 585 # Instead of returning values, set variables directly in the 586 # users namespace. 
Variable names given by column names 588 if self.autopandas or self.autopolars: File ~\anaconda3\Lib\site-packages\sql\run\run.py:65, in run_statements(conn, sql, config, parameters) 58 if ( 59 config.feedback >= 1 60 and hasattr(result, "rowcount") 61 and result.rowcount > 0 62 ): 63 display.message_success(f"{result.rowcount} rows affected.") ---> 65 result_set = ResultSet(result, config, statement, conn) 66 return select_df_type(result_set, config) File ~\anaconda3\Lib\site-packages\sql\run\resultset.py:39, in ResultSet.__init__(self, sqlaproxy, config, statement, conn) 36 self._is_dbapi_results = hasattr(sqlaproxy, "description") 38 # note that calling this will fetch the keys ---> 39 self._pretty_table = self._init_table() 41 self._mark_fetching_as_done = False 43 if self._config.autolimit == 1: 44 # if autolimit is 1, we only want to fetch one row File ~\anaconda3\Lib\site-packages\sql\run\resultset.py:466, in ResultSet._init_table(self) 463 pretty = CustomPrettyTable(self.field_names) 465 if isinstance(self._config.style, str): --> 466 _style = prettytable.__dict__[self._config.style.upper()] 467 pretty.set_style(_style) 469 return pretty KeyError: 'DEFAULT' Any guidance would be greatly appreciated, thank you I've attempted following the traceback through the .py files in the dependencies but I can't work out the issues
The issue reported here looks to be the same one: it points to similar code in the last frames of the traceback and ends in the same error. Following the advice given there, try running a cell like
%config SqlMagic.style = '_DEPRECATED_DEFAULT'
before running the cell magic from your post.
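In notebook terms, the ordering would be something like this (connection string taken from your own cells): set the style right after loading the extension and before any %%sql cell.

%load_ext sql
%config SqlMagic.style = '_DEPRECATED_DEFAULT'   # works around KeyError: 'DEFAULT'
%sql sqlite:///dataProgramming.db

Then run your %%sql CREATE TABLE cell unchanged.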
4
8
79,154,580
2024-11-4
https://stackoverflow.com/questions/79154580/check-if-string-does-not-contain-strings-from-the-list-with-wildcard-when-symb
newlist = ['test', '%ing', 'osh', '16fg']
tartext = 'Singing'
I want to check that my tartext value doesn't match any value in newlist. If a newlist entry contains a % symbol, then I need to treat it as a wildcard character. I want to achieve the condition below:
if (tartext != 'test' and tartext not like '%ing' and tartext != 'osh' and tartext != '16fg') then return true else false
Since %ing from the list contains a '%' symbol, I need to change that comparison to a wildcard character search, like in SQL. In this example 'Singing' matches '%ing', so I expect the condition to return False.
Below is the code that I have tried, but it didn't work:
import re

newlist = ['test', '%ing', 'osh', '16fg']
tartext = 'Singing'

def wildcard_compare(string, match):
    match = match.replace('%', '.*')  # .replace('_','.')
    match_expression = f'^{match}$'
    return bool(re.fullmatch(match_expression, string))

def condition_match(lookupstring, mylist):
    for value in mylist:
        if '%' in value:
            if not wildcard_compare(lookupstring, value):
                return True
        else:
            if value != lookupstring:
                return True
    return False

print(condition_match(tartext, newlist))
print(condition_match('BO_IN', ['AP_IN', 'BO_IN', 'CA_PS']))
One approach would be to use regular expressions. In this approach, we can replace % in newlist with .*. Then, use re.search along with a list comprehension to find any matches.
import re

newlist = ['test', '%ing', 'osh', '16fg']
tartext = 'Singing'

newlist = [r'^' + x.replace('%', '.*') + r'$' for x in newlist]
num_matches = sum([1 for x in newlist if re.search(x, tartext)])

if num_matches > 0:
    print("found a match")  # prints here
else:
    print("no matches")
Edit: See the comment below by @ekhumoro in case your input strings might also contain regex metacharacters.
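Following up on that: if the plain values can themselves contain regex metacharacters, one option is to escape everything first and only then turn % into .* (on Python 3.7+, re.escape leaves % untouched, so the replace still works). A small sketch; the extra 'a.c' entry is added purely to illustrate the escaping:

import re

newlist = ['test', '%ing', 'osh', '16fg', 'a.c']   # 'a.c' would otherwise also match 'abc'
tartext = 'Singing'

patterns = [r'^' + re.escape(x).replace('%', '.*') + r'$' for x in newlist]
matched = any(re.search(p, tartext) for p in patterns)

print("found a match" if matched else "no matches")   # found a match (via '%ing')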
2
0
79,155,290
2024-11-4
https://stackoverflow.com/questions/79155290/dutch-sentiment-analysis-robbertje-outputs-just-positive-negative-labels-netura
When I run the Dutch sentiment analysis model RobBERTje, it outputs just positive/negative labels; the neutral label is missing from the output. https://huggingface.co/DTAI-KULeuven/robbert-v2-dutch-sentiment
There are obviously neutral sentences/words, e.g. 'Fhdf' (nonsense) and 'Als gisteren inclusief blauw' (neutral), but they both evaluate to positive or negative. Is there a way to get neutral labels for such examples in RobBERTje?
from transformers import RobertaTokenizer, RobertaForSequenceClassification
from transformers import pipeline
import torch

model_name = "DTAI-KULeuven/robbert-v2-dutch-sentiment"
model = RobertaForSequenceClassification.from_pretrained(model_name)
tokenizer = RobertaTokenizer.from_pretrained(model_name)

classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)

result1 = classifier('Fhdf')
result2 = classifier('Als gisteren inclusief blauw')

print(result1)
print(result2)
Output:
[{'label': 'Positive', 'score': 0.7520257234573364}]
[{'label': 'Negative', 'score': 0.7538396120071411}]
This model was trained only on negative and positive labels. Therefore, it will try to categorize every input as positive or negative, even if it is nonsensical or neutral. What you can do is:
1- Find another model that was trained to include a neutral label.
2- Fine-tune this model on a dataset that includes a neutral label.
3- Empirically define a threshold based on the confidence outputs and interpret low-confidence predictions as neutral.
The first 2 choices require extensive effort. I would suggest you go with the third option for a quick workaround. Try feeding the model a few neutral inputs and observe the range of confidence scores in the output, then use that threshold to classify as neutral. Here's a sample:
def classify_with_neutral(text, threshold=0.5):
    result = classifier(text)[0]  # Get the classification result
    if result['score'] < threshold:
        result['label'] = 'Neutral'  # Override label to 'Neutral'
    return result
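For example, applied to the two sentences from the question (the 0.8 threshold is purely illustrative; derive it from the confidence range you observe on your own neutral examples):

# reuses the `classifier` pipeline already built in the question
print(classify_with_neutral('Fhdf', threshold=0.8))
print(classify_with_neutral('Als gisteren inclusief blauw', threshold=0.8))
# with the ~0.75 scores reported above, both would now be labelled 'Neutral'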
2
3