question_id (int64) | creation_date (string) | link (string) | question (string) | accepted_answer (string) | question_vote (int64) | answer_vote (int64) |
---|---|---|---|---|---|---|
79,114,445 | 2024-10-22 | https://stackoverflow.com/questions/79114445/filter-the-earliest-and-latest-date-in-each-month | Given a dataframe like the one below, how do I filter for the earlest and latest date in each month? Note the actual data runs to tens of thousands of rows. Input: Date Deg 02/01/1990 1210.92 13/01/1990 1226.83 14/01/1990 1224.52 15/01/1990 1220.77 08/02/1990 1164.32 09/02/1990 1156.72 12/02/1990 1145.18 13/02/1990 1146.88 24/02/1990 1149.07 Desired output: Date Deg 02/01/1990 1210.92 15/01/1990 1220.77 08/02/1990 1164.32 24/02/1990 1149.07 | Your data looks sorted. Try this: df["year"] = df["Date"].dt.year df["month"] = df["Date"].dt.month return pd.concat( [ df.groupby(["month", "year"]).last(), df.groupby(["month", "year"]).first() ] ).reset_index(drop=True).sort_values(by="Date") | 1 | 2 |
79,114,033 | 2024-10-22 | https://stackoverflow.com/questions/79114033/whats-the-advantage-of-newtype-over-typealias | Consider the following example: UserId = NewType('UserId', int) ProductId = NewType('ProductId', int) But I can also do, the following: UserId: TypeAlias = int ProductId: TypeAlias = int So why should I use NewType over TypeAlias or vice versa? Are they both interchangeable? | Aliases don't distinguish between each other. New types do. Consider this example: Meter = NewType('Meter', float) Gram = NewType('Gram', float) MeterSquared = NewType('MeterSquared', float) def area(length: Meter, width: Meter) -> MeterSquared: return MeterSquared(length * width) Now area(3, 5) won't type check, nor would area(Meter(3), Gram(5)), but area(Meter(3), Meter(5)) will. If you had defined Meter and Gram as mere type aliases, all three would type check. Basically, a NewType is, well, a new type entirely distinct from the "base" type, just with the same underlying representation. A type alias, on the other hand, is just another name for an existing type, and the two are entirely interchangeable. | 1 | 2 |
79,106,088 | 2024-10-19 | https://stackoverflow.com/questions/79106088/correct-python-dbus-connection-syntax | I'm having trouble getting dbus to connect: try: logging.debug("Attempting to connect to D-Bus.") self.bus = SessionBus() self.keepass_service = self.bus.get("org.keepassxc.KeePassXC.MainWindow", "/org/keepassxc/KeePassXC/MainWindow") # self.keepass_service = self.bus.get("org.keepassxc.KeePassXC", "/org/keepassxc/KeePassXC/") # self.keepass_service = self.bus.get("org.keepassxc.KeePassXC.MainWindow") Dbus.Listnames shows: $ dbus-send --print-reply --dest=org.freedesktop.DBus --type=method_call /org/freedesktop/DBus org.freedesktop.DBus.ListNames method return time=1729375987.604568 sender=org.freedesktop.DBus -> destination=:1.826 serial=3 reply_serial=2 array [ string "org.freedesktop.DBus" string ":1.469" string "org.freedesktop.Notifications" string "org.freedesktop.PowerManagement" string ":1.7" string "org.keepassxc.KeePassXC.MainWindow" This version produces this error: self.keepass_service = self.bus.get("org.keepassxc.KeePassXC.MainWindow", "/org/keepassxc/KeePassXC/MainWindow") ERROR:root:Error message: g-dbus-error-quark: GDBus.Error:org.freedesktop.DBus.Error.UnknownObject: No such object path '/org/keepassxc/KeePassXC/MainWindow' (41) This version produces this error: self.keepass_service = self.bus.get("org.keepassxc.KeePassXC", "/org/keepassxc/KeePassXC/") (process:607644): GLib-GIO-CRITICAL **: 16:18:39.599: g_dbus_connection_call_sync_internal: assertion 'object_path != NULL && g_variant_is_object_path (object_path)' failed ERROR:root:Failed to connect to KeePassXC D-Bus interface. ERROR:root:Error message: 'no such object; you might need to pass object path as the 2nd argument for get()' I've tried adding a time delay in case it was a race condition. I've tried with a keepassxc instance already running. I don't know where to go next? Here's the code in full context: from pydbus import SessionBus import logging import os import subprocess from gi.repository import GLib import time # Set up logging configuration logging.basicConfig(level=logging.DEBUG) # Set logging level to debug class KeePassXCManager: def __init__(self, db_path, password=None, keyfile=None, appimage_path=None): logging.debug("Initializing KeePassXCManager") self.db_path = db_path self.password = password self.keyfile = keyfile self.kp = None self.keepass_command = [] # Set default path to the KeePassXC AppImage in ~/Applications self.appimage_path = appimage_path or os.path.expanduser("~/Applications/KeePassXC.appimage") logging.debug(f"AppImage path set to: {self.appimage_path}") # Determine the KeePassXC launch command self._set_keepassxc_command() self._ensure_keepassxc_running() # Set up the D-Bus connection to KeePassXC self.bus = SessionBus() self.keepass_service = None self._connect_to_dbus() # Open the database once the manager is initialized if not self.open_database(): logging.error("Failed to open the database during initialization.") def _set_keepassxc_command(self): """Sets the command to launch KeePassXC.""" try: if self._is_keepassxc_installed(): logging.info("Using installed KeePassXC version.") self.keepass_command = ["keepassxc"] elif os.path.isfile(self.appimage_path) and os.access(self.appimage_path, os.X_OK): logging.info(f"KeePassXC AppImage is executable at {self.appimage_path}") self.keepass_command = [self.appimage_path] else: logging.error("KeePassXC is not installed or AppImage is not executable.") raise RuntimeError("KeePassXC is not installed. 
Please install it or provide a valid AppImage.") logging.debug(f"Final KeePassXC command set: {self.keepass_command}") except Exception as e: logging.error(f"Error setting KeePassXC command: {e}") raise def _is_keepassxc_installed(self): """Checks if KeePassXC is installed on the system.""" logging.debug("Checking if KeePassXC is installed via package manager") try: result = subprocess.run(["which", "keepassxc"], stdout=subprocess.PIPE, stderr=subprocess.PIPE) if result.returncode == 0: logging.info(f"KeePassXC found at {result.stdout.decode().strip()}") return True else: logging.warning("KeePassXC is not installed via package manager.") return False except Exception as e: logging.error(f"Error checking KeePassXC installation: {e}") return False def _ensure_keepassxc_running(self): """Checks if KeePassXC is running and starts it if not.""" logging.debug("Checking if KeePassXC is running") try: # Check if KeePassXC is running using pgrep result = subprocess.run(["pgrep", "-x", "keepassxc"], stdout=subprocess.PIPE, stderr=subprocess.PIPE) if result.returncode != 0: logging.info("KeePassXC is not running. Starting KeePassXC.") # Start KeePassXC subprocess.Popen(self.keepass_command) # Optionally, wait for a short time to allow KeePassXC to start GLib.idle_add(lambda: None) # Allows the GUI to initialize else: logging.info("KeePassXC is already running.") except Exception as e: logging.error(f"Error checking or starting KeePassXC: {e}") def _construct_open_command(self): """Constructs the command to open the KeePassXC database.""" command = [self.keepass_command[0], self.db_path] if self.password: command.append("--pw-stdin") logging.debug(f"Command includes password for opening database: {self.db_path}") if self.keyfile: command.append(f"--keyfile={self.keyfile}") logging.debug(f"Command includes keyfile for opening database: {self.keyfile}") logging.debug(f"Final command to open KeePassXC database: {command}") return command if self.password or self.keyfile else None def _clear_sensitive_data(self): """Clears sensitive data from memory.""" logging.debug("Clearing sensitive data from memory") self.password = None self.keyfile = None self.db_path = None def _connect_to_dbus(self): """Connects to the KeePassXC D-Bus interface.""" try: logging.debug("Attempting to connect to D-Bus.") self.bus = SessionBus() # self.keepass_service = self.bus.get("org.keepassxc.KeePassXC.MainWindow", "/org/keepassxc/KeePassXC/MainWindow") self.keepass_service = self.bus.get("org.keepassxc.KeePassXC", "/org/keepassxc/KeePassXC/") # self.keepass_service = self.bus.get("org.keepassxc.KeePassXC.MainWindow") # self.keepass_service = self.bus.get("org.KeePassXC.MainWindow", "/org/KeePassXC/MainWindow") if self.keepass_service: logging.info("Successfully connected to KeePassXC D-Bus interface.") else: logging.error("KeePassXC D-Bus interface is not available.") except Exception as e: logging.error("Failed to connect to KeePassXC D-Bus interface.") logging.error(f"Error message: {e}") services = self.bus.get_services() logging.error(f"Available D-Bus services: {services}") def open_database(self): """Opens the KeePassXC database using D-Bus.""" try: if not self.keepass_service: logging.error("KeePassXC D-Bus service is not available.") return False logging.info(f"Opening database: {self.db_path}") # Prepare parameters for the D-Bus call password = self.password or "" keyfile = self.keyfile or "" # Call the D-Bus method with parameters directly response = self.keepass_service.openDatabase(self.db_path, password, keyfile) 
if response: logging.info("Database opened successfully via D-Bus.") return True else: logging.error("Failed to open database via D-Bus.") return False except Exception as e: logging.error(f"An error occurred while opening the database: {e}") return False def unlock_database(self): """Unlocks the KeePassXC database with the password via D-Bus.""" try: if not self.keepass_service: logging.error("KeePassXC D-Bus service is not available.") return False logging.info("Unlocking database with the provided password.") response = self.keepass_service.unlockDatabase(self.password) if response: logging.info("Database unlocked successfully via D-Bus.") return True else: logging.error("Failed to unlock database via D-Bus.") return False except Exception as e: logging.error(f"An error occurred while unlocking the database: {e}") return False | You are assuming that the object path always follows the naming of the service itself. That's not always the case – a service can export many different object paths, and does not strictly need to follow any naming style (i.e. there isn't an enforced rule that all object paths start with the service name, much less that there be one that exactly matches it; both are merely conventions). KeePassXC is a Qt-based app, and many of those famously do not care to follow the usual D-Bus convention of using the service name as the "base" for object paths; instead, the old KDE3 DCOP (pre-D-Bus) style with all objects rooted directly at / remains common among Qt programs. Looking through busctl or D-Spy (or the older D-Feet), it seems that KeePassXC does not follow the D-Bus conventions of object naming, and the only object it exposes is at the path /keepassxc. $ busctl --user tree org.keepassxc.KeePassXC.MainWindow └─ /keepassxc $ gdbus introspect -e -d org.keepassxc.KeePassXC.MainWindow -o / -r -p node / { node /keepassxc { }; }; So you need to call: bus.get("org.keepassxc.KeePassXC.MainWindow", "/keepassxc") Note that since an object can have methods under several interfaces, some D-Bus bindings automatically use introspection to resolve unambiguous method names to the correct interface, but e.g. dbus-python would require you to explicitly use dbus.Interface(obj…).Foo(…) or obj.Foo(…, dbus_interface=…). Likewise the interface names don't need to match the service name – although it seems that KeePassXC has used the same string for both, but the .MainWindow suffix is pretty odd for a service name to have when all windows of an app belong to the same process (while being a perfectly normal name for an interface that holds main window-related methods). Generally instead of dbus-send I'd suggest the slightly less verbose systemd or GLib2 tools (both of which support showing the introspection XML from the service): $ busctl --user introspect org.keepassxc.KeePassXC.MainWindow /keepassxc $ busctl --user call ... $ gdbus introspect -e -d org.keepassxc.KeePassXC.MainWindow -o /keepassxc $ gdbus call -e ... | 2 | 1 |
79,112,299 | 2024-10-22 | https://stackoverflow.com/questions/79112299/how-to-change-an-element-in-one-array-based-on-conditions-at-the-same-index-elem | I have two arrays containing 60 0's or 1's. One is defined as result and the other is defined as daysinfected. The goal is to look at each element of result and set that element to -1 IF it is > 0 AND IF the corresponding element in the daysinfected element is 0. By printing result for debugging after this code it is obvious that it is not generating -1 values where expected (I.e., where these conditions are both met). In fact it doesn't seem to be modifying anything in result. for i in range(len(result)): for i in range (len(daysinfected)): if result[i] > 0 and daysinfected[i] == 0: i in result == -1 | Your outer loop is redundant here, just this is sufficient: for i in range (len(daysinfected)): if result[i] > 0 and daysinfected[i] == 0: i in result == -1 Now, in the 3rd line, i in result == -1 this is an expression, not an assignment. its interpreted like so: (i in result) == -1 where i in result is checking if i exists in result, then you're checking if its equal to -1 with ... == -1, not assigning a value. So really, you want this: for i in range (len(daysinfected)): if result[i] > 0 and daysinfected[i] == 0: result[i] = -1 # assign -1 to the i-th value in result, notice the single = However, with numpy arrays, you should try to avoid looping in python as much as possible. Generally, numpy provides utility functions which can almost always do the trick much faster and are much more readable: result = np.where((result > 0) & (daysinfected == 0), -1, result) | 1 | 2 |
79,111,951 | 2024-10-21 | https://stackoverflow.com/questions/79111951/python-protocol-using-keyword-only-arguments-requires-implementation-to-have-dif | I'm on python 3.10. I'm using PyCharm's default type checker and MyPy. Here is the protocol I defined: class OnSubscribeFunc(Protocol): def __call__(self, instrument: str, *, x: int) -> AsyncGenerator: ... When create a method that implements it like this: class A: async def subscribe(self, instrument: str, *, x: int): yield ... a: OnSubscribeFunc = A().subscribe # this apparently is where it gets it wrong I get this warning: Expected type 'OnSubscribeFunc', got '(instrument: str, Any, x: int) -> AsyncGenerator' instead If I remove the * from my implementation however, the warning disappears. I would expect it to be the other way around because not having the * allows the implementation to have non-keyword-only arguments which might not be what I'm aiming for with my protocol. So for comparison - this implementation gives no warning: class A: async def subscribe(self, instrument: str, x: int): yield ... This does not make any sense to me, why does it behave like this and is this expected or is it a bug in my type checker? | This is a known bug of PyCharm. Mypy and Pyright both accept your code as it is. Put a # type: ignore or # noqa there and move on. | 2 | 3 |
79,111,615 | 2024-10-21 | https://stackoverflow.com/questions/79111615/i-am-trying-to-do-a-board-outline-with-and-but-i-am-getting-unexpected-o | I mentioned the condition to print at particular places but instead of printing at those locations code just appends "+" to "-" even with range condition it exceeds the limit. I wanted to print "+" at 0, 8, 16, 24 index locations and print "-" in between them. def display(board): for row in range(25): print("-", end = "") if row == 0 or row == 8 or row == 16 or row == 24: print("+", end = "") The outcome when I invoke the function is: -+--------+--------+--------+ I tried to modify code so that "+" stays in condition loop when "-" is outside but output is different def display(board): for row in range(25): if row == 0 or row == 8 or row == 16 or row == 24: print("+", end = "") print("-", end = "") +--------+--------+--------+- My expected outcome is supposed to be: +-------+-------+-------+ | You've just got to use an if else to only print 1 char per iteration of your loop. Without the else, you are printing twice on each multiple of 8. def display(board): for row in range(25): if row == 0 or row == 8 or row == 16 or row == 24: print("+", end = "") else: print("-", end = "") You could also simplify the code and make it a bit more readable using some math. def display(board): for row in range(25): if row % 8 == 0: print("+", end = "") else: print("-", end = "") | 1 | 4 |
79,110,939 | 2024-10-21 | https://stackoverflow.com/questions/79110939/break-up-a-sparse-2d-array-or-table-into-multiple-subarrays-or-subtables | I want to find a way to "lasso around" a bunch of contiguous/touching values in a sparse table, and output a set of new tables. If any values are "touching", they should be part of a subarray together. For example: if I have the following sparse table/array: [[0 0 0 1 1 0 0 0 1 1 1 1 0 0 0 0 0 0 0] [0 0 0 1 1 0 0 0 1 1 1 1 1 0 0 1 1 0 0] [0 0 0 0 0 0 0 0 1 1 1 0 0 0 1 1 1 1 0] [0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0]] The algorithm should "find" subtables/subarrays. It would identify them like this: [[0 0 0 1 1 0 0 0 2 2 2 2 0 0 0 0 0 0 0] [0 0 0 1 1 0 0 0 2 2 2 2 2 0 0 3 3 0 0] [0 0 0 0 0 0 0 0 2 2 2 0 0 0 3 3 3 3 0] [0 0 0 0 0 0 0 2 0 0 0 0 0 3 0 0 0 0 0]] But the final output should be a series subarrays/subtables like this: [[1 1] [1 1]] [[0 1 1 1 1 0] [0 1 1 1 1 1] [0 1 1 1 0 0] [1 0 0 0 0 0]] [[0 0 1 1 0] [0 1 1 1 1] [1 0 0 0 0]] How can I do this in python? I've tried looking at sk-image and a few things seem to be similar to what I'm trying to do, but nothing I have seen seems to fit quite right. EDIT: it looks like scipy.ndimage.label is extremely close to what I want to do, but it will break the corner-case values into their own separate arrays. So it's not quite right. EDIT: ah ha, the structure argument is what I am after. If I get time I will update my question with an answer. | A possible solution, which based on the following ideas: First, measure.label assigns a unique label to each connected component in the array based on an 8-connectivity criterion (connectivity=2). Second, measure.regionprops retrieves properties of these labeled regions, such as their bounding boxes. Then, the code iterates through each detected region, extracts the minimum and maximum row and column indices from the region's bounding box, and slices the original array a to obtain the corresponding subarray. labels = measure.label(a, connectivity=2) regions = measure.regionprops(labels) list_suba = [] for region in regions: min_row, min_col, max_row, max_col = region.bbox subarray = a[min_row:max_row, min_col:max_col] list_suba.append(subarray) list_suba Or, more concisely: labels = measure.label(a, connectivity=2) regions = measure.regionprops(labels) [a[region.bbox[0]:region.bbox[2], region.bbox[1]:region.bbox[3]] for region in regions] Output: [array([[1, 1], [1, 1]]), array([[0, 1, 1, 1, 1, 0], [0, 1, 1, 1, 1, 1], [0, 1, 1, 1, 0, 0], [1, 0, 0, 0, 0, 0]]), array([[0, 0, 1, 1, 0], [0, 1, 1, 1, 1], [1, 0, 0, 0, 0]])] | 3 | 3 |
79,110,878 | 2024-10-21 | https://stackoverflow.com/questions/79110878/i-want-to-match-6-or-fewer-digits-in-a-string-if-there-are-or-between-t | It should match: "abc 12-34 def" precisely "12-34" "Phone number: 123/45", precisely "123/45" "sequence: 12//-34", precisely "12//-34" "My code is 1-2-3-4", precisely "1-2-3-4" It should not match: "too many: 1234-567-89" "too many; 1234-567" Here is what I have tried: pattern = r'\d([\/-]\d){1,5}' but didn't succeed | In your pattern, this part \d([\/-]\d expects a match for either / or - You might use a single digit, and repeat 1 - 5 times a digit with zero or more occurrences of - or / in between. On the left you can place a negative lookbehind and on the right a negative lookahead to assert not - / or a digit to mark the boundaries. (?<![/\d-])\d(?:[/-]*\d){1,5}(?![/\d-]) See a regex demo | 2 | 7 |
79,110,294 | 2024-10-21 | https://stackoverflow.com/questions/79110294/polars-use-value-from-column-as-column-name-in-when-then-expression | In a polars dataframe I have a column that contains the names of other columns (column "id_column_name"). I want to use those names in a when-then expression with pl.col() to create a new column ("id") which gets its values out of these other columns ("id_column1", "id_column2"). Every row can gets its value from another column in the df. # initial df df = pl.DataFrame({ 'id_column1': [123, 456], 'id_column2': ['abc', 'def'], 'id_column_name': ['id_column1', 'id_column2'] }) # required output df df_out = pl.DataFrame({ 'id_column1': [123, 456], 'id_column2': ['abc', 'def'], 'id_column_name': ['id_column1', 'id_column2'], 'id': ['123', 'def'] }) # one of the trings I tried df = df.with_columns( pl.when(pl.col('id_column_name').is_not_null()) .then(pl.col(pl.col('id_column_name'))) .otherwise(None) .cast(pl.String) .alias('id') ) This leads to the error: Expected str or DataType, got 'Expr'. Using str() or .str to create the expression into a regular string lead to other errors: Expected str or DataType, got 'ExprStringNameSpace'. This cant be that hard, can it? | pl.Series.unique() to get all possible values of id_column_name column. pl.when() to create conditional results. pl.coalesce() to fill the final result with first non-empty value. df.with_columns( id = pl.coalesce( pl.when(pl.col.id_column_name == col).then(pl.col(col)) for col in df.schema if col not in ("id_column_name") # or this if amount of columns is large # for col in df["id_column_name"].unique() ) ) shape: (2, 4) ┌────────────┬────────────┬────────────────┬─────┐ │ id_column1 ┆ id_column2 ┆ id_column_name ┆ id │ │ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ str ┆ str ┆ str │ ╞════════════╪════════════╪════════════════╪═════╡ │ 123 ┆ abc ┆ id_column1 ┆ 123 │ │ 456 ┆ def ┆ id_column2 ┆ def │ └────────────┴────────────┴────────────────┴─────┘ | 2 | 1 |
79,110,306 | 2024-10-21 | https://stackoverflow.com/questions/79110306/django-circular-import-models-views-forms | Watched all the previous related topics devoted to this problem, but haven't found a proper solution, so decided to create my own question. I'm creating a forum project (as a part of the site project). Views are made via class-based views: SubForumListView leads to a main forum page, where its main sections ("subforums") are listed. TopicListView, in its turn, leads to the pages of certain subforums with the list of active topics created within the subforum. ShowTopic view leads to a certain topic page with a list of comments. The problem manifests itself because of: Models.py: Method get_absolute_url in the model Subforum, which in its return section's reverse function takes a view as the 1st argument; I've tried to avoid a direct import of the view, but the program doesn't accept other variants; Views.py: most of the views have imported models either in instance arguments (model = Subforum), or in methods using querysets (like in get_context_data: topics = Topic.objects.all()); I can't surely say whether the change of instance argument model = Subforum to model = 'Subforum' really helps, as it's impossible to do with queryset methods and thus can't be proved; Forms.py: my form classes were created via forms.ModelForm and include class Meta, where the model instance argument is provided the same way as in 2): model = Topic. For now I've commented them (again, without being sure whether it was helpful or not), as well as the import of models, but when they were active, there was a triple circular import "models-views-forms" (funny enough). I see this problem, I know what and where provokes it, but I don't know how to solve it, that is: I don't know how to better define views and forms (or, maybe, models with their "get_absolute_url" methods) to avoid CI and how to better organize the connection between different parts of the program. 
Corresponding files: models.py: from django.db import models from django.contrib.auth.models import User from django.urls import reverse from django.utils.text import slugify from .consts import * from .views import TopicListView, ShowTopic ''' class User(AbstractUser): class Meta: app_label = 'forum' ''' class Profile(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE) surname = models.CharField(max_length=32, default='') name = models.CharField(max_length=32, default='') email = models.EmailField(max_length=254, blank=True, unique=True) bio = models.TextField(max_length=500, default="Write a couple of words about yourself") avatar = models.ImageField(default=None, blank=True, max_length=255) status = models.CharField(max_length=25, blank=True, default='') slug = models.SlugField() age = models.IntegerField(verbose_name='Возраст', null=True, blank=True) gender = models.CharField(verbose_name='Пол', max_length=32, choices=Genders.GENDER_CHOICES, default="H", blank=True) reputation = models.IntegerField(verbose_name='Репутация', default=0) def __str__(self): return f'{self.user} profile' def get_absolute_url(self): return reverse('user_profile', kwargs={'profile_slug': self.slug}) def save(self, *args, **kwargs): if not self.id: self.slug = slugify(self.user.username) return super(Profile, self).save(*args, **kwargs) class Subforum(models.Model): title = models.CharField(verbose_name='Название', max_length=32, choices=Theme.THEME_CHOICES, default=1) slug = models.SlugField(default='News') objects = models.Manager() class Meta: ordering = ['title'] verbose_name = 'Разделы форума' verbose_name_plural = 'Разделы форума' def __str__(self): return self.title def save(self, *args, **kwargs): if not self.id: self.slug = slugify(self.title) return super(Subforum, self).save(*args, **kwargs) def get_absolute_url(self): return reverse(TopicListView, kwargs={'name': self.title, 'subforum_slug': self.slug}) class Topic(models.Model): subject = models.CharField(verbose_name='Заголовок', max_length=255, unique=True) first_comment = models.TextField(verbose_name='Сообщение', max_length=2000, default='') slug = models.SlugField(default='', unique=True, max_length=25, editable=False) subforum = models.ForeignKey('Subforum', verbose_name='Раздел', on_delete=models.CASCADE, related_name='subforum') creator = models.ForeignKey(User, verbose_name='Создатель темы', on_delete=models.SET('deleted'), related_name='creator') created = models.DateTimeField(auto_now_add=True) closed = models.BooleanField(default=False) objects = models.Manager() class Meta: ordering = ['id'] verbose_name = 'Обсуждения' verbose_name_plural = 'Обсуждения' def __str__(self): return self.subject def save(self, *args, **kwargs): if not self.id: self.slug = f'topic-{slugify(self.subject)}'[0:25] return super(Topic, self).save(*args, **kwargs) def get_absolute_url(self): return reverse(ShowTopic, kwargs={'topic_slug': self.slug}) class Comment(models.Model): topic = models.ForeignKey('Topic', verbose_name='Тема', on_delete=models.CASCADE, related_name='topic') author = models.ForeignKey(User, verbose_name='Комментатор', on_delete=models.SET('deleted'), related_name='author') content = models.TextField(verbose_name='Текст', max_length=2000) created = models.DateTimeField(verbose_name='Дата публикации', auto_now_add=True) updated = models.DateTimeField(verbose_name='Дата изменения', auto_now=True) objects = models.Manager() class Meta: ordering = ['created'] verbose_name = 'Комментарии' verbose_name_plural = 
'Комментарии' def __str__(self): return f'Post of {self.topic.subject} is posted by {self.author.username}.' views.py: from django.contrib.auth.mixins import LoginRequiredMixin from django.shortcuts import get_object_or_404 from django.urls import reverse_lazy from django.views.generic import ListView, DetailView, CreateView, UpdateView, DeleteView from core.views import menu from .forms import AddTopicForm, AddCommentForm from .models import Subforum, Topic, Comment, Profile from .utils import DataMixin class SubForumListView(ListView): model = Subforum context_object_name = 'subforum_list' template_name = "forum/forum.html" def get_context_data(self, **kwargs): subforums = Subforum.objects.all() context = {'subforums': subforums} return context class TopicListView(ListView): model = Topic template_name = "forum/subforum.html" slug_url_kwarg = 'subforum_slug' context_object_name = 'subforum' def get_context_data(self, **kwargs): topics = Topic.objects.all() context = {'topics': topics} return context class ShowTopic(DetailView): model = Topic template_name = "forum/topic.html" slug_url_kwarg = 'topic_slug' context_object_name = 'topic' def get_context_data(self, topic_slug, **kwargs): topic = get_object_or_404(Topic, slug=topic_slug) comments = Comment.objects.filter(topic=topic) comments_number = len(Comment.objects.filter(topic=topic)) context = {'menu': menu, 'topic': topic, 'comments': comments, 'comm_num': comments_number} return context class AddTopic(LoginRequiredMixin, DataMixin, CreateView): form_class = AddTopicForm template_name = 'forum/addtopic.html' page_title = 'Создание новой темы' class AddComment(LoginRequiredMixin, DataMixin, CreateView): form_class = AddCommentForm template_name = 'forum/addcomment.html' page_title = 'Оставить комментарий' success_url = reverse_lazy('topic') class UpdateComment(LoginRequiredMixin, DataMixin, UpdateView): form_class = AddCommentForm template_name = 'forum/addcomment.html' page_title = 'Редактировать комментарий' success_url = reverse_lazy('topic') class UserProfile(DetailView): model = Profile template_name = "profile.html" forms.py: from django import forms from django.core.exceptions import ValidationError #from forum.models import Topic, Comment class AddTopicForm(forms.ModelForm): subject = forms.CharField(label="Заголовок", max_length=100, min_length=7) first_comment = forms.CharField(label="Сообщение", widget=forms.Textarea()) class Meta: #model = Topic fields = ['subject', 'first_comment'] def clean_subject(self): subject = self.cleaned_data['subject'] if len(subject) > 100: raise ValidationError("Длина превышает 100 символов") if len(subject) < 7: raise ValidationError("Слишком короткое заглавие, требуется не менее 7 символов") return subject class AddCommentForm(forms.ModelForm): content = forms.CharField(label="Текст комментария", max_length=2000, min_length=1, widget=forms.Textarea()) class Meta: #model = Comment fields = ['content'] I am not sure whether it's necessary or not, but also urls.py for you: from django.urls import path from forum.views import * urlpatterns = [ #path('<slug:profile_slug>/', user_profile, name='user_profile'), path('', SubForumListView.as_view(), name='forum'), path('<slug:subforum_slug>/', TopicListView.as_view(), name='subforum'), path('subforum/<slug:topic_slug>/', ShowTopic.as_view(), name='topic'), path('subforum/add-topic/', AddTopic.as_view(), name="add_topic"), path('subforum/<slug:topic_slug>/add-comment/', AddComment.as_view(), name="add_comment"), 
path('subforum/<slug:topic_slug>/edit/<int:id>/', UpdateComment.as_view(), name="edit_comment"), If some additional files/information are necessary, I'm ready to provide them. For now, I can't continue implementation of the project, as CI doesn't allow me to test the forum page. So I have to solve this problem before any further actions. | Don't import views in the models. Views should depend on models, never in the opposite way. You can use the name of the path, so: class Subforum(models.Model): # … def get_absolute_url(self): return reverse( 'subforum', kwargs={'name': self.title, 'subforum_slug': self.slug}, ) The same for ShowTopic, and thus remove the imports. Note: You can make use of django-autoslug [GitHub] to automatically create a slug based on other field(s). | 1 | 2 |
79,109,487 | 2024-10-21 | https://stackoverflow.com/questions/79109487/how-to-check-whether-an-sklearn-estimator-is-a-scaler | I'm writing a function that needs to determine whether an object passed to it is an imputer (can check with isinstance(obj, _BaseImputer)), a scaler, or something else. While all imputers have a common base class that identifies them as imputers, scalers do not. I found that all scalers in sklearn.preprocessing._data inherit (OneToOneFeatureMixin, TransformerMixin, BaseEstimator), so I could check if they are instances of all of them. However that could generate false positives (not sure which other object may inherit the same base classes). It doesn't feel very clean or pythonic either. I was also thinking of checking whether the object has the .inverse_transform() method. However, not only scalers have that, a SimpleImputer (and maybe other objects) have also. How can I easily check if my object is a scaler? | Unfortunately, the cleanest way to do this is to check each scaler type individually, any other check will potentially let through non-scaler objects as well. Nevertheless, I'll offer some "hack-jobs" too. The most failsafe solution is to import your scalers and then check if your object is any of these scalers or not. from sklearn.preprocessing import MinMaxScaler, RobustScaler # ... other scalers your code base uses SCALER_TYPES = [MinMaxScaler, RobustScaler] # Extend list if needed if any([isinstance(YourObject, scaler_type) for scaler_type in SCALER_TYPES]): # Do something pass else: # Do something else pass Now, if you want something that catches them all without listing all the scalers you use in your code, you could rely on private properties of the scaler objects. These are private for a good reason though, and are subject to change without notice even between patch versions, so nothing at all guarantees that your code will work if you update sklearn to a new version. You could rely on the string representation (__repr__) of the object to check if it contains Scaler. This is how you can do it: if 'Scaler' in str(YourObject): # Do something pass else: # Do something else pass or if 'Scaler' in YourObject.__repr__(): # Do something pass else: # Do something else pass This will let through anything that has Scaler in its string representation though, so you are definitely better off with being explicit and defining your list of scalers. | 1 | 4 |
79,109,524 | 2024-10-21 | https://stackoverflow.com/questions/79109524/dataframe-manipulation-explode-rows-on-new-dataframe-with-repeated-indices | I have two dataframes say df1 and df2, for example import pandas as pd col_1= ["A", ["B","C"], ["A","C","D"], "D"] col_id = [1,2,3,4] col_2 = [1,2,2,3,3,4,4] d1 = {'ID': [1,2,3,4], 'Labels': col_1} d2 = {'ID': col_2, } d_2_get = {'ID': col_2, "Labels": ["A", "B", "C", "A", "C", "D", np.nan] } df1 = pd.DataFrame(data=d1) df2 = pd.DataFrame(data=d2) df_2_get = pd.DataFrame(data=d_2_get) df1 looking like ID col2 0 1 A 1 2 [B, C] 2 3 [A, C, D] 3 4 D and df2 looking like ID 0 1 1 2 2 2 3 3 4 3 5 4 6 4 I want to add a column Labels to df2, taken from df1, in such a way that: for index i, start with the first value in df1 if the new row in df2["ID"] has a repeated entry, get the next value in df1, if it exists. If not, set NaN. Given df1 and df2, the output should look like df_2_get below ID Labels 0 1 A 1 2 B 2 2 C 3 3 A 4 3 C 5 4 D 6 4 NaN My current clumsy attempt is below, from collections import Counter def list_flattener(list_of_lists): return [item for row in list_of_lists for item in row] def my_dataframe_filler(df1, df2): list_2_fill = [] repeats = dict(Counter(df2["ID"])) for k in repeats.keys(): available_labels_list = df1[df1["ID"]==k]["Labels"].tolist() available_labels_list+=[[np.nan]*10] available_labels_list = list_flattener(available_labels_list) list_2_fill+=available_labels_list[:repeats[k]] return list_2_fill and then use as df2["Labels"] = my_dataframe_filler(df1, df2) but I would like to learn how a pandas black belt would handle the problem, thanks | IIUC, you could explode and perform a merge after deduplication with groupby.cumcount: out = (df2 .assign(n=df2.groupby('ID').cumcount()) .merge(df1.explode('Labels').assign(n=lambda x: x.groupby('ID').cumcount()), on=['ID', 'n'], how='left' ) #.drop(columns='n') ) Output: ID n Labels 0 1 0 A 1 2 0 B 2 2 1 C 3 3 0 A 4 3 1 C 5 4 0 D 6 4 1 NaN Alternatively, a pure python approach using iterators, map and next: # for each list, build an iterator d = dict(zip(df1['ID'], map(iter, df1['Labels']))) # take the appropriate list and get the next available item # default to None if exhausted df2['Labels'] = df2['ID'].map(lambda x: next(d[x], None)) Output: ID Labels 0 1 A 1 2 B 2 2 C 3 3 A 4 3 C 5 4 D 6 4 None | 4 | 6 |
79,107,659 | 2024-10-20 | https://stackoverflow.com/questions/79107659/how-to-pass-aggregation-functions-as-function-argument-in-polars | How can we pass aggregation functions as argument to a custom aggregation function in Polars? You should be able to pass a single function for all columns or a dictionary if you have different aggregations by column. import polars as pl # Sample DataFrame df = pl.DataFrame({ "category": ["A", "A", "B", "B", "B"], "value": [1, 2, 3, 4, 5] }) def agg_with_sum(df: pl.DataFrame | pl.LazyFrame) -> pl.DataFrame | pl.LazyFrame: return df.group_by("category").agg(pl.col("*").sum()) # Custom function to perform aggregation def agg_with_expr(df: pl.DataFrame | pl.LazyFrame, agg_expr: pl.Expr | dict[str, pl.Expr]) -> pl.DataFrame | pl.LazyFrame: if isinstance(agg_expr, dict): return df.group_by("category").agg([pl.col(col).aggexpr() for col, aggexpr in agg_expr.items()]) return df.group_by("category").agg(pl.col("*").agg_expr()) # Trying to pass a Polars expression for sum aggregation print(agg_with_sum(df)) # ┌──────────┬───────┐ # │ category ┆ value │ # │ --- ┆ --- │ # │ str ┆ i64 │ # ╞══════════╪═══════╡ # │ A ┆ 3 │ # │ B ┆ 12 │ # └──────────┴───────┘ # Trying to pass a custom Polars expression print(agg_with_expr(df, pl.sum)) # AttributeError: 'Expr' object has no attribute 'agg_expr' print(agg_with_expr(df, {'value': pl.sum})) # AttributeError: 'Expr' object has no attribute 'aggexpr' | You can pass it as anonymous function with expression as parameter (I simplified your example just to illustrate the point): def agg_with_expr(df, agg_expr): return df.group_by("category").agg(agg_expr(pl.col("*"))) agg_with_expr(df, lambda x: x.sum()) shape: (2, 2) ┌──────────┬───────┐ │ category ┆ value │ │ --- ┆ --- │ │ str ┆ i64 │ ╞══════════╪═══════╡ │ B ┆ 12 │ │ A ┆ 3 │ └──────────┴───────┘ update. as @orlp mentioned in comments, in this particular case you could do it without anonymous function, with plain usage of pl.Expr.sum(), which is much more neat. agg_with_expr(df, pl.Expr.sum) shape: (2, 2) ┌──────────┬───────┐ │ category ┆ value │ │ --- ┆ --- │ │ str ┆ i64 │ ╞══════════╪═══════╡ │ A ┆ 3 │ │ B ┆ 12 │ └──────────┴───────┘ | 3 | 2 |
79,103,866 | 2024-10-18 | https://stackoverflow.com/questions/79103866/stopping-asyncio-program-using-file-input | What specific code needs to change in the Python 3.12 example below in order for the program myReader.py to be successfully halted every time the line "Stop, damnit!" gets printed into sourceFile.txt by the program myWriter.py? THE PROBLEM: The problem is that myReader.py only sometimes stops when the line "Stop, damnit!" is printed into sourceFile.txt. One workaround is to have myWriter.py continue to write "Stop, damnit!" again and again to sourceFile.txt. This can cause myReader.py to eventually halt. But the problem is that myWriter.py has to continue writing the same line for arbitrarily long periods of time. We have tested continuing for 15 minutes. But there might be situations in which myWriter.py might need to continue writing "Stop, damnit!" every second for 30 minutes. And there might be other times when myWriter.py might need to continue writing "Stop, damnit!" every second for only one or two minutes. The problem seems to be that the API calls being made by myReader.py take variable amounts of time to return, so that the backlog can become arbitrarily long sometimes, but not always. And it seems that the myReader.py loop is not able to see the "Stop, damnit!" line unless and until the many asynchronous API call tasks have completed. The solution would ideally involve having myReader.py actually hear and respond to a single writing of "Stop, damnit!" instead of needing to have "Stop, damnit!" written so many times. WRITER PROGRAM: The myWriter.py program writes a lot of things. But the relevant part of myWriter.py which writes the stop command is: import time #Repeat 900 times to test output. Sleep for 1 second between each. for i in range(900): writeToFile("Stop, damnit!") time.sleep(1) READER PROGRAM: The relevant portion of myReader.py is as follows: import os import platform import asyncio import aiofiles BATCH_SIZE = 10 def get_source_file_path(): if platform.system() == 'Windows': return 'C:\\path\\to\\sourceFile.txt' else: return '/path/to/sourceFile.txt' async def send_to_api(linesBuffer): success = runAPI(linesBuffer) return success async def read_source_file(): source_file_path = get_source_file_path() counter = 0 print("Reading source file...") print("source_file_path: ", source_file_path) #Detect the size of the file located at source_file_path and store it in the variable file_size. 
file_size = os.path.getsize(source_file_path) print("file_size: ", file_size) taskCountList = [] background_tasks = set() async with aiofiles.open(source_file_path, 'r') as source_file: await source_file.seek(0, os.SEEK_END) linesBuffer = [] while True: # Always make sure that file_size is the current size: line = await source_file.readline() new_file_size = os.path.getsize(source_file_path) if new_file_size < file_size: print("The file has been truncated.") print("old file_size: ", file_size) print("new_file_size: ", new_file_size) await source_file.seek(0, os.SEEK_SET) file_size = new_file_size # Allocate a new list instead of clearing the current one linesBuffer = [] counter = 0 continue line = await source_file.readline() if line: new_line = str(counter) + " line: " + line print(new_line) linesBuffer.append(new_line) print("len(linesBuffer): ", len(linesBuffer)) if len(linesBuffer) == BATCH_SIZE: print("sending to api...") task = asyncio.create_task(send_to_api(linesBuffer)) background_tasks.add(task) task.add_done_callback(background_tasks.discard) pendingTasks = len(background_tasks) taskCountList.append(pendingTasks) print("") print("pendingTasks: ", pendingTasks) print("") # Do not clear the buffer; allocate a new one: linesBuffer = [] counter += 1 print("counter: ", counter) #detect whether or not the present line is the last line in the file. # If it is the last line in the file, then write whatever batch # we have even if it is not complete. if "Stop, damnit!" in line: #Print the next line 30 times to simulate a large file. for i in range(30): print("LAST LINE IN FILE FOUND.") #sleep for 1 second to simulate a large file. await asyncio.sleep(1) #Omitting other stuff for brevity. break else: await asyncio.sleep(0.1) async def main(): await read_source_file() if __name__ == '__main__': asyncio.run(main()) | Things to know: at least in my testing, this line line = await source_file.readline() will happily return an empty string if it didn't find a new line immediately. you have line = await source_file.readline() twice in your main loop, with the first call's result being thrown out That first call would return a line, and the second call returns empty string because there's nothing else to read. So, you read "Stop, damnit!" in the first call then call readline again and get empty string. You can verify this by modifying your writer to write this: writeToFile("Stop, damnit!\nStop, damnit!") (maybe \r\n). This way you put two lines into the file at once, and the second call to readline actually reads something and the check to stop actually sees the message. Edit: Here's a couple examples to show you how it's behaving. Key changes: BATCH_SIZE = 1 async def send_to_api(linesBuffer): print('processed lines', linesBuffer) return True With myWriter.py as: import time def writeToFile(text): with open("sourceFile.txt", "a") as f: f.write(text+"\n") for i in range(1,11): writeToFile(f"message {i}") time.sleep(0.2) writeToFile("Stop, damnit!\nStop, damnit!") # To make it stop we get output: processed lines ['0 line: message 5\n'] processed lines ['1 line: message 8\n'] processed lines ['2 line: Stop, damnit!\n'] The majority of the messages are dropped. 
With myWriter.py as: def writeToFile(text): with open("sourceFile.txt", "a") as f: f.write(text+"\n") messages = [] for i in range(1,11): messages.append(f"message {i}") writeToFile('\n'.join(messages)) writeToFile("Stop, damnit!\nStop, damnit!") we get: processed lines ['0 line: message 2\n'] processed lines ['1 line: message 4\n'] processed lines ['2 line: message 6\n'] processed lines ['3 line: message 8\n'] processed lines ['4 line: message 10\n'] processed lines ['5 line: Stop, damnit!\n'] Here we see every other message dropped, since that first readline always drops what it read. | 2 | 3 |
79,102,719 | 2024-10-18 | https://stackoverflow.com/questions/79102719/how-would-i-write-a-function-similiar-to-np-logspace-but-with-a-given-first-inte | I am trying to write a function that returns an array that has in the first section linearly increasing elements and in the second section ever increasing distances between the elements. As inputs, I would like to give the starting value, the final value and the length of the array. This would be solveable with np.logspace(). However, I would like the transition from the first section to the second section to be smooth, therefore I had the idea to fix the interval between the first and second element in the second section to the distance between two ajacent elements in the first section. My comprehension is that this is not doable with np.logspace(). Maybe for some context: The goal is to run optimisations in a loop with a running variable, for which the first part of the running variable is more interesting than the last part. Hence I am looking for an elegant way to devide the sections smoothly. I read and tried a lot but could not find a ready to use solution. My approach was hence the following, where x_n is the final value, n is the length of the array and x_0 describes the threshold between the two sections. The underlying function for the exponential section is the following: x_i = x_0 + b*(y^i-1) def list_creation(x_n, n, x_0, y_initial_guess = 1.1): n_half = int((n + 1) // 2) # Adjust for odd/even cases # Linear difference, region with detailed resolution first_section = np.linspace(0,x_0, n_half) first_step = first_section[1] - first_section[0] # Exponention decreasing resolution section x_0 = x_0 + first_step # This is necessary to avoid doubled values # Function to solve for def equation_for_y(y): return x_0 + first_step * (y**(n_half-1) - 1) / (y - 1) - x_n y_solution = fsolve(equation_for_y, y_initial_guess)[0] # Calculating scaling factor b b = first_step / (y_solution - 1) # Calculating array given the calculated variables second_section = np.array([x_0 + b * (y_solution**i - 1) for i in range(n_half)]) return np.concatenate((first_section, second_section)) This mostly works as intended. However, if I now increase or decrease the threshold x_0 to a large or small value relative to x_n (e.g. 15/0.005 for n = 100) this approach does work anymore. In my application, this is mostly and issue for the case that x_0 is small compared to x_n, so I tried to square i in the function, which however does not achieve the desired result. Is there an easy fix to my problem or are there other solutions to achieve the desired result? Thanks in advance! | There are several changes in order: You should frame the first segment as a half-open interval, including 0 but excluding the threshold. The second segment should include both the threshold and the endpoint. You should enforce the threshold, endpoint, and a constraint of first-order differential continuity. Because of the exponential, this is very sensitive to initial conditions and a poor initial guess will not solve. This works for the order of magnitude of your example inputs. 
import numpy as np from matplotlib import pyplot as plt from scipy.optimize import fsolve, least_squares def exp_function(a: float, b: float, c: float, i: np.ndarray) -> np.ndarray: return a*b**i + c def b_to_abc( b: float, x_0: float, x_n: float, n_half: int, n: int, first_step: float, ) -> tuple[float, float, float]: # gradient constraint # a = first_step/np.log(b_est) # endpoint constraint # x0 - a == xn - a*b**(n - nhalf - 1) a = (x_n - x_0)/(b**(n - n_half - 1) - 1) # threshold constraint c = x_0 - a return a, b, c def exp_equations( b: float, x_0: float, x_n: float, n_half: int, n: int, first_step: float, ) -> float: a, b, c = b_to_abc(b=b, x_0=x_0, x_n=x_n, n_half=n_half, n=n, first_step=first_step) # dx/di = a*ln(b) * b^i: gradient starts at same value of linear section differential_error = np.exp(first_step/a) - b return differential_error def print_params( b: float, args: tuple[int | float, ...]) -> None: x_0, x_n, n_half, n, first_step = args abc = b_to_abc(b, *args) a, b, c = abc print('abc =', abc) print(f'threshold: {x_0} ~ {exp_function(*abc, 0)}') print(f'endpoint: {x_n} ~ {exp_function(*abc, n - n_half - 1)}') print(f'differential: {first_step} ~ {a*np.log(b)}') print('differential error =', exp_equations(b, *args)) def make_series( n: int, # length of array x_0: float, # threshold x_n: float, # final value use_fsolve: bool = False, ) -> np.ndarray: n_half = n//2 first_step = x_0/n_half # Linear region with detailed resolution # Half-open interval: [0, x_0) lin_section = np.linspace(start=0, stop=x_0*(n_half - 1)/n_half, num=n_half) # Analytic solution would require a lambert W. Do the easier thing and call a solver. b_est = 1.2 args = x_0, x_n, n_half, n, first_step print('Estimate:') print_params(b_est, args) print() if use_fsolve: (b,), infodict, success, message = fsolve( func=exp_equations, x0=b_est, args=args, maxfev=10000, full_output=True, ) assert success == 1, message else: result = least_squares( fun=exp_equations, x0=b_est, args=args, max_nfev=10000, ) assert result.success, result.message b, = result.x print('Fit:') print_params(b, args) print() # Exponential region with decreasing resolution # Closed interval: [x_0, n] abc = b_to_abc(b, *args) exp_section = exp_function(*abc, np.arange(n - n_half)) return np.concatenate((lin_section, exp_section)) def demo() -> None: n = 100 series = make_series(x_0=5e-3, x_n=15, n=n, use_fsolve=False) print(series) fig, ax = plt.subplots() ax.semilogy(np.arange(n), series) plt.show() if __name__ == '__main__': demo() Estimate: abc = (0.0019775281955880866, 1.2, 0.0030224718044119135) threshold: 0.005 ~ 0.005 endpoint: 15 ~ 15.0 differential: 0.0001 ~ 0.00036054601922355986 differential error = -0.14813142362076936 Fit: abc = (0.00047275979928614655, 1.2355595042735252, 0.004527240200713854) threshold: 0.005 ~ 0.005 endpoint: 15 ~ 14.999999999999998 differential: 0.0001 ~ 9.99999999999992e-05 differential error = 1.9984014443252818e-15 [0.00000000e+00 1.00000000e-04 2.00000000e-04 3.00000000e-04 4.00000000e-04 5.00000000e-04 6.00000000e-04 7.00000000e-04 8.00000000e-04 9.00000000e-04 1.00000000e-03 1.10000000e-03 1.20000000e-03 1.30000000e-03 1.40000000e-03 1.50000000e-03 1.60000000e-03 1.70000000e-03 1.80000000e-03 1.90000000e-03 2.00000000e-03 2.10000000e-03 2.20000000e-03 2.30000000e-03 2.40000000e-03 2.50000000e-03 2.60000000e-03 2.70000000e-03 2.80000000e-03 2.90000000e-03 3.00000000e-03 3.10000000e-03 3.20000000e-03 3.30000000e-03 3.40000000e-03 3.50000000e-03 3.60000000e-03 3.70000000e-03 3.80000000e-03 3.90000000e-03 
4.00000000e-03 4.10000000e-03 4.20000000e-03 4.30000000e-03 4.40000000e-03 4.50000000e-03 4.60000000e-03 4.70000000e-03 4.80000000e-03 4.90000000e-03 5.00000000e-03 5.11136306e-03 5.24895876e-03 5.41896642e-03 5.62902101e-03 5.88855595e-03 6.20922681e-03 6.60543474e-03 7.09497322e-03 7.69982714e-03 8.44716014e-03 9.37053454e-03 1.05114186e-02 1.19210486e-02 1.36627305e-02 1.58146821e-02 1.84735463e-02 2.17587312e-02 2.58177727e-02 3.08329600e-02 3.70295223e-02 4.46857437e-02 5.41454609e-02 6.58335044e-02 8.02747776e-02 9.81178299e-02 1.20163983e-01 1.47403317e-01 1.81059134e-01 2.22642900e-01 2.74022116e-01 3.37504196e-01 4.15940083e-01 5.12852288e-01 6.32593084e-01 7.80539963e-01 9.63337135e-01 1.18919392e+00 1.46825341e+00 1.81304803e+00 2.23906229e+00 2.76542825e+00 3.41578473e+00 4.21933885e+00 5.21217778e+00 6.43888936e+00 7.95456452e+00 9.82727136e+00 1.21411121e+01 1.50000000e+01] To do better, you need to solve with a Lambert W: import numpy as np import scipy.special from matplotlib import pyplot as plt def exp_function(a: float, b: float, c: float, i: np.ndarray) -> np.ndarray: return a*b**i + c def make_series( n: int, # length of array x_0: float, # threshold x_n: float, # final value ) -> np.ndarray: n_half = n//2 first_step = x_0/n_half # Linear region with detailed resolution # Half-open interval: [0, x_0) lin_section = np.linspace(start=0, stop=x_0*(n_half - 1)/n_half, num=n_half) # Exponential region with decreasing resolution # Closed interval: [x_0, n] i_n = n - n_half - 1 ''' d0 = alnb # 1: gradient continuity x0 = a + c # 2: threshold continuity xn = ab^in + c # 3: endpoint x0-a = xn-ab^in # 2,3 for c a(b^in - 1) = xn - x0 b^in = 1 + (xn - x0)/a b = (1 + (xn - x0)/a)^(1/in) d0 = aln( (1 + (xn - x0)/a)^(1/in) ) # 1 for b exp(d0in/a) = 1 + (xn - x0)/a ''' p = first_step*i_n q = x_n - x_0 a = -p*q/( p + q*scipy.special.lambertw( -p/q*np.exp(-p/q), k=-1, tol=1e-16, ).real ) b = (1 + q/a)**(1/i_n) c = x_0 - a print(f'a={a:.3e}, b={b:.3f}, c={c:.3e}') print(f'Threshold: {x_0} ~ {a + c:.3e}') print(f'Endpoint: {x_n} ~ {a*b**i_n + c:.3f}') exp_section = exp_function(a, b, c, np.arange(1 + i_n)) return np.concatenate((lin_section, exp_section)) def demo() -> None: n = 100 series = make_series(x_0=5e-3, x_n=15, n=n) fig, ax = plt.subplots() ax.semilogy(np.arange(n), series) plt.show() if __name__ == '__main__': demo() a=4.728e-04, b=1.236, c=4.527e-03 Threshold: 0.005 ~ 5.000e-03 Endpoint: 15 ~ 15.000 | 2 | 0 |
79,108,089 | 2024-10-20 | https://stackoverflow.com/questions/79108089/how-to-get-item-in-text-based-game | I have been trying to come up with the code to add an item to my inventory in my text based game but so far I haven't been able to figure it out. Here is my current code: rooms = { 'Entrance': {'west': 'Catacombs A', 'north': 'Main Hall'}, 'Catacombs A': {'east': 'Entrance', 'item': 'Lesser Artifact'}, 'Main Hall': {'north': 'Great Hall', 'east': 'Catacombs B', 'south': 'Entrance', 'west': 'Necron Tomb', 'item': 'Dog Tags'}, 'Necron Tomb': {'east': 'Main Hall', 'item': 'Supplies'}, 'Catacombs B': {'west': 'Main Hall', 'north': 'Storage Room', 'item': 'Lesser Artifact'}, 'Storage Room': {'south': 'Catacombs B', 'item': 'Supplies'}, 'Great Hall': {'east': 'Repair Hub', 'south': 'Main Hall', 'item': 'Dog Tags'}, 'Repair Hub': {'west': 'Great Hall', 'item': 'Necron Lord'} # Villain } current_room = 'Entrance' inventory = [] print('You are a member of a squad of Ultramarines that has been tasked with retrieving 6 items from a ' 'Necron facility on the orbiting planet before reaching the Necron Lord. You and your brothers breach the entrance, which way do you go first?') def move_rooms(current_room, directions): current_room = rooms[current_room] new_room = current_room[directions] return new_room while True: print('You are in the', current_room) directions = input('Enter a direction: north, east, south, west, get item, or exit.') if directions in rooms[current_room]: current_room = move_rooms(current_room, directions) if directions == 'exit': print('You have failed the Emperor and your brothers...') break elif directions not in rooms[current_room]: print('You cannot go that way.') I tried making a define function for getting an item by doing def get_item(item, inventory): if item in current_room: inventory.append(item[current_room]) return inventory Python gave me an error and asked to define item even though it is in my rooms dictionary. I am very new to python so I am struggling a lot, any help would be appreciated! | So I've been experimenting with this and i think i may have found a method that will work. I made a function that takes the currentroom, and your inventory. It will search the dictionary of that room to see if it has an item and if it does it will append it to the inventory table. The code is as follows: rooms = { 'Entrance': {'west': 'Catacombs A', 'north': 'Main Hall'}, 'Catacombs A': {'east': 'Entrance', 'item': 'Lesser Artifact'}, 'Main Hall': {'north': 'Great Hall', 'east': 'Catacombs B', 'south': 'Entrance', 'west': 'Necron Tomb', 'item': 'Dog Tags'}, 'Necron Tomb': {'east': 'Main Hall', 'item': 'Supplies'}, 'Catacombs B': {'west': 'Main Hall', 'north': 'Storage Room', 'item': 'Lesser Artifact'}, 'Storage Room': {'south': 'Catacombs B', 'item': 'Supplies'}, 'Great Hall': {'east': 'Repair Hub', 'south': 'Main Hall', 'item': 'Dog Tags'}, 'Repair Hub': {'west': 'Great Hall', 'item': 'Necron Lord'} # Villain } current_room = 'Storage Room' inventory = [] items = 'item' def get_item(currentroom, inventory): roomtosearch = rooms[currentroom] if items in roomtosearch: found = roomtosearch[items] inventory.append(found) else: print('No such item exists') get_item(current_room, inventory) print(inventory) you can add an if statement in your while true loop that when given that direction input it if its "item" it will call the get_item() function searching that room for an item. 
Ex: while True: print('You are in the', current_room) directions = input('Enter a direction: north, east, south, west, get item, or exit.') if directions in rooms[current_room]: current_room = move_rooms(current_room, directions) if directions == 'exit': print('You have failed the Emperor and your brothers...') break if directions == 'item': get_item(current_room, inventory) elif directions not in rooms[current_room]: print('You cannot go that way.') I saw your comment and i did some more testing and i realized that the sequence is messed up, because it searches for if item is in the room which it is, and it checks for that before it runs the get_item function it will set the current room to the item making the get_item throw an error. The solution to this is to rephrase the get_item statement to check before the moveroom one and if it the get_item check is true it wont run the moveroom function. My new code is as follows: while True: print('You are in the', current_room) directions = input('Enter a direction: north, east, south, west, get item, or exit.') if directions == 'item': print(current_room) get_item(current_room, inventory) elif directions in rooms[current_room]: current_room = move_rooms(current_room, directions) print(current_room) elif directions == 'exit': print('You have failed the Emperor and your brothers...') break elif directions not in rooms[current_room]: print('You cannot go that way.') | 2 | 2 |
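A small refinement on the same idea, as a sketch: using dict.pop inside get_item removes the item from the room once it is collected, so a second 'item' command in the same room finds nothing (rooms and inventory are the structures defined above):

def get_item(current_room, inventory):
    room = rooms[current_room]
    if 'item' in room:
        # pop removes the key, so the item cannot be picked up twice
        inventory.append(room.pop('item'))
        print('You picked up:', inventory[-1])
    else:
        print('No such item exists')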
79,108,099 | 2024-10-20 | https://stackoverflow.com/questions/79108099/understanding-what-a-python-3-12-bytecode-does-call-0-after-get-iter | I have this python function and the bytecode it translates to: Text code: x = "-".join(str(z) for z in range(5)) assert x == "0-1-2-3-4" print("Assert test case for generator_expression_in_join") Disassembled code: 0 0 RESUME 0 2 2 LOAD_CONST 0 ('-') 4 LOAD_ATTR 1 (NULL|self + join) 24 LOAD_CONST 1 (<code object <genexpr> at 0x1026abe30, file "<dis>", line 2>) 26 MAKE_FUNCTION 0 28 PUSH_NULL 30 LOAD_NAME 1 (range) 32 LOAD_CONST 2 (5) 34 CALL 1 42 GET_ITER 44 CALL 0 52 CALL 1 60 STORE_NAME 2 (x) 3 62 LOAD_NAME 2 (x) 64 LOAD_CONST 3 ('0-1-2-3-4') 66 COMPARE_OP 40 (==) 70 POP_JUMP_IF_TRUE 2 (to 76) 72 LOAD_ASSERTION_ERROR 74 RAISE_VARARGS 1 4 >> 76 PUSH_NULL 78 LOAD_NAME 3 (print) 80 LOAD_CONST 4 ('Assert test case for generator_expression_in_join') 82 CALL 1 90 POP_TOP 92 RETURN_CONST 5 (None) Disassembly of <code object <genexpr> at 0x1026abe30, file "<dis>", line 2>: 2 0 RETURN_GENERATOR 2 POP_TOP 4 RESUME 0 6 LOAD_FAST 0 (.0) >> 8 FOR_ITER 15 (to 42) 12 STORE_FAST 1 (z) 14 LOAD_GLOBAL 1 (NULL + str) 24 LOAD_FAST 1 (z) 26 CALL 1 34 YIELD_VALUE 1 36 RESUME 1 38 POP_TOP 40 JUMP_BACKWARD 17 (to 8) >> 42 END_FOR 44 RETURN_CONST 0 (None) >> 46 CALL_INTRINSIC_1 3 (INTRINSIC_STOPITERATION_ERROR) 48 RERAISE 1 I have trouble understanding what the instruction with label 44 stands for. I understand that I have a range iterator on the top of my stack after instruction 42 (iter(range(5))), but I don't know why one would call an iterator. I'm trying to implement a python virtual machine, and am struggling to implement the CALL opcode correctly. I don't see any help in the provided spec. What is the logic behind doing CALL 0 on an iterator since an iterator isn't even callable? | argc is the total of the positional and named arguments, excluding self when a NULL is not present. Here’s one call: 28 PUSH_NULL 30 LOAD_NAME 1 (range) 32 LOAD_CONST 2 (5) 34 CALL 1 And here’s another: 24 LOAD_CONST 1 (<code object <genexpr> at 0x1026abe30, file "<dis>", line 2>) 26 MAKE_FUNCTION 0 ⋮ 42 GET_ITER 44 CALL 0 That is, you’re calling the generator expression with the iterator as self. Note that this instruction is different again in Python 3.13. | 2 | 1 |
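A rough Python-level equivalent of that bytecode may make it clearer: the <genexpr> code object behaves like a one-argument generator function whose single parameter (.0) receives the iterator, so CALL 0 passes the iterator in the "self" slot (gen_func is only an illustrative name for what MAKE_FUNCTION builds from that code object):

def gen_func(iterator):                   # stands in for the <genexpr> code object; its ".0" parameter
    for z in iterator:                    # FOR_ITER over the argument
        yield str(z)                      # LOAD_GLOBAL str, CALL 1, YIELD_VALUE

x = "-".join(gen_func(iter(range(5))))    # GET_ITER, then CALL 0 on the made function
assert x == "0-1-2-3-4"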
79,107,149 | 2024-10-20 | https://stackoverflow.com/questions/79107149/pylance-incorrectly-flagging-sklearn-mean-squared-error-function-as-deprecated | I haven't been able to find anything online about this. Pylance seems to be marking the mean_squared_error function from sklearn.metrics as deprecated, although only the squared parameter is deprecated. I am running Python through micromamba and have the latest version of both sklearn (1.5.2) and Pylance (v2024.10.1). I have uninstalled and reinstalled scikit-learn in my micromamba environment as well as updating micromamba itself. | Pyright/Pylance made no mistake, and neither did you. This is a problem with the type stubs for sklearn.metrics, in which mean_squared_error is defined as: @deprecated() def mean_squared_error( y_true: MatrixLike | ArrayLike, y_pred: MatrixLike | ArrayLike, *, sample_weight: None | ArrayLike = None, multioutput: ArrayLike | Literal["raw_values", "uniform_average", "uniform_average"] = "uniform_average", squared: bool = True, ) -> ndarray | Float: ... The decorator was added recently in this PR. I created an issue and submitted a fix. Since these stubs come with Pylance, you will have to wait for the next version of Pylance with which the fix is included. In the meantime, you can either ignore the warning or enable the python.analysis.disableTaggedHints setting: | 2 | 3 |
79,105,904 | 2024-10-19 | https://stackoverflow.com/questions/79105904/approximating-logarithm-using-harmonic-mean | Here is a function to approximate log10(x+1) for (x+1) < ~1.2: a = 1.097 b = 0.085 c = 2.31 ans = 1 / (a - b*x + c/x) It should look like that: It works by adjusting harmonic mean to match log10, but the problem is in values of a, b, c. The question is how to get just right a, b and c and how to make better approximation. I made this code that can give a pretty good approximation for a, b, c, but my code wasn't able to make it any better. import numpy as np a = 1 b = 0.01 c = 2 def mlg(t): x = t if t == 0: x = 0.00000001 x2 = x*x o = a - (b * x) + (c / x) return 1/o def mlg0(t): x = t if t == 0: x = 0.00000001 x2 = x*x o = a - (b * x) + (c / x) return o for i in range(9000): n1 = np.random.uniform(0,1.19,1000) for i in range(1000): n = n1[i] o = np.log10(n+1) u = mlg(n) - o e = u ** 2 de_da = 0 - 2 * (u) / (mlg0(n) ** 2) de_db = de_da * n de_dc = de_da / n a -= de_da * 0.00001 b -= de_db * 0.00001 c -= de_dc * 0.00001 print(a,b,c) How could the code be changed to generate better values? I've used a method alike back propagation in NN, but it could not give me values any better. Here is how the error is calculated: | Here are two approaches. Method 1: series expansion in x (better for negative and positive x) Method 2: fit the curve that passes through 3 points (here, x=0, ½ and 1) Method 1. If you expand them by Taylor series as powers of x then Equating coefficients of x, x^2 and x^3 gives In code: import math import numpy as np import matplotlib.pyplot as plt c = math.log( 10 ) a = c / 2 b = c / 12 print( "a, b, c = ", a, b, c ) x = np.linspace( -0.2, 0.2, 50 ) y = np.log10( 1 + x ) fit = x / ( a * x - b * x ** 2 + c ) plt.plot( x, y , 'b-', label="Original" ) plt.plot( x, fit, 'ro', label="Fit" ) plt.legend() plt.show() Output: a, b, c = 1.151292546497023 0.19188209108283716 2.302585092994046 Method 2. Fit to three points. Here we require If we require this to fit at x=0, ½ and 1 we get (including L’Hopital’s rule for the limit at x=0) This time I have used your interval x in [0,1] to plot the fit import math import numpy as np import matplotlib.pyplot as plt c = math.log( 10 ) a = c * ( 2/math.log(1.5) - 1/math.log(2) - 3 ) b = 2 * c * ( 1/math.log(1.5) - 1/math.log(2) - 1 ) print( "a, b, c = ", a, b, c ) x = np.linspace( 0.0, 1.0, 50 ) y = np.log10( 1 + x ) fit = x / ( a * x - b * x ** 2 + c ) plt.plot( x, y , 'b-', label="Original" ) plt.plot( x, fit, 'ro', label="Fit" ) plt.legend() plt.show() Output: a, b, c = 1.1280638006656465 0.1087207987723298 2.302585092994046 | 2 | 3 |
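Spelled out, the coefficient matching that Method 1 relies on is the following (a reconstruction consistent with the constants the code prints, with c = \ln 10):

\log_{10}(1+x) = \frac{1}{\ln 10}\left(x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots\right)

\frac{x}{c + a x - b x^2} = \frac{x}{c}\left(1 - \frac{a}{c}\,x + \left(\frac{a^2}{c^2} + \frac{b}{c}\right)x^2 - \cdots\right)

Equating the coefficients of x, x^2 and x^3 gives c = \ln 10, a = \ln(10)/2 and b = \ln(10)/12, i.e. a ≈ 1.1513, b ≈ 0.1919, c ≈ 2.3026, matching the printed values.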
79,106,642 | 2024-10-20 | https://stackoverflow.com/questions/79106642/how-to-webscrape-elements-using-beautifulsoup-properly | I am not from web scaping or website/html background and new to this field. Trying out scraping elements from this link that contains containers/cards. I have tried below code and find a little success but not sure how to do it properly to get just informative content without getting html/css elements in the results. from bs4 import BeautifulSoup as bs import requests url = 'https://ihgfdelhifair.in/mis/Exhibitors' page = requests.get(url) soup = bs(page.text, 'html') What I am looking to extract (as practice) info from below content: cards = soup.find_all('div', class_="row Exhibitor-Listing-box") cards below sort of content it display: [<div class="row Exhibitor-Listing-box"> <div class="col-md-3"> <div class="card"> <div class="container"> <h4><b> 1 ARTIFACT DECOR (INDIA)</b></h4> <p style="margin-bottom: 5px!important; font-size: 13px;"><span>Email : </span> [email protected]</p> <p style="margin-bottom: 5px!important; font-size: 13px;"><span>Contact Person : </span> SHEENU</p> <p style="margin-bottom: 5px!important; font-size: 13px;"><span>State : </span> UTTAR PRADESH</p> <p style="margin-bottom: 5px!important; font-size: 13px;"><span>City : </span> AGRA</p> <p style="margin-bottom: 5px!important; font-size: 13px;"><span>Hall No. : </span> 12</p> <p style="margin-bottom: 5px!important; font-size: 13px;"><span>Stand No. : </span> G-15/43</p> <p style="margin-bottom: 5px!important; font-size: 13px;"><span>Mobile No. : </span> +91-5624010111, +91-7055166000</p> <p style="margin-bottom: 5px!important; font-size: 11px;"><span>Website : </span> www.artifactdecor.com</p> <p style="margin-bottom: 5px!important; font-size: 13px;"><span>Source Retail : </span> Y</p> <p style="margin-bottom: 5px!important; font-size: 13px;"><span>Vriksh Certified : </span> N</p> </div> Now when I use below code to extract element: for element in cards: title = element.find_all('h4') email = element.find_all('p') print(title) print(email) Output: It is giving me the info that I need but with html/css content in it which I do not want [<h4><b> 1 ARTIFACT DECOR (INDIA)</b></h4>, <h4><b> 10G HOUSE OF CRAFT</b></h4>, <h4><b> 2 S COLLECTION</b></h4>, <h4><b> ........] [<p style="margin-bottom: 5px!important; font-size: 13px;"><span>Email : </span> [email protected]</p>, <p style="margin-bottom: 5px!important; font-size: 13px;"><span>Contact Person : </span> ..................] So how can I take out just title, email, Contact Person, State, City elements from this without html/css in results? | As Manos Kounelakis suggested, what you're likely looking for is the text attribute of BeautifulSoup HTML elements. Also, it is more natural to split up the html based on the elements with the class card rather than the row elements, as the card elements correspond to each visual card unit on the screen. Here is some code which will print the info fairly nicely: import requests from bs4 import BeautifulSoup as bs url = "https://ihgfdelhifair.in/mis/Exhibitors" page = requests.get(url) soup = bs(page.text, features="html5lib") cards = soup.find_all("div", class_="card") for element in cards: title = element.find("h4").text other_info = [" ".join(elem.text.split()) for elem in element.find_all("p")] print("Title:", title) for info in other_info: print(info) print("-" * 80) | 1 | 0 |
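Building on the same get_text idea, a sketch that turns each card into a dict by splitting the "Label : value" paragraphs (the field names simply mirror what the page shows):

records = []
for element in cards:
    record = {"Title": element.find("h4").get_text(strip=True)}
    for p in element.find_all("p"):
        label, _, value = p.get_text(" ", strip=True).partition(":")
        record[label.strip()] = value.strip()
    records.append(record)

print(records[0])   # e.g. {'Title': '1 ARTIFACT DECOR (INDIA)', 'Email': '...', 'Contact Person': 'SHEENU', ...}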
79,106,107 | 2024-10-19 | https://stackoverflow.com/questions/79106107/is-this-benchmark-valid-tinygrad-is-impossibly-fast-vs-torch-or-numpy-for-medi | I ran the following benchmark code on google collab CPU with high ram enabled. Please point out any errors in the way I am benchmarking, (if any) as well as why there is a such a high performance boost with tinygrad. # Set the size of the matrices size = 10000 # Generate a random 10000x10000 matrix with NumPy np_array = np.random.rand(size, size) # Generate a random 10000x10000 matrix with PyTorch torch_tensor = torch.rand(size, size) # Generate a random 10000x10000 matrix with TinyGrad tg_tensor = Tensor.rand(size, size) # Benchmark NumPy start_np = time.time() np_result = np_array @ np_array # Matrix multiplication np_time = time.time() - start_np print(f"NumPy Time: {np_time:.6f} seconds") # Benchmark PyTorch start_torch = time.time() torch_result = torch_tensor @ torch_tensor # Matrix multiplication torch_time = time.time() - start_torch print(f"PyTorch Time: {torch_time:.6f} seconds") # Benchmark TinyGrad start_tg = time.time() tg_result = tg_tensor @ tg_tensor # Matrix multiplication tg_time = time.time() - start_tg print(f"TinyGrad Time: {tg_time:.6f} seconds") NumPy Time: 11.977072 seconds PyTorch Time: 7.905509 seconds TinyGrad Time: 0.000607 seconds These were the results. After running the code many times, the results were very similar | Tinygrad performs operations in a "lazy" way, so the matrix multiplication hasn't been performed yet. Change your matrix multiplication line to: tg_result = (tg_tensor @ tg_tensor).realize() or tg_result = (tg_tensor @ tg_tensor).numpy() | 4 | 5 |
79,106,128 | 2024-10-19 | https://stackoverflow.com/questions/79106128/how-to-filter-out-a-dataframe-based-on-another-dataframe | My dataframe loads from a csv file that looks like this RepID Account Rank 123 Abcd 1 345 Zyxw 2 567 Hijk 3 ... ... 837 Kjsj 8 and I have another csv that has only one column RepID 345 488 I load the first csv in a dataframe df and the other csv in dataframe dE. I want a new dataframe dX with all records from df whose RepID does not exist in dE, and dY with all the records whose RepID exists in dE. How can I do that? | A possible solution, which uses boolean indexing and isin: df[df['RepID'].isin(dE['RepID'])] # dY df[~df['RepID'].isin(dE['RepID'])] # dX Output: # dY RepID Account Rank 1 345 Zyxw 2 # dX RepID Account Rank 0 123 Abcd 1 2 567 Hijk 3 3 837 Kjsj 8 | 2 | 3
79,105,679 | 2024-10-19 | https://stackoverflow.com/questions/79105679/python-regular-expression-for-multiple-split-criteria | I'm struggling to split some text in a piece of code that I'm writing. This software is scanning through about 3.5 million lines of text of which there are varying formats throughout. I'm kind of working my way through everything still, but the line below appears to be fairly standard within the file: EXAMPLE_FILE_TEXT ID="20211111.111111 11111" I want to split it as follows: EXAMPLE_FILE_TEXT, ID, 20211111.111111 11111 As much as possible, I'd prefer to avoid hard coding any certain text to look for as I'm still parsing through the file & trying to determine all the different variables. I've tried running the following code: conditioned_line = re.sub(r'(\w+=)(\w+)', r'\1"\2"', input_line) output = shlex.split(conditioned_line) When I run this code, I'm getting this output: ['EXAMPLE_FILE_TEXT', 'ID=20211111.111111 11111'] I've managed to successfully split each and every element of this, but I have not managed to split them all together successfully. I suspect this is manageable via a regular expression, or with a regular expression and a shlex split, but I could really use some suggestions if anyone has some ideas. As requested, here's another example of some text that's in the file I'm scanning: EXAMPLE_TEXT TAG="AB-123-ABCD_$B" ABCDE_ABCD="ABCD_A" ABCDEF_ABCDE="ABCDEF_ABCDEF_$A" ABCDEFGH="" This should separate to the following: EXAMPLE_TEXT, TAG, AB-123-ABCD_$B, ABCDE_ABCD, ABCD_A, ABCDEF_ABCDE, ABCDEF_ABCDEF_$A, ABCDEFGH | I suggest a tokenizing approach with regex: create a regex with alternations, starting with the most specific ones, and ending with somewhat generic ones. In your case, you may try import re x = 'EXAMPLE_FILE_TEXT ID="20211111.111111 11111"' res = re.findall(r'"([^"]*)"|(\d+(?:\.\d+)*)|(\w+)', x) print( ["".join(r) for r in res] ) # => ['EXAMPLE_FILE_TEXT', 'ID', '20211111.111111 11111'] See the Python demo. The regex matches "([^"]*)" - a string between two double quotes: " matches a ", then ([^"]*) captures zero or more chars other than " and then " matches a " char (NOTE: to match string between quotes with escaped quote support use "([^"\\]*(?:\\.[^"\\]*)*)", add a similar pattern for single quotes if needed) | - or (\d+(?:\.\d+)*) - Group 2: one or more digits and then zero or more sequences of . and one or more digits | - or (\w+) - Group 3: one or more word chars. | 2 | 2 |
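Running the same pattern over the second sample line from the question gives the following (note the trailing empty string produced by ABCDEFGH=""):

x2 = 'EXAMPLE_TEXT TAG="AB-123-ABCD_$B" ABCDE_ABCD="ABCD_A" ABCDEF_ABCDE="ABCDEF_ABCDEF_$A" ABCDEFGH=""'
res2 = re.findall(r'"([^"]*)"|(\d+(?:\.\d+)*)|(\w+)', x2)
print(["".join(r) for r in res2])
# => ['EXAMPLE_TEXT', 'TAG', 'AB-123-ABCD_$B', 'ABCDE_ABCD', 'ABCD_A', 'ABCDEF_ABCDE', 'ABCDEF_ABCDEF_$A', 'ABCDEFGH', '']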
79,100,317 | 2024-10-18 | https://stackoverflow.com/questions/79100317/how-can-i-align-the-numbers-to-the-top-of-the-cells | I want to align the numbers to the top left corners of each cell. I was able to find the set_text_props, but it isn't doing the alignment as I expect. import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_subplot() ax.axis('off') table = ax.table( cellText=[[1, 2, 3, 4, 5, 6, 7], [8, 9, 10, 11, 12, 13, 14]], loc='center', cellLoc='left', ) table.set_fontsize(10) table.scale(1, 4) for (row, col), cell in table.get_celld().items(): cell.set_text_props(va='bottom') | (This is a hack, not a solution) Add a newline to your text and adjust the cell.PAD. colab import matplotlib.pyplot as plt fig = plt.figure(figsize=(4,3)) ax = fig.add_subplot() ax.axis('off') cellText=[[1, 2, 3, 4, 5, 6, 7], [8, 9, 10, 11, 12, 13, 14]] for i in range(len(cellText)): for j in range(len(cellText[i])): cellText[i][j] = f"{cellText[i][j]}\n" table = ax.table( cellText=cellText, loc='center', cellLoc='left', ) table.set_fontsize(10) table.scale(1, 4) for (row, col), cell in table.get_celld().items(): cell.set_text_props(va='bottom') cell.PAD=0.05 | 4 | 3 |
79,093,014 | 2024-10-16 | https://stackoverflow.com/questions/79093014/moviepy-is-unable-to-load-video | Using python 3.11.10 and moviepy 1.0.3 on ubuntu 24.04.1 (in a VirtualBox 7.1.3 on windows 10) I have problems to load a video clip. The test code is just from moviepy.editor import VideoFileClip clip = VideoFileClip("testvideo.ts") but the error is Traceback (most recent call last): File "/home/alex/.cache/pypoetry/virtualenvs/pypdzug-WqasAXAr-py3.11/lib/python3.11/site-packages/moviepy/video/io/ffmpeg_reader.py", line 285, in ffmpeg_parse_infos line = [l for l in lines if keyword in l][index] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^ IndexError: list index out of range During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/alex/Repos/pypdzug/tester.py", line 5, in <module> clip = VideoFileClip("testvideo.ts") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/alex/.cache/pypoetry/virtualenvs/pypdzug-WqasAXAr-py3.11/lib/python3.11/site-packages/moviepy/video/io/VideoFileClip.py", line 88, in __init__ self.reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/alex/.cache/pypoetry/virtualenvs/pypdzug-WqasAXAr-py3.11/lib/python3.11/site-packages/moviepy/video/io/ffmpeg_reader.py", line 35, in __init__ infos = ffmpeg_parse_infos(filename, print_infos, check_duration, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/alex/.cache/pypoetry/virtualenvs/pypdzug-WqasAXAr-py3.11/lib/python3.11/site-packages/moviepy/video/io/ffmpeg_reader.py", line 289, in ffmpeg_parse_infos raise IOError(("MoviePy error: failed to read the duration of file %s.\n" OSError: MoviePy error: failed to read the duration of file testvideo.ts. Here are the file infos returned by ffmpeg: ffmpeg version 4.2.2-static https://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2019 the FFmpeg developers built with gcc 8 (Debian 8.3.0-6) configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-libgme --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libdav1d --enable-libxvid --enable-libzvbi --enable-libzimg libavutil 56. 31.100 / 56. 31.100 libavcodec 58. 54.100 / 58. 54.100 libavformat 58. 29.100 / 58. 29.100 libavdevice 58. 8.100 / 58. 8.100 libavfilter 7. 57.100 / 7. 57.100 libswscale 5. 5.100 / 5. 5.100 libswresample 3. 5.100 / 3. 5.100 libpostproc 55. 5.100 / 55. 
5.100 It says it failed to read the duration of the file, but the file plays properly (with mplayer) and ffmpeg -i testvideo.ts returns ffmpeg version 6.1.1-3ubuntu5 Copyright (c) 2000-2023 the FFmpeg developers built with gcc 13 (Ubuntu 13.2.0-23ubuntu3) configuration: --prefix=/usr --extra-version=3ubuntu5 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --disable-omx --enable-gnutls --enable-libaom --enable-libass --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libglslang --enable-libgme --enable-libgsm --enable-libharfbuzz --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-openal --enable-opencl --enable-opengl --disable-sndio --enable-libvpl --disable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-ladspa --enable-libbluray --enable-libjack --enable-libpulse --enable-librabbitmq --enable-librist --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libx264 --enable-libzmq --enable-libzvbi --enable-lv2 --enable-sdl2 --enable-libplacebo --enable-librav1e --enable-pocketsphinx --enable-librsvg --enable-libjxl --enable-shared libavutil 58. 29.100 / 58. 29.100 libavcodec 60. 31.102 / 60. 31.102 libavformat 60. 16.100 / 60. 16.100 libavdevice 60. 3.100 / 60. 3.100 libavfilter 9. 12.100 / 9. 12.100 libswscale 7. 5.100 / 7. 5.100 libswresample 4. 12.100 / 4. 12.100 libpostproc 57. 3.100 / 57. 3.100 Input #0, mpegts, from 'testvideo.ts': Duration: 00:10:10.13, start: 0.133333, bitrate: 3256 kb/s Program 1 Metadata: service_name : 2024-10-04 11:49:49.917 service_provider: gvos-6.0 Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(progressive), 1920x1080, 15 fps, 15 tbr, 90k tbn Here the duration is clearly given to be 10 minutes and 10.13 seconds. So what could be the cause of this error/issue? | FFMPEG_BINARY Normally you can leave it to its default (‘ffmpeg-imageio’) in which case imageio will download the right ffmpeg binary (on first use) and then always use that binary. The second option is "auto-detect". In this case ffmpeg will be whatever binary is found on the computer: generally ffmpeg (on Linux/macOS) or ffmpeg.exe (on Windows). Lastly, you can set it to use a binary at a specific location on your disk by specifying the exact path. use this code: import os os.environ["FFMPEG_BINARY"] = "/path/to/custom/ffmpeg" where path is path to ffmpeg 6.1.1-3ubuntu5 which is likely to be in your PATH which is likely to be here /usr/local/bin/bin/ffmpeg, if not just download ffmpeg 6.1.2 from https://ffmpeg.org/download.html#releases and use it Why: debugging from errors: moviepy uses ffmpeg version 4.2.2-static by default(stated in error), you have ffmpeg version 6.1.1-3ubuntu5(stated in output in terminal using ffmpeg -i *.ts) in your pc which is able to read time of video, also moviepy states "That may also mean that you are using a deprecated version of FFMPEG. On Ubuntu/Debian for instance the version in the repos is deprecated. 
Please update to a recent version from the website." in the last line of a similar error here also this is the very reason for "The code works fine for Windows" over here ig Therefore using the ffmpeg which is able to deal with your .ts file in moviepy(i.e. 6.1.1) would be solution. FFMPEG is a fast developing library so using the latest version is best, if this solution works,also try latest 7.1.1 or any latest version of series like 6.1.2 or 4.4.5 | 5 | 5 |
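One detail worth keeping in mind: moviepy reads FFMPEG_BINARY when it is imported, so the override has to happen before the import. A minimal sketch (the path is only an example; point it at the ffmpeg 6.x binary reported by which ffmpeg):

import os
os.environ["FFMPEG_BINARY"] = "/usr/bin/ffmpeg"   # set BEFORE importing moviepy

from moviepy.editor import VideoFileClip

clip = VideoFileClip("testvideo.ts")
print(clip.duration)   # should report roughly 610 seconds for the 10:10 clip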
79,104,005 | 2024-10-19 | https://stackoverflow.com/questions/79104005/using-hist-to-bin-data-while-grouping-with-over | Consider the following example: import polars as pl df = pl.DataFrame( [ pl.Series( "name", ["A", "B", "C", "D"], dtype=pl.Enum(["A", "B", "C", "D"]) ), pl.Series("month", [1, 2, 12, 1], dtype=pl.Int8()), pl.Series( "category", ["x", "x", "y", "z"], dtype=pl.Enum(["x", "y", "z"]) ), ] ) print(df) shape: (4, 3) ┌──────┬───────┬──────────┐ │ name ┆ month ┆ category │ │ --- ┆ --- ┆ --- │ │ enum ┆ i8 ┆ enum │ ╞══════╪═══════╪══════════╡ │ A ┆ 1 ┆ x │ │ B ┆ 2 ┆ x │ │ C ┆ 12 ┆ y │ │ D ┆ 1 ┆ z │ └──────┴───────┴──────────┘ We can count the number of months in the dataframe that match each month of the year: from math import inf binned_df = ( df.select( pl.col.month.hist( bins=[x + 1 for x in range(11)], include_breakpoint=True, ).alias("binned"), ) .unnest("binned") .with_columns( pl.col.breakpoint.map_elements( lambda x: 12 if x == inf else x, return_dtype=pl.Float64() ) .cast(pl.Int8()) .alias("month") ) .drop("breakpoint") .select("month", "count") ) print(binned_df) shape: (12, 2) ┌───────┬───────┐ │ month ┆ count │ │ --- ┆ --- │ │ i8 ┆ u32 │ ╞═══════╪═══════╡ │ 1 ┆ 2 │ │ 2 ┆ 1 │ │ 3 ┆ 0 │ │ 4 ┆ 0 │ │ 5 ┆ 0 │ │ … ┆ … │ │ 8 ┆ 0 │ │ 9 ┆ 0 │ │ 10 ┆ 0 │ │ 11 ┆ 0 │ │ 12 ┆ 1 │ └───────┴───────┘ (Note: there are 3 categories "x", "y", and "z", so we expect a dataframe of shape 12 x 3 = 36.) Suppose I want to bin the data per the column "category". I can do the following: # initialize an empty dataframe category_binned_df = pl.DataFrame() for cat in df["category"].unique(): # repeat the binning logic from earlier, except on a dataframe filtered for # the particular category we are iterating over binned_df = ( df.filter(pl.col.category.eq(cat)) # <--- the filter .select( pl.col.month.hist( bins=[x + 1 for x in range(11)], include_breakpoint=True, ).alias("binned"), ) .unnest("binned") .with_columns( pl.col.breakpoint.map_elements( lambda x: 12 if x == inf else x, return_dtype=pl.Float64() ) .cast(pl.Int8()) .alias("month") ) .drop("breakpoint") .select("month", "count") .with_columns(category=pl.lit(cat).cast(df["category"].dtype)) ) # finally, vstack ("append") the resulting dataframe category_binned_df = category_binned_df.vstack(binned_df) print(category_binned_df) shape: (36, 3) ┌───────┬───────┬──────────┐ │ month ┆ count ┆ category │ │ --- ┆ --- ┆ --- │ │ i8 ┆ u32 ┆ enum │ ╞═══════╪═══════╪══════════╡ │ 1 ┆ 1 ┆ x │ │ 2 ┆ 1 ┆ x │ │ 3 ┆ 0 ┆ x │ │ 4 ┆ 0 ┆ x │ │ 5 ┆ 0 ┆ x │ │ … ┆ … ┆ … │ │ 8 ┆ 0 ┆ z │ │ 9 ┆ 0 ┆ z │ │ 10 ┆ 0 ┆ z │ │ 11 ┆ 0 ┆ z │ │ 12 ┆ 1 ┆ z │ └───────┴───────┴──────────┘ It seems to me that there should be a way to do this using over, something like pl.col.month.hist(bins=...).over("category"), but the very first step of trying to do so raises an error: df.select( pl.col.month.hist( bins=[x + 1 for x in range(11)], include_breakpoint=True, ) .over("category") .alias("binned"), ) ComputeError: the length of the window expression did not match that of the group Error originated in expression: 'col("month").hist([Series]).over([col("category")])' So there's some sort of conceptual error I am making when thinking of over? Is there a way to use over here at all? 
| Here's one approach using Expr.over: bins = range(1,12) out = df.select( pl.col('month').hist( bins=bins, include_breakpoint=True ) .over(partition_by='category', mapping_strategy='explode') .alias('binned'), pl.col('category').unique(maintain_order=True).repeat_by(len(bins)+1).flatten() ).unnest('binned').with_columns( pl.col('breakpoint').replace(float('inf'), 12).cast(int) ).rename({'breakpoint': 'month'}) Output: shape: (36, 3) ┌───────┬───────┬──────────┐ │ month ┆ count ┆ category │ │ --- ┆ --- ┆ --- │ │ i64 ┆ u32 ┆ enum │ ╞═══════╪═══════╪══════════╡ │ 1 ┆ 1 ┆ x │ │ 2 ┆ 1 ┆ x │ │ 3 ┆ 0 ┆ x │ │ 4 ┆ 0 ┆ x │ │ 5 ┆ 0 ┆ x │ │ … ┆ … ┆ … │ │ 8 ┆ 0 ┆ z │ │ 9 ┆ 0 ┆ z │ │ 10 ┆ 0 ┆ z │ │ 11 ┆ 0 ┆ z │ │ 12 ┆ 1 ┆ z │ └───────┴───────┴──────────┘ Explanation The key is to use mapping_strategy='explode'. As mentioned in the docs, under explode: Explodes the grouped data into new rows, similar to the results of group_by + agg + explode. Sorting of the given groups is required if the groups are not part of the window operation for the operation, otherwise the result would not make sense. This operation changes the number of rows. (I do not think sorting is required here, but anyone please correct me if I am wrong.) This would be faster than using df.group_by, but the gain in performance is offset by the need to get back the categories: Expr.unique with maintain_order=True + Expr.repeat_by + Expr.flatten. Adding a performance comparison with the method suggested in the answer by @BallpointBen, testing: over: over, maintaining order + 'category' over_ex_cat: over, maintaining order, ex 'category' (highlights the bottleneck) group_by: group_by, not maintaining order + 'category' group_by_order: group_by, maintaining order + 'category' I've left out trivial operations like renaming "breakpoint" column and getting the columns in the same order. Script can be found here (updated for second plot below). Maybe someone can suggest a better way to get back the categories. Otherwise, there does not seem to be too much between the two methods. Update: performance comparison with suggested answers by @HenryHarback, testing: over: over, maintaining order + 'category' (= mapping_strategy='explode') over_join: over, not maintaining order + 'category' (= mapping_strategy='join') spine: cross join + left join, not maintaining order + 'category' Not included is the group_by option + select + struct, which has similar performance to group_by compared above (with unnest). Extended the N-range to show spine catching up with, though apparently not overtaking, over, if the df gets really big. | 4 | 2 |
79,104,578 | 2024-10-19 | https://stackoverflow.com/questions/79104578/pd-to-datetime-fails-with-old-dates | I have a csv file with very old dates, and pd.to_datetime fails. It works in polars. Is this an inherent limitation in pandas, a bug or something else? import pandas as pd dates = ["12/31/1672","12/31/1677","10/19/2024"] df = pd.DataFrame(dates, columns=['Date']) df['Date'] = pd.to_datetime(df['Date'], format='%m/%d/%Y', errors='coerce') df Date 0 NaT 1 1677-12-31 2 2024-10-19 in polars import polars as pl df = pl.DataFrame({ 'Date': dates}) df = df.with_columns(pl.col('Date').str.strptime(pl.Date, format="%m/%d/%Y")) df shape: (3, 1) ┌────────────┐ │ Date │ │ --- │ │ date │ ╞════════════╡ │ 1672-12-31 │ │ 1677-12-31 │ │ 2024-10-19 │ └────────────┘ | pandas has timestamp limitations; the docs suggests use of period for such cases (of course it depends if the period data type covers your use case): df.assign(new_dates=pd.PeriodIndex(df.Date, freq='D')) Date new_dates 0 12/31/1672 1672-12-31 1 12/31/1677 1677-12-31 2 10/19/2024 2024-10-19 | 4 | 3 |
79,096,544 | 2024-10-17 | https://stackoverflow.com/questions/79096544/what-is-the-thon-executable | On Ubuntu or other Linux-based systems, Python 3.14's venv creates an extra executable named 𝜋thon: $ python --version Python 3.13.0 $ python -m venv .venv $ cd .venv/bin && ls Activate.ps1 activate activate.csh activate.fish pip pip3 pip3.13 python python3 python3.13 $ python --version Python 3.14.0a1+ $ python -m venv .venv $ cd .venv/bin && ls 𝜋thon Activate.ps1 activate activate.csh activate.fish pip pip3 pip3.14 python python3 python3.14 What does it do and why is it there? | This is an easter egg. The 𝜋thon executable works exactly the same as python, python3 and python3.14. The name 𝜋thon itself is a pun on "Python" and 𝜋 ("Pi") the mathematical constant, whose decimal representation starts with "3.14". This executable was originally named python𝜋 as a parallelism to python3.14 and other python3.xx executables that are only created on non-Windows operating systems. python𝜋 didn't make the cut, however, since people seem to like 𝜋thon more. | 1 | 6 |
79,103,936 | 2024-10-18 | https://stackoverflow.com/questions/79103936/merging-numpy-arrays-converts-int-to-decimal | I need to merge 2 arrays together, so if a = [] and b is array([76522, 82096], dtype=int64) the merge will be [76522, 82096], but I am getting it back as floats: array([76522., 82096.]). Here is my code: a = np.concatenate((a, b)) How can I merge both arrays with the same datatype? | Since a is empty, when it gets converted to a numpy array, it chooses a default dtype=float64. Do the conversion explicitly so you can specify the dtype. np.concatenate((np.array(a, dtype=np.int64), b)) | 1 | 2
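An alternative sketch of the same fix: start with an empty array that already has the right dtype, so every later concatenate keeps it:

import numpy as np

a = np.empty(0, dtype=np.int64)                 # empty but typed, unlike []
b = np.array([76522, 82096], dtype=np.int64)

a = np.concatenate((a, b))
print(a, a.dtype)                               # [76522 82096] int64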
79,103,687 | 2024-10-18 | https://stackoverflow.com/questions/79103687/how-to-easily-perform-this-random-matrix-multiplication-with-numpy | I want to produce 2 random 3x4 matrices where the entries are normally distributed, A and B. After that, I have a 2x2 matrix C = [[a,b][c,d]], and I would like to use it to produce 2 new 3x4 matrices A' and B', where A' = a A + b B, B' = c A + d B. In order to produce the matrices A and B, I was thinking to use this line of code: Z = np.random.normal(0.0, 1.0, [2,3, 4]) But, given the matrix C, I don't know how to use simple Numpy vectorization to achieve the matrices A' and B' or, equivalently, a 2x3x4 array containing A' and B'. Any idea? | I think you can use np.einsum np.einsum("ij, jkl -> ikl", C, Z) where "ij, jkl -> ikl" specifies the contraction pattern, where i and j are the indices of the C matrix, and j, k, and l are the indices of the Z array. Example Given dummy data like below np.random.seed(0) Z = np.random.normal(0.0, 1.0, [2, 3, 4]) C = [[1,2],[3,4]] You will see print("AB_prim(einsum): \n", np.einsum("ij, jkl -> ikl", C, Z)) shows AB_prim(einsum): [[[ 3.2861278 0.64350724 1.86646445 2.90824185] [ 4.85571614 -1.38759441 1.57622382 -1.85954869] [ -5.20919848 1.71783569 1.87291597 -0.03005653]] [[ 8.33630794 1.68717169 4.71166688 8.05737691] [ 11.57899026 -3.75246669 4.10253606 -3.87045458] [-10.52161582 3.84626989 3.88987551 1.39416044]]] and A, B = Z[0], Z[1] print("A_prim: \n", C[0][0] * A + C[0][1] * B) print("B_prim: \n", C[1][0] * A + C[1][1] * B) shows A_prim: [[ 3.2861278 0.64350724 1.86646445 2.90824185] [ 4.85571614 -1.38759441 1.57622382 -1.85954869] [-5.20919848 1.71783569 1.87291597 -0.03005653]] B_prim: [[ 8.33630794 1.68717169 4.71166688 8.05737691] [ 11.57899026 -3.75246669 4.10253606 -3.87045458] [-10.52161582 3.84626989 3.88987551 1.39416044]] | 1 | 1 |
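If you prefer to avoid the einsum subscript string, the same contraction can be written with np.tensordot, summing C's second axis against Z's first axis; a quick check that both agree:

AB_prim = np.tensordot(C, Z, axes=(1, 0))      # same as einsum("ij, jkl -> ikl", C, Z), shape (2, 3, 4)
print(np.allclose(AB_prim, np.einsum("ij, jkl -> ikl", C, Z)))   # True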
79,102,009 | 2024-10-18 | https://stackoverflow.com/questions/79102009/how-to-load-tests-from-some-files-and-not-others | I want to run a suite of unit tests in the tests folder. The basic code is: suite = unittest.defaultTestLoader.discover('tests') I want only some of these tests to run, for example test_e1 if file e1.py is present, test_e5 if e5.py is present, but not test_e2 and test_e11 (because files e2.py and e11.py are missing). I tried the pattern argument of the discoverer() function, which defaults to test_*.py, but it does not allow enough control for what I need (see How to match specific files with a shell pattern in unit test discoverer? ). One answer in that thread suggests finding these tests with unittest.TestLoader().loadTestsFromNames, so I tried this code: file_list = [] for some_file in some_file_list: full_filepath = os.path.join(some_dir, some_file) if not os.path.exists(full_filepath): continue file_list.append("tests/test_%s.TestDocs" % some_file) suite = unittest.TestLoader().loadTestsFromNames(file_list) print(suite) The name TestDocs is the class name that inherits from the unit test: class TestDocs(unittest.TestCase): But this shows a list of failed tests such as: <unittest.suite.TestSuite tests=[<unittest.loader._FailedTest testMethod=tests/test_>]> How can I run tests only for a certain set of files? | You are passing hybrid file-path/object names to loadTestsFromNames. Drop the tests from the name, and ensure that tests appears on your module search path, either by Modifying sys.path before calling the method, or Adding tests to the PYTHONPATH environment variable before running your tests. | 2 | 2 |
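Putting both points together, a sketch of the selective loader (some_dir, some_file_list and the test_<name>.py / TestDocs layout are taken from the question):

import os
import sys
import unittest

sys.path.insert(0, "tests")                         # make test_e1, test_e5, ... importable

names = []
for some_file in some_file_list:                    # e.g. ["e1.py", "e5.py", "e2.py"]
    if os.path.exists(os.path.join(some_dir, some_file)):
        stem = os.path.splitext(some_file)[0]       # "e1.py" -> "e1"
        names.append("test_%s.TestDocs" % stem)     # no "tests/" prefix in the dotted name

suite = unittest.TestLoader().loadTestsFromNames(names)
unittest.TextTestRunner().run(suite)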
79,100,411 | 2024-10-18 | https://stackoverflow.com/questions/79100411/pyopengl-calling-glend-gives-opengl-error-1282-after-modifying-imports | I'm trying to follow this tutorial for PyOpenGL, but I get an OpenGL error 1282 when calling glEnd(): import OpenGL import OpenGL.GL from OpenGL.GLUT import * import OpenGL.GLU from OpenGL.raw.GL.VERSION.GL_1_1 import * import time def square(): glBegin(GL_QUADS) glVertex2f(100,100) glVertex2f(200,100) glVertex2f(200, 200) glVertex2f(100, 200) glEnd() def iterate(): glViewport(0, 0, 500,500) glMatrixMode(GL_PROJECTION) glLoadIdentity() glOrtho(0.0, 500, 0.0, 500, 0.0, 1.0) glMatrixMode (GL_MODELVIEW) glLoadIdentity() def showScreen(): glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) glLoadIdentity() iterate() glColor3f(1.0, 0.0, 3.0) square() glutSwapBuffers() time.sleep(0.017) glutInit() glutInitDisplayMode(GLUT_RGBA) glutInitWindowSize(500,500) glutInitWindowPosition(0, 0) wind = glutCreateWindow(b'OpenGL Coding Practice') glutDisplayFunc(showScreen) glutIdleFunc(showScreen) glutMainLoop() This my requirements.txt: numpy==2.1.2 PyOpenGl==3.1.7 This is what the terminal showed: Traceback (most recent call last): File "C:\Users\foo\OPENGLProject\.venv\Lib\site-packages\OpenGL\GLUT\special.py", line 130, in safeCall return function( *args, **named ) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\foo\OPENGLProject\src\main.py", line 37, in showScreen square() File "C:\Users\foo\OPENGLProject\src\main.py", line 14, in square glEnd() File "C:\Users\foo\OPENGLProject\.venv\Lib\site-packages\OpenGL\platform\baseplatform.py", line 415, in __call__ return self( *args, **named ) ^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\foo\OPENGLProject\.venv\Lib\site-packages\OpenGL\error.py", line 230, in glCheckError raise self._errorClass( OpenGL.error.GLError: GLError( err = 1282, description = b'operaci\xf3n no v\xe1lida', baseOperation = glEnd, cArguments = () ) GLUT Display callback <function showScreen at 0x000002BB62CCFEC0> with (),{} failed: returning None GLError( err = 1282, description = b'operaci\xf3n no v\xe1lida', baseOperation = glEnd, cArguments = () ) I'm using Windows 10 and Python 3.12.3 in a venv. What could be the cause of this? As additional notes that might help: I had to manually install freeglut.dll. I didn't install PyOpenGL_accelerate since the installation produced errors. I also tried to change the glBegin argument to GL_LINES and GL_TRIANGLES, use other methods for glVertex* like glVertex3f, glVertex3d or glVertex3fv and change the number of times glVertex* was called; all giving the same result. | Note that the OpenGL-related imports are different from the ones in the tutorial. The tutorial uses from OpenGL.GL import * from OpenGL.GLUT import * from OpenGL.GLU import * In particular, adding import OpenGL imports the PyOpenGL's default error checking mechanism, which accompanies every call to the OpenGL library with a call to glCheckError, and raises an exception if an error is found (ref). This is fine when using modern OpenGL (typically version >=3.3, using VAOs and VBOs), but runs into a problem when using the legacy interface (glBegin, glEnd). From the documentation: GL_INVALID_OPERATION is generated if a command other than glVertex, glColor, glSecondaryColor, glIndex, glNormal, glFogCoord, glTexCoord, glMultiTexCoord, glVertexAttrib, glEvalCoord, glEvalPoint, glArrayElement, glMaterial, glEdgeFlag, glCallList, or glCallLists is executed between the execution of glBegin and the corresponding execution glEnd. 
The problem is that the function glCheckError is not on this list and may not be called here. And hence there is an "invalid operation" error (or operación no válida in your language). One way to solve this, naturally, would be to replace the imports in the code of the question by the ones from the tutorial. Another way would be to at least not import plain OpenGL and the raw version. The latter provides access to the C-interface and this is currently not necessary. In other words, keep: import OpenGL.GL from OpenGL.GLUT import * import OpenGL.GLU Just as a demonstration, the error can also be made to go away by disabling the default error checking mechanism. The purple quad will still render: import OpenGL OpenGL.ERROR_CHECKING = False import OpenGL.GL from OpenGL.GLUT import * import OpenGL.GLU from OpenGL.raw.GL.VERSION.GL_1_1 import * | 2 | 1 |
79,096,787 | 2024-10-17 | https://stackoverflow.com/questions/79096787/incorrect-calculation-in-the-list-processing-logic-based-on-dependencies-between | My code gives me this incorrect output: ['CT', 'X', 'Z'] [100, 1.0583, 1.0633] [200, 3.012, 5.873600000000001] [300, 1.79, 2.5220000000000002] ['Total', 0, 0] The reason is that 5.873600000000001 is incorrectly calculated because according to list_components_and_hierarchical_relationships, it is indicated that Z is a quantity of 1: ['Z', ['PRODUCT', 1]]. Then, F has a quantity of 3: ['F', [['Z', 3]]] G depends on F with a quantity of 1: ['G', [['F', 1]]] Then we can build another branch, where Z is a quantity of 1: ['Z', ['PRODUCTO', 1]], and E has a quantity of 4: ['E', [['Z', 4]]]. G depends on E with a quantity of 2: ['G', [['E', 2]]], leaving the calculation as follows: 0.4804 * 1 * 3 + 0.351 * 1 * 2 + 0.77 * 1 * 4 + 0.2168 * 1 * 4 * 2 + 0.2168 * 1 * 3 * 1 = 7.608 This should be the "correct output": capacity_list = [ ['CT', 'X', 'Z'], [100, 1.0583 * 1, 1.0633 * 1], [200, 0.351 * 1 * 2 + 0.77 * 1 * 3, 0.4804 * 1 * 3 + 0.351 * 1 * 2 + 0.77 * 1 * 4 + 0.2168 * 1 * 4 * 2 + 0.2168 * 1 * 3 * 1], [300, 0.895 * 1 * 2, 0.895 * 1 * 2 + 0.244 * 1 * 3], ['Total'] ] Here is my code: # Initial data routing_info_list = [ ['Art.', 'CT', 'Batch_Size', 'Setup_Time', 'Unit_Run_Time'], ['X', 100, 30, 0.0083, 1.0583], ['Z', 100, 30, 0.0033, 1.0633], ['D', 200, 40, 0.011, 0.351], ['D', 300, 30, 0.015, 0.895], ['E', 200, 10, 0.0, 0.77], ['F', 200, 25, 0.0304, 0.4804], ['F', 300, 50, 0.004, 0.244], ['G', 200, 25, 0.0068, 0.2168] ] capacity_list = [ ['CT', 'X', 'Z'], [100], [200], [300], ['Total'] ] # Represents the component relationship scheme in first image list_components_and_hierarchical_relationships = [ [ ['X', ['PRODUCT', 1]], ['D', [['X', 2]]], ['E', [['X', 3]]] ], [ ['Z', ['PRODUCT', 1]], ['F', [['Z', 3]]], ['D', [['Z', 2]]], ['E', [['Z', 4]]], ['G', [['F', 1]]], ['G', [['E', 2]]] ] ] This code seeks to determine the total execution times of the components in a production or manufacturing process, taking into account the dependency relationships between the different components and subcomponents. 
# Function to search for Unit_Run_Time in routing_info_list def search_execution_time_func(articulo, ct): for row in routing_info_list[1:]: if row[0] == articulo and row[1] == ct: return row[4] return None # Function to process a product def process_product_info_func(components, ct): total_time = 0 for component, dependencies in components: run_time = search_execution_time_func(component, ct) if run_time is not None: # Multiply by the dependencies total_quantity = 1 for dep in dependencies: if isinstance(dep, list) and dep[0] != 'PRODUCT': total_quantity *= dep[1] print( str(run_time) + " hrs * " + str(total_quantity) ) total_time += run_time * total_quantity return total_time # Process each CT row from the product capacity list for each column for row in capacity_list[1:]: ct = row[0] times = [] for product_idx, product in enumerate(list_components_and_hierarchical_relationships): total_time = process_product_info_func(product, ct) times.append(total_time) row.extend(times) # Calculate the 'Total' row # Get the number of columns, excluding 'CT' num_columns = len(capacity_list[0]) - 1 # Sum the values of each column, ignoring the header row and the 'CT' column for col in range(1, num_columns + 1): total_sum = sum(row[col] for row in capacity_list[1:-1]) capacity_list[-1][col] = total_sum # Display the final result for row in capacity_list: print(row) What should I do to correct it, and tell me how it would give the correct output, adding the terms of each branch of the components according to the diagram? | For this task I'd use networkx module: import networkx as nx def create_hierarchical_graph(relationships_list): for relationships in relationships_list: G = nx.DiGraph() root = None for item in relationships: node = item[0] connections = item[1] if connections[0] == "PRODUCT": G.add_edge("PRODUCT", node, weight=connections[1]) root = node else: for connection in connections: parent_node = connection[0] weight = connection[1] G.add_edge(parent_node, node, weight=weight) yield G, root def get_weight(G, path): total = 1 for u, v in zip(path[:-1], path[1:]): total *= G.edges[u, v]["weight"] return total list_components_and_hierarchical_relationships = [ [ ["X", ["PRODUCT", 1]], ["D", [["X", 2]]], ["E", [["X", 3]]], ], [ ["Z", ["PRODUCT", 1]], ["F", [["Z", 3]]], ["D", [["Z", 2]]], ["E", [["Z", 4]]], ["G", [["F", 1]]], ["G", [["E", 2]]], ], ] routing_info_list = [ ["Art.", "CT", "Batch_Size", "Setup_Time", "Unit_Run_Time"], ["X", 100, 30, 0.0083, 1.0583], ["Z", 100, 30, 0.0033, 1.0633], ["D", 200, 40, 0.011, 0.351], ["D", 300, 30, 0.015, 0.895], ["E", 200, 10, 0.0, 0.77], ["F", 200, 25, 0.0304, 0.4804], ["F", 300, 50, 0.004, 0.244], ["G", 200, 25, 0.0068, 0.2168], ] # transform routing info list to # easily find unit run time unit_run_times = {} for l in routing_info_list[1:]: unit_run_times.setdefault(l[1], {}).setdefault(l[0], {}) unit_run_times[l[1]][l[0]] = l[-1] for G, root in create_hierarchical_graph(list_components_and_hierarchical_relationships): print(f"{root=}") for CT in [100, 200, 300]: s = 0 for a in unit_run_times[CT]: for p in nx.all_simple_paths(G, "PRODUCT", a): s += get_weight(G, p) * unit_run_times[CT][a] print(CT, s) print("-" * 80) Prints: root='X' 100 1.0583 200 3.012 300 1.79 -------------------------------------------------------------------------------- root='Z' 100 1.0633 200 7.6080000000000005 300 2.5220000000000002 -------------------------------------------------------------------------------- | 3 | 1 |
79,099,118 | 2024-10-17 | https://stackoverflow.com/questions/79099118/override-value-in-pydantic-model-with-environment-variable | I am building some configuration logic for a Python 3 app, and trying to use pydantic and pydantic-settings to manage validation etc. I'm able to load raw settings from a YAML file and create my settings object from them. I'm also able to read a value from an environment variable. But I can't figure out how to make the environment variable value take precedence over the raw settings: import os import yaml as pyyaml from pydantic_settings import BaseSettings, SettingsConfigDict class FooSettings(BaseSettings): foo: int bar: str model_config = SettingsConfigDict(env_prefix='FOOCFG__') raw_yaml = """ foo: 13 bar: baz """ os.environ.setdefault("FOOCFG__FOO", "42") raw_settings = pyyaml.safe_load(raw_yaml) settings = FooSettings(**raw_settings) assert settings.foo == 42 If I comment out foo: 13 in the input yaml, the assertion passes. How can I make the env value take precedence? | Are you sure you want the environment to take precedence? While not ubiquitous, it is very common for environment variables to have the lowest precedence (typically, the ordering is built-in defaults, then environment variables, then configuration files, then command line options). Deviating from this convention can be surprising. You could get the behavior you want for a specific field by adding a field validator that checks for the appropriate environment variable and uses that value in preference to an existing value if it is avaiable. Something like: import os import yaml as pyyaml from pydantic_settings import BaseSettings, SettingsConfigDict from pydantic import field_validator class FooSettings(BaseSettings): model_config = SettingsConfigDict(env_prefix="FOOCFG__") foo: int bar: str @field_validator("foo", mode="after") @classmethod def validate_foo(cls, val): '''Always use the value from the environment if it's available.''' if env_val := os.environ.get(f"{cls.model_config['env_prefix']}FOO"): return int(env_val) return val raw_yaml = """ foo: 13 bar: baz """ os.environ.setdefault("FOOCFG__FOO", "42") raw_settings = pyyaml.safe_load(raw_yaml) settings = FooSettings(**raw_settings) assert settings.foo == 42 If you wanted to do this for all fields, you could use a model validator instead. Maybe something like this? @model_validator(mode="after") @classmethod def validate_foo(cls, data): for field in cls.model_fields: env_name = f'{cls.model_config["env_prefix"]}{field.upper()}' if env_val := os.environ.get(env_name): setattr(data, field, type(getattr(data, field))(env_val)) return data | 2 | 2 |
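Another option, sketched here, is to reorder the settings sources themselves: pydantic-settings lets a subclass override settings_customise_sources, and whichever source comes first in the returned tuple wins, so putting env_settings ahead of init_settings makes the environment beat the YAML-derived keyword arguments:

from pydantic_settings import BaseSettings, SettingsConfigDict

class FooSettings(BaseSettings):
    model_config = SettingsConfigDict(env_prefix="FOOCFG__")

    foo: int
    bar: str

    @classmethod
    def settings_customise_sources(
        cls, settings_cls, init_settings, env_settings, dotenv_settings, file_secret_settings
    ):
        # first source has the highest priority
        return env_settings, init_settings, dotenv_settings, file_secret_settings

settings = FooSettings(**raw_settings)   # raw_settings parsed from the YAML as before
assert settings.foo == 42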
79,099,138 | 2024-10-17 | https://stackoverflow.com/questions/79099138/usage-of-retain-graph-in-pytorch | I get error if I don't supply retain_graph=True in y1.backward() import torch x = torch.tensor([2.0], requires_grad=True) y = torch.tensor([3.0], requires_grad=True) f = x+y z = 2*f y1 = z**2 y2 = z**3 y1.backward() y2.backward() Traceback (most recent call last): File "/Users/a0m08er/pytorch/pytorch_tutorial/tensor.py", line 58, in <module> y2.backward() File "/Users/a0m08er/pytorch/lib/python3.11/site-packages/torch/_tensor.py", line 521, in backward torch.autograd.backward( File "/Users/a0m08er/pytorch/lib/python3.11/site-packages/torch/autograd/__init__.py", line 289, in backward _engine_run_backward( File "/Users/a0m08er/pytorch/lib/python3.11/site-packages/torch/autograd/graph.py", line 769, in _engine_run_backward return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward. But I don't get error when I do this: import torch x = torch.tensor([2.0], requires_grad=True) y = torch.tensor([3.0], requires_grad=True) z = x+y y1 = z**2 y2 = z**3 y1.backward() y2.backward() Since z is a common node for y1 and y2 why it is not showing me error when I do y2.backward() | basically the error Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward. Error comes when the backwards pass tries to access tensors that were saved for the backwards pass (using ctx.save_for_backward), and those are not present (usually because they were freed after doing the first backward pass witoutretain_graph=True). So the computation graph is still there after the first backwards pass, only the tensors saved in context were freed. But the thing is, addition operations do not need to save tensors for backwards pass (the gradient along each of the inputs is the same as the gradient over the sum — so the gradient is just passed along the graph without doing any operation, no need to save anything for backward). Thus the error doesn't happen if the only shared node is an addition node. In comparison, multiplication needs to save the input values for the backward pass (since the gradient for a * b along b is a * grad(a * b)). Thus the exception gets raised when it tries to access them | 3 | 3 |
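The saved-tensor explanation can be checked directly: change the shared node from an addition to a multiplication and the second backward fails, because Mul has to save its inputs and they are freed by the first backward:

import torch

x = torch.tensor([2.0], requires_grad=True)
y = torch.tensor([3.0], requires_grad=True)

z = x * y            # multiplication saves x and y for its backward
y1 = z**2
y2 = z**3

y1.backward()        # frees the tensors saved by the Mul node
y2.backward()        # RuntimeError: Trying to backward through the graph a second time ...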
79,098,721 | 2024-10-17 | https://stackoverflow.com/questions/79098721/fixing-badly-formatted-floats-with-numpy | I am reading a text file only containing floating point numbers using numpy.loadtxt. However, some of the data is corrupted and reads something like X.XXXXXXX+YYY instead of X.XXXXXXXE+YY (Missing E char). I'd like to interpret them as the intended floating point number (or NaN if impossible) and wondered if there was any easy way to do this upon reading the file instead of manually correcting each entries in the file since it contains hundreds of thousands of lines of data. MWE: import numpy as np data = np.loadtxt("path/to/datafile") Example of error raised: ValueError: could not convert string '0.710084093014+195' to float64 at row 862190, column 6 | The following works: import numpy as np import re def converter(txt): txt = re.sub(r"(?<=\d)(?<!E)[\+\-]", lambda x: 'E'+x[0], txt.decode()) return float(txt) np.loadtxt("path/to/datafile", converters = converter) | 2 | 2 |
79,098,997 | 2024-10-17 | https://stackoverflow.com/questions/79098997/python-date-time-missing-month-but-it-is-there | I've been trying to create this machine learning tool to make predictions on the amount of orders in the next year per month but I have been getting this error: ValueError: to assemble mappings requires at least that [year, month, day] be specified: [month] is missing here is my code. I am passing in the month and it should be getting assigned a number that is supposed to represent the respective month, but form some reason this does not appear to be happening. I am also aware that the months are not all capitalized but this should not be an issue as they are all getting passed to lowercase. import pandas as pd # Example DataFrame creation from CSV (replace this with your actual CSV upload logic) data = { 'Year': [2021, 2021, 2021, 2022, 2022, 2023, 2023], 'Month': ['january', 'february', 'march', 'january', 'february', 'march', 'april'], 'OrderCount': [60, 55, 70, 64, 56, 76, 70] } df = pd.DataFrame(data) # Convert 'Month' to numerical values (January = 1, February = 2, etc.) month_map = { 'january': 1, 'february': 2, 'march': 3, 'april': 4, 'may': 5, 'june': 6, 'july': 7, 'august': 8, 'september': 9, 'october': 10, 'november': 11, 'december': 12 } # Map month names to numbers df['Month'] = df['Month'].str.lower() df['MonthNum'] = df['Month'].map(month_map) # Convert Year and MonthNum to integers df['Year'] = df['Year'].astype(int) df['MonthNum'] = df['MonthNum'].astype(int) # Combine Year and Month into a DateTimeIndex # The next line is where the issue is likely occurring df['Date'] = pd.to_datetime(df[['Year', 'MonthNum']].assign(DAY=1)) # Print the resulting DataFrame to see if 'Date' was successfully created print(df) | If you check to_datetime documentation, you will find that it requires the column called month. Your month column contains the month names. You should rename the columns before using to_datetime like this: df=df.rename(columns={"Month": "MonthName", "MonthNum": "Month"}). This way, pandas will look for the month numeric column and find it. | 3 | 1 |
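Applied to the question's frame, the fix is one rename on the sliced columns passed to to_datetime, so the month-name column itself can stay untouched; a minimal sketch:

df['Date'] = pd.to_datetime(
    df[['Year', 'MonthNum']]
      .rename(columns={'MonthNum': 'Month'})
      .assign(Day=1)
)
print(df[['Year', 'MonthNum', 'Date']].head())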
79,098,592 | 2024-10-17 | https://stackoverflow.com/questions/79098592/how-to-identify-cases-where-both-elements-of-a-pair-are-greater-than-others-res | I have a case where I have a list of pairs, each with two numerical values. I want to find the subset of these elements containing only those pairs that are not exceeded by both elements of another (let's say "eclipsed" by another). For example, the pair (1,2) is eclipsed by (4,5) because both elements are less than the respective elements in the other pair. Also, (1,2) is considered eclipsed by (1,3) because while the first element is equal to the other and the second element is less than the other's. However the pair (2, 10) is not eclipsed by (9, 9) because only one of its elements is exceeded by the other's. Cases where the pairs are identical should be reduced to just one (duplicates removed). Ultimately, I am looking to reduce the list of pairs to a subset where only pairs that were not eclipsed by any others remain. For example, take the following list: (1,2) (1,5) (2,2) (1,2) (2,2) (9,1) (1,1) This should be reduced to the following: (1,5) (2,2) (9,1) My initial implementation of this in python was the following, using polars: import polars as pl pairs_list = [ (1,2), (1,5), (2,2), (1,2), (2,2), (9,1), (1,1), ] # tabulate pair elements as 'a' and 'b' pairs = pl.DataFrame( data=pairs_list, schema={'a': pl.UInt32, 'b': pl.UInt32}, orient='row', ) # eliminate any duplicate pairs unique_pairs = pairs.unique() # self join so every pair can be compared (except against itself) comparison_inputs = ( unique_pairs .join( unique_pairs, how='cross', suffix='_comp', ) .filter( pl.any_horizontal( pl.col('a') != pl.col('a_comp'), pl.col('b') != pl.col('b_comp'), ) ) ) # flag pairs that were eclipsed by others comparison_results = ( comparison_inputs .with_columns( pl.all_horizontal( pl.col('a') <= pl.col('a_comp'), pl.col('b') <= pl.col('b_comp'), ) .alias('is_eclipsed') ) ) # remove pairs that were eclipsed by at least one other principal_pairs = ( comparison_results .group_by('a', 'b') .agg(pl.col('is_eclipsed').any()) .filter(is_eclipsed=False) .select('a', 'b') ) While this does appear to work, it is computationally infeasible for large datasets due to the sheer size of the self-joined table. I have considered filtering the comparison_inputs table down by removing redundant reversed comparisons, e.g., pair X vs pair Y and pair Y vs pair X don't both need to be in the table as they currently are, but changing that requires an additional condition in each comparison to report which element was eclipsed in the comparison and only reduces the dataset in half, which isn't that significant. I have found I can reduce the needed comparisons substantially by doing a window function filter that filters to only the max b for each a and vice versa before doing the self joining step. In other words: unique_pairs = ( pairs .unique() .filter(a = pl.col('a').last().over('b', order_by='a') .filter(b = pl.col('b').last().over('a', order_by='b') But of course this only does so much and depends on the cardinality of a and b. I still need to self-join and compare after this to get a result. I am curious if there is already some algorithm established for calculating this and whether anyone has ideas for a more efficient method. Interested to learn more anyway. Thanks in advance. | What we can do from my perspective is. 
First, we remove duplicates and sort the pairs - First element in des order and with the ties in first element, sort by second element in des order unique_pairs = sorted(set(pairs), reverse=True) By keeping the condition for each pair If - b is greater than the maximum second element seen so far for all previous pairs with larger first elements, this pair cannot be eclipsed. from typing import List, Tuple import bisect def find_non_eclipsed_pairs(pairs: List[Tuple[int, int]]) -> List[Tuple[int, int]]: if not pairs: return [] unique_pairs = sorted(set(pairs), reverse=True) result = [] max_second_elements = [] for pair in unique_pairs: if not max_second_elements or pair[1] > max_second_elements[-1]: result.append(pair) while max_second_elements and max_second_elements[-1] <= pair[1]: max_second_elements.pop() max_second_elements.append(pair[1]) return sorted(result) Testing def test_pareto_pairs(): test_cases = [ ( [(1,2), (1,5), (2,2), (1,2), (2,2), (9,1), (1,1)], [(1,5), (2,2), (9,1)] ), ( [], [] ), ( [(1,1)], [(1,1)] ), ( [(1,1), (2,2), (3,3), (4,4)], [(4,4)] ), ( [(1,5), (5,1)], [(1,5), (5,1)] ), ( [(1,1), (1,2), (2,1), (2,2), (3,1), (1,3)], [(1,3), (2,2), (3,1)] ) ] for i, (input_pairs, expected) in enumerate(test_cases, 1): result = find_non_eclipsed_pairs(input_pairs) assert result == sorted(expected), f"Test case {i} failed: expected {expected}, got {result}" print(f"Test case {i} passed") if __name__ == "__main__": test_pareto_pairs() pairs_list = [ (1,2), (1,5), (2,2), (1,2), (2,2), (9,1), (1,1), ] result = find_non_eclipsed_pairs(pairs_list) print("\nOriginal pairs:", pairs_list) print("Non-eclipsed pairs:", result) Which results =================== RESTART: C:/Users/Bhargav/Desktop/test.py ================== Test case 1 passed Test case 2 passed Test case 3 passed Test case 4 passed Test case 5 passed Test case 6 passed Original pairs: [(1, 2), (1, 5), (2, 2), (1, 2), (2, 2), (9, 1), (1, 1)] Non-eclipsed pairs: [(1, 5), (2, 2), (9, 1)] Time complexity - O(n log n) Space complexity is O(n) Edit: Thanks for @no comment for suggesting using sort with reverse=True unique_pairs = sorted(set(pairs), reverse=True) | 4 | 4 |
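For completeness, the same sorted-scan idea can also be expressed natively in Polars, which avoids the Python-level loop the accepted answer uses. This is a sketch, not part of the accepted answer; it assumes a reasonably recent Polars version that provides Expr.cum_max.

import polars as pl

pairs = pl.DataFrame(
    data=[(1, 2), (1, 5), (2, 2), (1, 2), (2, 2), (9, 1), (1, 1)],
    schema={"a": pl.UInt32, "b": pl.UInt32},
    orient="row",
)

principal_pairs = (
    pairs.unique()
    .sort(["a", "b"], descending=True)                    # scan from the largest 'a' down
    .with_columns(prev_best=pl.col("b").cum_max().shift(1))  # best 'b' seen among larger/equal 'a'
    .filter(pl.col("prev_best").is_null() | (pl.col("b") > pl.col("prev_best")))
    .drop("prev_best")
)
print(principal_pairs)  # (9,1), (2,2), (1,5) in scan order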
79,098,013 | 2024-10-17 | https://stackoverflow.com/questions/79098013/precision-of-jax | I have a question regarding the precision of float in JAX. For the following code, import numpy as np import jax.numpy as jnp print('jnp.arctan(10) is:','%.60f' % jnp.arctan(10)) print('np.arctan(10) is:','%.60f' % np.arctan(10)) jnp.arctan(10) is: 1.471127629280090332031250000000000000000000000000000000000000 np.arctan(10) is: 1.471127674303734700345103192375972867012023925781250000000000 print('jnp.arctan(10+1e-7) is:','%.60f' % jnp.arctan(10+1e-7)) print('np.arctan(10+1e-7) is:','%.60f' % np.arctan(10+1e-7)) jnp.arctan(10+1e-7) is: 1.471127629280090332031250000000000000000000000000000000000000 np.arctan(10+1e-7) is: 1.471127675293833592107262120407540351152420043945312500000000 jnp gave identical results for arctan(x) for a small change of input variable (1e-7), but np did not. My question is how to let jax.numpy get the right number for a small change of x? Any comments are appreciated. | JAX defaults to float32 computation, which has a relative precision of about 1E-7. This means that your two inputs are effectively identical: >>> np.float32(10) == np.float32(10 + 1E-7) True If you want 64-bit precision like NumPy, you can enable it as discussed at JAX sharp bits: double precision, and then the results will match to 64-bit precision: import jax jax.config.update('jax_enable_x64', True) import jax.numpy as jnp import numpy as np print('jnp.arctan(10) is:','%.60f' % jnp.arctan(10)) print('np.arctan(10) is: ','%.60f' % np.arctan(10)) print('jnp.arctan(10+1e-7) is:','%.60f' % jnp.arctan(10+1e-7)) print('np.arctan(10+1e-7) is: ','%.60f' % np.arctan(10+1e-7)) jnp.arctan(10) is: 1.471127674303734700345103192375972867012023925781250000000000 np.arctan(10) is: 1.471127674303734700345103192375972867012023925781250000000000 jnp.arctan(10+1e-7) is: 1.471127675293833592107262120407540351152420043945312500000000 np.arctan(10+1e-7) is: 1.471127675293833592107262120407540351152420043945312500000000 (but please note that even the 64-bit precision used by Python and NumPy is only accurate to about one part in 10^16, so most of the digits in the representation you printed are inaccurate compared to the true arctan value). | 2 | 4 |
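To see the ~1e-7 versus ~1e-16 relative precision the answer refers to, you can inspect machine epsilon directly (shown with NumPy so it runs regardless of the JAX x64 setting):

import numpy as np

print(np.finfo(np.float32).eps)  # ~1.19e-07: float32 cannot resolve 10 vs 10 + 1e-7
print(np.finfo(np.float64).eps)  # ~2.22e-16: float64 can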
79,097,421 | 2024-10-17 | https://stackoverflow.com/questions/79097421/rolling-sum-with-right-closed-interval-in-duckdb | In Polars / pandas I can do a rolling sum where row each row the window is (row - 10 minutes, row]. For example: import polars as pl data = { "timestamp": [ "2023-08-04 10:00:00", "2023-08-04 10:05:00", "2023-08-04 10:10:00", "2023-08-04 10:10:00", "2023-08-04 10:20:00", "2023-08-04 10:20:00", ], "value": [1, 2, 3, 4, 5, 6], } df = pl.DataFrame(data).with_columns(pl.col("timestamp").str.strptime(pl.Datetime)) print( df.with_columns(pl.col("value").rolling_sum_by("timestamp", "10m", closed="right")) ) This outputs shape: (6, 2) ┌─────────────────────┬───────┐ │ timestamp ┆ value │ │ --- ┆ --- │ │ datetime[μs] ┆ i64 │ ╞═════════════════════╪═══════╡ │ 2023-08-04 10:00:00 ┆ 1 │ │ 2023-08-04 10:05:00 ┆ 3 │ │ 2023-08-04 10:10:00 ┆ 9 │ │ 2023-08-04 10:10:00 ┆ 9 │ │ 2023-08-04 10:20:00 ┆ 11 │ │ 2023-08-04 10:20:00 ┆ 11 │ └─────────────────────┴───────┘ How can I do this in DuckDB? Closest I could come up with is: rel = duckdb.sql(""" SELECT timestamp, value, SUM(value) OVER roll AS rolling_sum FROM df WINDOW roll AS ( ORDER BY timestamp RANGE BETWEEN INTERVAL 10 minutes PRECEDING AND CURRENT ROW ) ORDER BY timestamp; """) print(rel) but that makes the window [row - 10 minutes, row], not (row - 10 minutes, row] Alternatively, I could do rel = duckdb.sql(""" SELECT timestamp, value, SUM(value) OVER roll AS rolling_sum FROM df WINDOW roll AS ( ORDER BY timestamp RANGE BETWEEN INTERVAL '10 minutes' - INTERVAL '1 microsecond' PRECEDING AND CURRENT ROW ) ORDER BY timestamp; """) but I'm not sure about how robust that'd be? | Maybe not particularly neat, but from the top of my head you could exclude the rows which are exactly 10 minutes back by additional window clause import duckdb rel = duckdb.sql(""" SELECT timestamp, value, SUM(value) OVER roll - coalesce(SUM(value) OVER exclude, 0) AS rolling_sum FROM df WINDOW roll AS ( ORDER BY timestamp RANGE BETWEEN INTERVAL 10 minutes PRECEDING AND CURRENT ROW ), exclude AS ( ORDER BY timestamp RANGE BETWEEN INTERVAL 10 minutes PRECEDING AND INTERVAL 10 minutes PRECEDING ) ORDER BY timestamp; """) print(rel) ┌─────────────────────┬───────┬─────────────┐ │ timestamp │ value │ rolling_sum │ │ timestamp │ int64 │ int128 │ ├─────────────────────┼───────┼─────────────┤ │ 2023-08-04 10:00:00 │ 1 │ 1 │ │ 2023-08-04 10:05:00 │ 2 │ 3 │ │ 2023-08-04 10:10:00 │ 3 │ 9 │ │ 2023-08-04 10:10:00 │ 4 │ 9 │ │ 2023-08-04 10:20:00 │ 5 │ 11 │ │ 2023-08-04 10:20:00 │ 6 │ 11 │ └─────────────────────┴───────┴─────────────┘ | 5 | 2 |
79,097,636 | 2024-10-17 | https://stackoverflow.com/questions/79097636/looping-if-statement | I want to loop through an array with an if statement and only execute the else after it has looped through the entire array. This is how I have my code now: for index, nameList in enumerate(checkedName): if record["first_name"] == nameList["first_name"] and record["last_name"] == nameList["last_name"]: print("Match name") else: print("No match name") checkedName.append({"id" : record["id"], "first_name" : record["first_name"], "last_name" : record["last_name"]}) But I would like it to be more like this: for index, nameList in enumerate(checkedName): if record["first_name"] == nameList["first_name"] and record["last_name"] == nameList["last_name"]: print("Match name") else: print("No match name") checkedName.append({"id" : record["id"], "first_name" : record["first_name"], "last_name" : record["last_name"]}) But I have no idea how to do this in a way that isn't messy. I have an idea, but I feel like I could do this in a shorter way. | Using any() is more Pythonic and readable. The logic is: check whether any record in the list matches your conditions. if any(record["first_name"] == nameList["first_name"] and record["last_name"] == nameList["last_name"] for nameList in checkedName): print("Match name") else: print("No match name") checkedName.append({ "id": record["id"], "first_name": record["first_name"], "last_name": record["last_name"] }) | 1 | 5 |
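A self-contained sketch of the any() approach from the answer, with made-up sample data for record and checkedName (those names come from the question; the sample values are assumptions):

checkedName = [{"id": 1, "first_name": "Ada", "last_name": "Lovelace"}]
record = {"id": 2, "first_name": "Alan", "last_name": "Turing"}

# True as soon as one entry matches both names; False only after checking the whole list
if any(
    record["first_name"] == entry["first_name"] and record["last_name"] == entry["last_name"]
    for entry in checkedName
):
    print("Match name")
else:
    print("No match name")
    checkedName.append(
        {"id": record["id"], "first_name": record["first_name"], "last_name": record["last_name"]}
    )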
79,096,016 | 2024-10-16 | https://stackoverflow.com/questions/79096016/how-do-i-get-the-methods-with-parameters-of-a-python-class-while-keeping-the-o | The dir() function prints the methods in alphabetical order. Is there a way to get the methods of a class (with their parameters) but keeping the original order of the methods? Here's my code return [ (m, getattr(PythonClass, m).__code__.co_varnames) for m in dir(PythonClass) ] | As @Barmar mentioned in the comments, you can use the __dict__ attribute of a class to access its attributes. Since Python 3.7 dict keys are guaranteed to retain their insertion order, so by iterating over PythonClass.__dict__ you can obtain attributes of PythonClass in the order of definition. It is also more idiomatic to use the vars function instead of the __dict__ attribute to access the attributes dict. To filter class attributes for methods, you can use inspect.isfunction to test if an attribute is a function: from inspect import isfunction class PythonClass: var = 1 def method_b(self, b): ... def method_a(self, a): ... print([ (name, obj.__code__.co_varnames) for name, obj in vars(PythonClass).items() if isfunction(obj) ]) This outputs: [('method_b', ('self', 'b')), ('method_a', ('self', 'a'))] Demo: https://ideone.com/rNMcYO | 1 | 3 |
79,096,452 | 2024-10-17 | https://stackoverflow.com/questions/79096452/what-does-mean-in-python | I have been told that in Python ''' is used to indicate the start of a multi-line string. However, I have also been taught that this syntax allows for the documentation of functions and modules. Googling, surprisingly, doesn't give a clear answer on what ''' definitively refers to. So how should I remember, as a beginner, what this Python syntax refers to? A multi-line string? An operator to assist documentation? Both? Something else? | Triple quotes ''' (and """) are a marker for a string literal, just like the quote characters ' and ", which you can see in Python's grammar for String and Bytes literals. Their only difference from regular quotes is that newlines and unescaped quote characters are allowed within a triple-quoted string literal, which makes triple quotes ideal for documentation in natural language, where newlines and unescaped quote characters appear often. This is why, by convention, triple quotes are used for docstrings, as suggested in PEP 257, although you can still use regular quotes for docstrings. | 2 | 1 |
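A short illustration of the answer: the same triple-quote syntax produces an ordinary multi-line string in one place and a docstring in another; the docstring is simply a string literal placed first in the function body.

multi_line = '''This string
spans two lines and may contain "unescaped quotes".'''

def greet(name):
    '''Return a greeting. This triple-quoted literal is the docstring.'''
    return f"Hello, {name}!"

print(multi_line)
print(greet.__doc__)   # prints the docstring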
79,095,809 | 2024-10-16 | https://stackoverflow.com/questions/79095809/using-pyparsing-for-parsing-filter-expressions | I'm currently trying to write a parser (using pyparsing) that can parse strings that can then be applied to a (pandas) dataframe to filter data. I've already got it working after much trial & error for all kinds of example strings, however I am having trouble with extending it further from this point on. First, here is my current code (that should be working if you just copy paste, at least for my Python 3.11.9 and pyparsing 3.1.2): import pyparsing as pp # Define the components of the grammar field_name = pp.Word(pp.alphas + "_", pp.alphanums + "_") action = pp.one_of("include exclude") sub_action = pp.one_of("equals contains starts_with ends_with greater_than not_equals not_contains not_starts_with not_ends_with empty not_empty less_than less_than_or_equal_to greater_than_or_equal_to between regex in_list not_in_list") # Custom regex pattern parser that handles regex ending at the first space def regex_pattern(): def parse_regex(t): # Join tokens to form the regex pattern return ''.join(t[0]) return pp.Regex(r'[^ ]+')("regex").setParseAction(parse_regex) # Define value as either a quoted string, a regex pattern, or a simple word with allowed characters quoted_string = pp.QuotedString('"') unquoted_value = pp.Word(pp.alphanums + "_-;, ") | pp.Regex(r'[^/]+') value = pp.Optional(quoted_string | regex_pattern() | unquoted_value)("value") slash = pp.Suppress("/") filter_expr = pp.Group(field_name("field") + slash + action("action") + slash + sub_action("sub_action") + pp.Optional(slash + value, default="")) # Define logical operators and_op = pp.one_of("AND and") or_op = pp.one_of("OR or") not_op = pp.one_of("NOT not") # Define the overall expression using infix notation expression = pp.infixNotation(filter_expr, [ (not_op, 1, pp.opAssoc.RIGHT), (and_op, 2, pp.opAssoc.LEFT), (or_op, 2, pp.opAssoc.LEFT) ]) # List of test filters test_filters = [ "order_type/exclude/contains/STOP ORDER AND order_validity/exclude/contains/GOOD FOR DAY", "order_status/include/regex/^New$ AND order_id/include/equals/123;124;125", "order_id/include/equals/123;124;125", "order_id/include/equals/125 OR currency/include/equals/EUR", "trade_amount/include/greater_than/1500 AND currency/include/equals/USD", "trade_amount/include/between/1200-2000 AND currency/include/in_list/USD,EUR", "order_status/include/starts_with/New;Filled OR order_status/include/ends_with/ed", "order_status/exclude/empty AND filter_code/include/not_empty", "order_status/include/regex/^New$", "order_status/include/regex/^New$ OR order_status/include/regex/^Changed$", "order_status/include/contains/New;Changed" ] # Loop over test filters, parse each, and display the results for test_string in test_filters: print(f"Testing filter: {test_string}") try: parse_result = expression.parse_string(test_string, parseAll=True).asList()[0] print(f"Parsed result: {parse_result}") except Exception as e: print(f"Error with filter: {test_string}") print(e) print("\n") Now, if you run the code, you'll notice that all the test strings parse just fine, except the first element of the list, "order_type/exclude/contains/STOP ORDER AND order_validity/exclude/contains/GOOD FOR DAY". The problem (as far as I can tell) is that the empty space between "STOP" and "ORDER" is being recognized as the end of the "value" part of that part of the group, and then it breaks. 
What I've tried is to use Skipsto to just skip to the next logical operator after the sub_action part is done, but that didn't work. Also, I wasn't sure how extendable that is, because in theory it should even be possible to have many chained expressions (e.g. part1 AND part2 OR part3), where each part consits of the 3-4 elements (field_name, action, sub_action and the optional value). I've also tried extending the unquoted_value to also include empty spaces, but that changed nothing, either. I've also looked at some of the examples over at https://github.com/pyparsing/pyparsing/tree/master/examples, but I couldn't really see anything that was similar to my use case. (Maybe once my code is working properly, it could be added as an example there, not sure how useful my case is to others). | Rather than define a term that includes the spaces, better to define a term that parses words, so that it can detect and stop if it finds a word that shouldn't be included (like one of the logical operator words). I did this and then wrapped it in a Combine that a) allows for whitespace between the words (adjacent=False), and b) joins them back together with single spaces. I made these changes in your parser, and things look like they will work better for you: # I used CaselessKeywords so that you get a repeatable return value, # regardless of the input and_op = pp.CaselessKeyword("and") or_op = pp.CaselessKeyword("or") not_op = pp.CaselessKeyword("not") # define an expression for any logical operator, to be used when # defining words that unquoted_value should not include any_operator = and_op | or_op | not_op value_chars = pp.alphanums + "--;," # an unquoted_value is one or more words that are not operators unquoted_value = pp.Combine(pp.OneOrMore(pp.Word(value_chars), stop_on=any_operator), join_string=" ", adjacent=False) # can also be written as # unquoted_value = pp.Combine(pp.Word(value_chars)[1, ...:any_operator], join_string=" ", adjacent=False) # regex_pattern() can be replaced by this regex single_word = pp.Regex(r"\S+") value = (quoted_string | unquoted_value | single_word)("value") Lastly, your testing loop looks a lot like the loops I used to write in many of these StackOverflow question responses. I wrote them so many times that I finally added a ParserElement method run_tests, which you can call like this to replace your test loop: expression.run_tests(test_filters) | 2 | 1 |
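Putting the answer's pieces back into the question's grammar gives the end-to-end sketch below. It is an assembled example, not the answerer's verbatim code: the sub_action list is shortened, and value_chars is written here as pp.alphanums + "_-;," to keep the underscore from the original question — treat that choice as an assumption.

import pyparsing as pp

field_name = pp.Word(pp.alphas + "_", pp.alphanums + "_")
action = pp.one_of("include exclude")
sub_action = pp.one_of("equals contains starts_with ends_with regex between in_list")

and_op = pp.CaselessKeyword("and")
or_op = pp.CaselessKeyword("or")
not_op = pp.CaselessKeyword("not")
any_operator = and_op | or_op | not_op

quoted_string = pp.QuotedString('"')
value_chars = pp.alphanums + "_-;,"
# one or more words that are not logical operators, re-joined with single spaces
unquoted_value = pp.Combine(
    pp.OneOrMore(pp.Word(value_chars), stop_on=any_operator),
    join_string=" ",
    adjacent=False,
)
single_word = pp.Regex(r"\S+")   # fallback for regex-style values such as ^New$
value = (quoted_string | unquoted_value | single_word)("value")

slash = pp.Suppress("/")
filter_expr = pp.Group(
    field_name("field") + slash + action("action") + slash + sub_action("sub_action")
    + pp.Optional(slash + value, default="")
)

expression = pp.infix_notation(filter_expr, [
    (not_op, 1, pp.opAssoc.RIGHT),
    (and_op, 2, pp.opAssoc.LEFT),
    (or_op, 2, pp.opAssoc.LEFT),
])

print(expression.parse_string(
    "order_type/exclude/contains/STOP ORDER AND order_validity/exclude/contains/GOOD FOR DAY",
    parse_all=True,
).as_list())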
79,095,934 | 2024-10-16 | https://stackoverflow.com/questions/79095934/how-to-extract-a-cell-value-from-a-dataframe | I am trying to extract a cell value from a dataframe, but I always get a Series instead of a scalar value. For example: df_test=pd.DataFrame({'Well':['test1','test2','test3'],'Region':['east','west','east']}) df_test Well Region 0 test1 east 1 test2 west 2 test3 east well='test2' region_thiswell=df_test.loc[df_test['Well']==well,'Region'] region_thiswell 1 west Name: Region, dtype: object I am expecting the variable region_thiswell to be equal to the string 'west' only. Why am I getting a Series? Thanks | A potential issue with item/values/iloc is that it will yield an exception if there is no match. squeeze will return an empty Series: df_test.loc[df_test["Well"] == 'test999', "Region"].item() # ValueError: can only convert an array of size 1 to a Python scalar df_test.loc[df_test["Well"] == 'test999', "Region"].values[0] # IndexError: index 0 is out of bounds for axis 0 with size 0 df_test.loc[df_test["Well"] == 'test999', "Region"].iloc[0] # IndexError: single positional indexer is out-of-bounds df_test.loc[df_test["Well"] == 'test999', "Region"].squeeze() # Series([], Name: Region, dtype: object) One robust approach to get a scalar would be to use next+iter: next(iter(df_test.loc[df_test["Well"] == 'test2', "Region"]), None) # 'west' next(iter(df_test.loc[df_test["Well"] == 'test999', "Region"]), None) # None In case of multiple matches you'll get the first one: next(iter(df_test.loc[df_test["Well"].str.startswith('test'), "Region"]), None) # 'east' Alternatively, but more verbose and less efficient: df_test.loc[df_test["Well"] == 'test999', "Region"].reset_index().get(0) | 2 | 3 |
79,096,122 | 2024-10-17 | https://stackoverflow.com/questions/79096122/call-function-within-a-function-but-keep-default-values-if-not-specified | I have two sub functions that feed into one main functions as defined below: Sub function 1: def func(x=1, y=2): z = x + y return z Sub function 2: def func2(a=3, b=4): c = a - b return c Main function: def finalFunc(lemons, input1, input2, input3, input4): result = func(input1, input2) + func2(input3, input4) + lemons return result How do I call my main function but if the values for the sub functions aren't specified, they're treated as default? Similarly, if they are specified, then use them instead, e.g. >>> finalFunc(lemons=1) 3 or >>> finalFunc(lemons=1, input1=4, input4=6) 4 I don't want to specify the default values in my main function, as the sub functions are always changing. I want to keep the default values set at whatever the sub functions contain. | My recommendation: def func(x=None, y=None): if x is None: x = 1 if y is None: y = 2 z = x + y return z def func2(a=None, b=None): if a is None: a = 3 if b is None: b = 4 c = a-b return c def finalFunc(lemons, input1=None, input2=None, input3=None, input4=None): result = func(input1, input2) + func2(input3, input4) + lemons return result It is low-tech, but it is DRY and creates no unnecessary coupling between the func, func2 and finalFunc. The simple if arg is None: idiom is familiar to most Python developers, and doesn't come with any weird surprises. I've attempted more "clever" approaches in the past, such as sharing the defaults in module level globals, or digging values out from func.__defaults__ to re-use, but the downsides always seem to outweigh the advantages. | 1 | 3 |
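A quick check of the behaviour the question asks for, using a compact variant of the answer's None-sentinel pattern (the functions are repeated here so the snippet runs on its own):

def func(x=None, y=None):
    x = 1 if x is None else x
    y = 2 if y is None else y
    return x + y

def func2(a=None, b=None):
    a = 3 if a is None else a
    b = 4 if b is None else b
    return a - b

def finalFunc(lemons, input1=None, input2=None, input3=None, input4=None):
    return func(input1, input2) + func2(input3, input4) + lemons

print(finalFunc(lemons=1))                      # 3 -> (1+2) + (3-4) + 1
print(finalFunc(lemons=1, input1=4, input4=6))  # 4 -> (4+2) + (3-6) + 1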
79,093,236 | 2024-10-16 | https://stackoverflow.com/questions/79093236/how-to-create-multiple-columns-in-output-on-when-condition-in-polars | I am trying to create 2 new columns in output on checking condition but not sure how to do that. sample df: so_df = pl.DataFrame({"low_limit": [1, 3, 0], "high_limit": [3, 4, 2], "value": [0, 5, 1]}) low_limit high_limit value i64 i64 i64 1 3 0 3 4 5 0 2 1 Code for single column creation that works: so_df.with_columns(pl.when(pl.col('value') > pl.col('high_limit')) .then(pl.lit("High")) .when((pl.col('value') < pl.col('low_limit'))) .then(pl.lit("Low")) .otherwise(pl.lit("Within Range")).alias('Flag') ) output low_limit high_limit value Flag i64 i64 i64 str 1 3 0 "Low" 3 4 5 "High" 0 2 1 "Within Range" Issue/Doubt: Creating 2 columns that doesn't work so_df.with_columns(pl.when(pl.col('value') > pl.col('high_limit')) .then(Flag = pl.lit("High"), Normality = pl.lit("Abnormal")) .when((pl.col('value') < pl.col('low_limit'))) .then(Flag = pl.lit("Low"), Normality = pl.lit("Abnormal")) .otherwise(Flag = pl.lit("Within Range"), Normality = pl.lit("Normal")) ) Desired output: low_limit high_limit value Flag Normality i64 i64 i64 str str 1 3 0 "Low" "Abnormal" 3 4 5 "High" "Abnormal" 0 2 1 "Within Range" "Normal" I know I can do another with_Columns and using when-then again but that will take double the computation. So how can I create 2 new columns in 1 go ? something like: if (condition): Flag = '', Normality = '' | You can select into a pl.struct and then extract multiple values out using .struct.field(...): df = so_df.with_columns( pl.when(pl.col("value") > pl.col("high_limit")) .then(pl.struct(Flag=pl.lit("High"), Normality=pl.lit("Abnormal"))) .when(pl.col("value") < pl.col("low_limit")) .then(pl.struct(Flag=pl.lit("Low"), Normality=pl.lit("Abnormal"))) .otherwise(pl.struct(Flag=pl.lit("Within Range"), Normality=pl.lit("Normal"))) .struct.field("Flag", "Normality") ) Output: shape: (3, 5) ┌───────────┬────────────┬───────┬──────────────┬───────────┐ │ low_limit ┆ high_limit ┆ value ┆ Flag ┆ Normality │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 ┆ str ┆ str │ ╞═══════════╪════════════╪═══════╪══════════════╪═══════════╡ │ 1 ┆ 3 ┆ 0 ┆ Low ┆ Abnormal │ │ 3 ┆ 4 ┆ 5 ┆ High ┆ Abnormal │ │ 0 ┆ 2 ┆ 1 ┆ Within Range ┆ Normal │ └───────────┴────────────┴───────┴──────────────┴───────────┘ | 2 | 1 |
79,092,715 | 2024-10-16 | https://stackoverflow.com/questions/79092715/calculate-new-column-value-based-on-max-for-group-in-pandas-dataframe | I have dataframe containing list of subjects + dates of dispensing, one subject has more Dates of Dispensing and one single Date of dispensing for one subject can occur several times. Here is example: {'Subject': {1449: 'CZ100030006', 1786: 'CZ100030006', 1958: 'CZ100030006', 1964: 'CZ100030006', 4067: 'CZ100030006', 4119: 'CZ100030006', 4143: 'CZ100030006', 4441: 'CZ100030006', 4467: 'CZ100030006', 4530: 'CZ100030006', 4532: 'CZ100030006', 4585: 'CZ100030006', 4703: 'CZ100030006', 4767: 'CZ100030006', 4850: 'CZ100030006', 4888: 'CZ100030006', 4974: 'CZ100030006', 4987: 'CZ100030006', 5108: 'CZ100030006', 5476: 'CZ100030006', 9768: 'CZ100030005', 9815: 'CZ100030005', 9822: 'CZ100030005', 9837: 'CZ100030005', 9852: 'CZ100030005', 9853: 'CZ100030005', 9889: 'CZ100030005', 9945: 'CZ100030005', 10009: 'CZ100030005', 10050: 'CZ100030005', 10052: 'CZ100030005', 10060: 'CZ100030005', 11532: 'CZ100030005', 11582: 'CZ100030005', 11640: 'CZ100030005', 11722: 'CZ100030005', 13267: 'CZ100030005', 13339: 'CZ100030005', 13354: 'CZ100030005', 13655: 'CZ100030005'}, 'Date Dispensed': {1449: datetime.date(2024, 7, 4), 1786: datetime.date(2024, 7, 4), 1958: datetime.date(2024, 6, 21), 1964: datetime.date(2024, 6, 21), 4067: datetime.date(2024, 9, 16), 4119: datetime.date(2024, 9, 16), 4143: datetime.date(2024, 7, 19), 4441: datetime.date(2024, 7, 19), 4467: datetime.date(2024, 7, 19), 4530: datetime.date(2024, 7, 19), 4532: datetime.date(2024, 9, 16), 4585: datetime.date(2024, 7, 19), 4703: datetime.date(2024, 10, 11), 4767: datetime.date(2024, 7, 19), 4850: datetime.date(2024, 7, 19), 4888: datetime.date(2024, 7, 19), 4974: datetime.date(2024, 10, 11), 4987: datetime.date(2024, 9, 16), 5108: datetime.date(2024, 10, 11), 5476: datetime.date(2024, 10, 11), 9768: datetime.date(2024, 7, 4), 9815: datetime.date(2024, 7, 4), 9822: datetime.date(2024, 8, 28), 9837: datetime.date(2024, 7, 4), 9852: datetime.date(2024, 7, 4), 9853: datetime.date(2024, 7, 4), 9889: datetime.date(2024, 8, 28), 9945: datetime.date(2024, 7, 4), 10009: datetime.date(2024, 7, 4), 10050: datetime.date(2024, 7, 4), 10052: datetime.date(2024, 8, 28), 10060: datetime.date(2024, 8, 28), 11532: datetime.date(2024, 6, 20), 11582: datetime.date(2024, 6, 5), 11640: datetime.date(2024, 6, 20), 11722: datetime.date(2024, 6, 5), 13267: datetime.date(2024, 9, 25), 13339: datetime.date(2024, 9, 25), 13354: datetime.date(2024, 9, 25), 13655: datetime.date(2024, 9, 25)}} What I want is to add to df new column where TRUE is if date of dispensing is 2nd to max FOR THAT GIVEN SUBJECT and False for all other cases. So for Subject CZ100030005, there will be True in added column if in that row Dispensing date is 28AUG2024, because this is 2nd max value of date dispensed. I am able to find max value per group maxima = df_cov.groupby('Subject')['Date Dispensed'].max(), but I am not able to find 2nd to max. And I am not able to do 2nd step at all, i.e. to make new column True/False based on whether 2nd to max value equals/not equals to current row Date Dispensed value. Can you advice please? 
| @Dmitry543 has the correct logic, but this should used groupby.transform and a comparison with itself in the function: # ensure datetime df['Date Dispensed'] = pd.to_datetime(df['Date Dispensed']) # find largest second(s) for each group df['new'] = (df.groupby('Subject')['Date Dispensed'] .transform(lambda x: x==x.drop_duplicates().nlargest(2).iloc[-1]) ) Output: Subject Date Dispensed new 1449 CZ100030006 2024-07-04 False 1786 CZ100030006 2024-07-04 False 1958 CZ100030006 2024-06-21 False 1964 CZ100030006 2024-06-21 False 4067 CZ100030006 2024-09-16 True 4119 CZ100030006 2024-09-16 True 4143 CZ100030006 2024-07-19 False 4441 CZ100030006 2024-07-19 False 4467 CZ100030006 2024-07-19 False 4530 CZ100030006 2024-07-19 False 4532 CZ100030006 2024-09-16 True 4585 CZ100030006 2024-07-19 False 4703 CZ100030006 2024-10-11 False 4767 CZ100030006 2024-07-19 False 4850 CZ100030006 2024-07-19 False 4888 CZ100030006 2024-07-19 False 4974 CZ100030006 2024-10-11 False 4987 CZ100030006 2024-09-16 True 5108 CZ100030006 2024-10-11 False 5476 CZ100030006 2024-10-11 False 9768 CZ100030005 2024-07-04 False 9815 CZ100030005 2024-07-04 False 9822 CZ100030005 2024-08-28 True 9837 CZ100030005 2024-07-04 False 9852 CZ100030005 2024-07-04 False 9853 CZ100030005 2024-07-04 False 9889 CZ100030005 2024-08-28 True 9945 CZ100030005 2024-07-04 False 10009 CZ100030005 2024-07-04 False 10050 CZ100030005 2024-07-04 False 10052 CZ100030005 2024-08-28 True 10060 CZ100030005 2024-08-28 True 11532 CZ100030005 2024-06-20 False 11582 CZ100030005 2024-06-05 False 11640 CZ100030005 2024-06-20 False 11722 CZ100030005 2024-06-05 False 13267 CZ100030005 2024-09-25 False 13339 CZ100030005 2024-09-25 False 13354 CZ100030005 2024-09-25 False 13655 CZ100030005 2024-09-25 False explaining the logic: def f(x): unique = x.drop_duplicates() print(f'unique dates: {unique.tolist()}') top_2 = unique.nlargest(2) print(f'largest two: {top_2.tolist()}') print(f'equality to second largest ({top_2.iloc[-1]}):') print(x == top_2.iloc[-1]) return x == top_2.iloc[-1] (df.groupby('Subject')['Date Dispensed'] .transform(f) ) Intermediates: unique dates: [Timestamp('2024-07-04 00:00:00'), Timestamp('2024-08-28 00:00:00'), Timestamp('2024-06-20 00:00:00'), Timestamp('2024-06-05 00:00:00'), Timestamp('2024-09-25 00:00:00')] largest two: [Timestamp('2024-09-25 00:00:00'), Timestamp('2024-08-28 00:00:00')] equality to second largest (2024-08-28 00:00:00): 9768 False 9815 False 9822 True 9837 False 9852 False 9853 False 9889 True 9945 False 10009 False 10050 False 10052 True 10060 True 11532 False 11582 False 11640 False 11722 False 13267 False 13339 False 13354 False 13655 False Name: CZ100030005, dtype: bool unique dates: [Timestamp('2024-07-04 00:00:00'), Timestamp('2024-06-21 00:00:00'), Timestamp('2024-09-16 00:00:00'), Timestamp('2024-07-19 00:00:00'), Timestamp('2024-10-11 00:00:00')] largest two: [Timestamp('2024-10-11 00:00:00'), Timestamp('2024-09-16 00:00:00')] equality to second largest (2024-09-16 00:00:00): 1449 False 1786 False 1958 False 1964 False 4067 True 4119 True 4143 False 4441 False 4467 False 4530 False 4532 True 4585 False 4703 False 4767 False 4850 False 4888 False 4974 False 4987 True 5108 False 5476 False Name: CZ100030006, dtype: bool | 2 | 0 |
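An alternative sketch that avoids the Python-level lambda: dense-rank the unique dates within each Subject, take rank 2 as the second-largest date, and map it back onto the original rows. Column names follow the question; the small sample frame is made up, and this is not part of the accepted answer. Subjects with only one unique date simply get False.

import pandas as pd

df = pd.DataFrame({
    "Subject": ["A", "A", "A", "B", "B"],
    "Date Dispensed": pd.to_datetime(
        ["2024-07-04", "2024-08-28", "2024-08-28", "2024-06-05", "2024-06-20"]
    ),
})

uniq = df.drop_duplicates(["Subject", "Date Dispensed"]).copy()
uniq["rnk"] = uniq.groupby("Subject")["Date Dispensed"].rank(method="dense", ascending=False)
second_largest = uniq.loc[uniq["rnk"] == 2].set_index("Subject")["Date Dispensed"]

df["new"] = df["Date Dispensed"].eq(df["Subject"].map(second_largest))
print(df)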
79,078,236 | 2024-10-11 | https://stackoverflow.com/questions/79078236/capturing-matplotlib-coordinates-with-mouse-clicks-using-ipywidgets-in-jupyter-n | Short question I want to capture coordinates by clicking different locations with a mouse on a Matplotlib figure inside a Jupyter Notebook. I want to use ipywidgets without using any Matplotlib magic command (like %matplotlib ipympl) to switch the backend and without using extra packages apart from Matplotlib, ipywidgets and Numpy. Detailed explanation I know how to achieve this using the ipympl package and the corresponding Jupyter magic command %matplotlib ipympl to switch the backend from inline to ipympl (see HERE). After installing ipympl, e.g. with conda install ipympl, and switching to the ipympl backend, one can follow this procedure to capture mouse click coordinates in Matplotlib. import matplotlib.pyplot as plt # Function to store mouse-click coordinates def onclick(event): x, y = event.xdata, event.ydata plt.plot(x, y, 'ro') xy.append((x, y)) # %% # Start Matplotlib interactive mode %matplotlib ipympl plt.plot([0, 1]) xy = [] # Initializes coordinates plt.connect('button_press_event', onclick) However, I find this switching back and forth between inline and ipympl backend quite confusing in a Notebook. An alternative for interactive Matplotlib plotting in Jupyter Notebook is to use the ipywidgets package. For example, with the interact command one can easily create sliders for Matplotlib plots, without the need to switcxh backend. (see HERE). from ipywidgets import interact import numpy as np import matplotlib.pyplot as plt x = np.linspace(0, 2 * np.pi) def update(w=1.0): plt.plot(np.sin(w * x)) plt.show() interact(update); However, I have not found a way to use the ipywidgets package to capture (x,y) coordinates from mouse clicks, equivalent to my above example using ipympl. | Short answer Capturing mouse clicks on a non-interactive Matplotlib figure is not possible – that's what the interactive backends are for. If you want to avoid switching back and forth between non-interactive and interactive backends, maybe try the reverse approach: Rather than trying to get interactivity from non-interactive plots, use an interactive backend by default, and disable interactivity where it is not necessary. Detailed answer What Matplotlib says Regarding interactivity, Matplotlib's documentation explicitly states (emphasis by me): To get interactive figures in the 'classic' notebook or Jupyter lab, use the ipympl backend (must be installed separately) which uses the ipywidget framework. And further down: The default backend in notebooks, the inline backend, is not [interactive]. backend_inline renders the figure once and inserts a static image into the notebook when the cell is executed. Because the images are static, they cannot be panned / zoomed, take user input, or be updated from other cells. I guess that should make the situation pretty clear. Interactivity with ipywidgets As you noted, you can interact with (static) Matplotlib figures using ipywidgets. What happens there, however, is the following: The widgets (e.g. the slider that you show) are interactive, while the figure is still not. So "interactivity" in this context means interacting with a widget that then triggers the re-rendering of a static image. This use case and setup is fundamentally different from trying to interactively capture inputs from a static image. Proposed approach What I would suggest is: Install ipympl, as it is meant to be used for your purpose. 
If you want to avoid switching back and forth between backends, set your interactive backend once for your notebook, and disable interactive features in plots where you don't need them. Following Matplotlib's "comprehensive ipympl example", the display() function can be used for this purpose. Altogether, this could look as follows in code: %matplotlib widget # Alternatively: %matplotlib ipympl import matplotlib.pyplot as plt import numpy as np # Provide some dummy data x = np.linspace(-10, 10, num=10000) y1 = x ** 2 y2 = x ** 3 # Plot `y1` in an interactive plot def on_click(event): plt.plot(event.xdata, event.ydata, "ro") plt.connect("button_press_event", on_click) plt.plot(x, y1) # Plot `y2` in a 'static' plot with plt.ioff(): plt.figure() # Create new figure for 2nd plot plt.plot(x, y2) display(plt.gcf()) # Display without interactivity The resulting notebook would look as follows: Semi-off-topic: "interactive mode" You might have noticed that display() is used in connection with ioff() for the static figure here. And although ioff() is documented as the function to, quote, disable interactive mode, it is not the one that is responsible for disabling click capturing etc. here. In this context, "interactive" refers to yet another concept, which is explained with the isinteractive() function; namely, … whether plots are updated after every plotting command. The interactive mode is mainly useful if you build plots from the command line and want to see the effect of each command while you are building the figure. In the given example, we don't want plotting commands to have immediate effects on the output, because this would mean that already the figure() and plot() calls would render the figure (with all its interaction capabilities in our original sense!), rather than only rendering it (as a static image) when we call display(). Moreover, we would get two outputs of our figure: one (interactive) plot because of the figure() and plot() calls, one (static) plot because of the display() call. To suppress the first one, we use an ioff() context. | 3 | 2 |
79,084,728 | 2024-10-14 | https://stackoverflow.com/questions/79084728/how-do-to-camera-calibration-using-charuco-board-for-opencv-4-10-0 | I am trying to do a camera calibration using OpenCV, version 4.10.0. I already got a working version for the usual checkerboard, but I can't figure out how it works with charuco. I would be grateful for any working code example. What I tried: I tried following this tutorial: https://medium.com/@ed.twomey1/using-charuco-boards-in-opencv-237d8bc9e40d It seems that essential functions like: cv.aruco.interpolateCornersCharuco and cv.aruco.interpolateCornersCharuco are missing. Even threw the documentation states an existing Python implementation, see: https://docs.opencv.org/4.10.0/d9/d6a/group__aruco.html#gadcc5dc30c9ad33dcf839e84e8638dcd1 I also tried following the official documentation for C++, see https://docs.opencv.org/4.10.0/da/d13/tutorial_aruco_calibration.html The ArucoDetector in Python doesnt have the detectBoard method. So its also impossible to follow this tutorial in total. But I guess from a hint in the documentation that the functions used by Medium are deprectated? But in no place marked as “removed”! I already got the markers detected: But then getting the object and image point fails: `object_points_t, image_points_t = charuco_board.matchImagePoints( marker_corners, marker_ids)` Any help or working code would be highly appreciated. P.S.: My output of the “detectMarkers” method seems valid. Detected corners are of type std::vector<std::vector<Point2f>. (So translated to python an Array of Arrays containing the 4 points made by 2 coordinates each.) ID’s are std::vector<int> so in Python a list of Integers. So I guess the python function “matchImagePoints” gets what it wants! The marker detection seems succesful. I already tried changing the corner array: The detectMarkers method returns a tuple. I used the folloqing code to create the desired array of shape (X, 4, 2). (X beeing the number of deteced markers. Each has 4 corners with 2 coordinates x and y.) marker_corners = np.array(marker_corners) marker_corners = np.squeeze(marker_corners) So I have the following: marker_corners = [ [[8812. 5445.] [8830. 5923.] [8344. 5932.] [8324. 5452.]], [[7172. 5469.] [7184. 5947.] [6695. 5949.] [6687. 5476.]], [[3896. 5481.] [3885. 5952.] [3396. 5951.] [3406. 5483.]], ... ] marker_ids = [ [11], [27], [19], ... ] Both, Passing the original return I get from detector.detectMarkers into the function and passing my modified array are failing. (Also not using squeeze and inputting a (X, 1, 4, 2) array fails!) I can't get further anymore. Minimum working example: Use this picture: import cv2 as cv import numpy as np image = cv.imread("charuco_board.png") im_gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY) charuco_marker_dictionary = cv.aruco.getPredefinedDictionary(cv.aruco.DICT_6X6_250) charuco_board = cv.aruco.CharucoBoard( size=(11, 8), squareLength=500, markerLength=300, dictionary=charuco_marker_dictionary ) # Initial method of this question: params = cv.aruco.DetectorParameters() detector = cv.aruco.ArucoDetector(charuco_marker_dictionary, params) marker_corners, marker_ids, rejected_candidates = detector.detectMarkers(im_gray) marker_corners = np.array(marker_corners) marker_corners = np.squeeze(marker_corners) # Using cv.aruco.CharucoDetector as pointed out in the comments. 
detector_charuco = cv.aruco.CharucoDetector(charuco_board) result = detector_charuco.detectBoard(im_gray) marker_corners_charuco, marker_ids_charuco = result[2:] # Compare the two results assert (marker_ids == marker_ids_charuco).all() # Detected ID's are identical. # assert (marker_corners == marker_corners_charuco).all() # There seems to be a difference. print(marker_corners[0:2], marker_corners_charuco[0:2]) # They seem to be in a different order. # Proof of the different order statement: def reshape_and_sort(array): array_reshaped = array.copy().reshape(-1, 2) return np.array(sorted(array_reshaped, key=lambda x: x[0]**2 + x[1]**2)) # Using geometric distance between each point and the point (0, 0). Leaving out the square. marker_corners_reshaped = reshape_and_sort(marker_corners) marker_corners_reshaped_charuco = reshape_and_sort(np.array(marker_corners_charuco)) assert (marker_corners_reshaped == marker_corners_reshaped_charuco).all() # Trying with new resutls: # Still fails! try: object_points_t, image_points_t = charuco_board.matchImagePoints( marker_corners_charuco, marker_ids_charuco ) except cv.error as err: print(err) | I managed to get the calibration done with the official non-contrib opencv code. Here is a minimal working example: The problem: Be careful when defining your detection board. A 11x8 charuco board has a different order of the aruco markers as an 8x11. Even if they look very similar when printed. The detecion will fail. from typing import NamedTuple import math import matplotlib.pyplot as plt import cv2 as cv import numpy as np class BoardDetectionResults(NamedTuple): charuco_corners: np.ndarray charuco_ids: np.ndarray aruco_corners: np.ndarray aruco_ids: np.ndarray class PointReferences(NamedTuple): object_points: np.ndarray image_points: np.ndarray class CameraCalibrationResults(NamedTuple): repError: float camMatrix: np.ndarray distcoeff: np.ndarray rvecs: np.ndarray tvecs: np.ndarray SQUARE_LENGTH = 500 MARKER_LENGHT = 300 NUMBER_OF_SQUARES_VERTICALLY = 11 NUMBER_OF_SQUARES_HORIZONTALLY = 8 charuco_marker_dictionary = cv.aruco.getPredefinedDictionary(cv.aruco.DICT_6X6_250) charuco_board = cv.aruco.CharucoBoard( size=(NUMBER_OF_SQUARES_HORIZONTALLY, NUMBER_OF_SQUARES_VERTICALLY), squareLength=SQUARE_LENGTH, markerLength=MARKER_LENGHT, dictionary=charuco_marker_dictionary ) image_name = f'ChArUco_Marker_{NUMBER_OF_SQUARES_HORIZONTALLY}x{NUMBER_OF_SQUARES_VERTICALLY}.png' charuco_board_image = charuco_board.generateImage( [i*SQUARE_LENGTH for i in (NUMBER_OF_SQUARES_HORIZONTALLY, NUMBER_OF_SQUARES_VERTICALLY)] ) cv.imwrite(image_name, charuco_board_image) def plot_results(image_of_board, original_board, detection_results, point_references): fig, axes = plt.subplots(2, 2) axes = axes.flatten() img_rgb = cv.cvtColor(img_bgr, cv.COLOR_BGR2RGB) axes[0].imshow(img_rgb) axes[0].axis("off") axes[1].imshow(img_rgb) axes[1].axis("off") axes[1].scatter( np.array(detection_results.aruco_corners).squeeze().reshape(-1, 2)[:, 0], np.array(detection_results.aruco_corners).squeeze().reshape(-1, 2)[:, 1], s=5, c="green", marker="x", ) axes[2].imshow(img_rgb) axes[2].axis("off") axes[2].scatter( detection_results.charuco_corners.squeeze()[:, 0], detection_results.charuco_corners.squeeze()[:, 1], s=20, edgecolors="red", marker="o", facecolors="none" ) axes[3].imshow(cv.cvtColor(charuco_board_image, cv.COLOR_BGR2RGB)) axes[3].scatter( point_references.object_points.squeeze()[:, 0], point_references.object_points.squeeze()[:, 1] ) fig.tight_layout() fig.savefig("test.png", dpi=900) 
plt.show() def generate_test_images(image): """Use random homograpy. -> Just to test detection. This doesn't simulate a perspective projection of one single camera! (Intrinsics change randomly.) For a "camera simulation" one would need to define fixed intrinsics and random extrinsics. Then cobine them into a projective matrix. And apply this to the Image. -> Also you need to add a random z coordinate to the image, since a projection is from 3d space into 2d space. """ h, w = image.shape[:2] src_points = np.float32([[0, 0], [w, 0], [w, h], [0, h]]) dst_points = np.float32([ [np.random.uniform(w * -0.2, w * 0.2), np.random.uniform(0, h * 0.2)], [np.random.uniform(w * 0.8, w*1.2), np.random.uniform(0, h * 0.6)], [np.random.uniform(w * 0.8, w), np.random.uniform(h * 0.8, h)], [np.random.uniform(0, w * 0.2), np.random.uniform(h * 0.8, h*1.5)] ]) homography_matrix, _ = cv.findHomography(src_points, dst_points) image_projected = cv.warpPerspective(image, homography_matrix, (w, h)) return image_projected def display_images(images): N = len(images) cols = math.ceil(math.sqrt(N)) rows = math.ceil(N / cols) for i, img in enumerate(images): plt.subplot(rows, cols, i + 1) plt.imshow(img, cmap='gray') plt.axis('off') plt.tight_layout() plt.show() # Create N test images based on the originaly created pattern. N = 10 random_images = [] charuco_board_image = cv.cvtColor(charuco_board_image, cv.COLOR_GRAY2BGR) for _ in range(N): random_images.append(generate_test_images(charuco_board_image)) display_images(random_images) total_object_points = [] total_image_points = [] for img_bgr in random_images: img_gray = cv.cvtColor(img_bgr, cv.COLOR_BGR2GRAY) charuco_detector = cv.aruco.CharucoDetector(charuco_board) detection_results = BoardDetectionResults( *charuco_detector.detectBoard(img_gray) ) point_references = PointReferences( *charuco_board.matchImagePoints( detection_results.charuco_corners, detection_results.charuco_ids ) ) plot_results( img_gray, charuco_board_image, detection_results, point_references ) total_object_points.append(point_references.object_points) total_image_points.append(point_references.image_points) calibration_results = CameraCalibrationResults( *cv.calibrateCamera( total_object_points, total_image_points, img_gray.shape, None, None ) ) """P.S.: Markers are too small in bigger pictures. They seem to not be adjustable. img_bgr_aruco = cv.aruco.drawDetectedMarkers( img_bgr.copy(), detection_results.aruco_corners ) img_bgr_charuco = cv.aruco.drawDetectedCornersCharuco( img_bgr.copy(), detection_results.charuco_corners ) """ The other possibility is to install pip install opencv-contrib-python and not pip install opencv-python. At best make sure you install it in a new enviroment without any old installations. Here the following two essential functions will be available: cv.aruco.interpolateCornersCharuco cv.aruco.calibrateCameraCharuco For a more indepth explanation see here: | 1 | 1 |
79,088,388 | 2024-10-15 | https://stackoverflow.com/questions/79088388/configuring-django-testing-in-pycharm | I have a simple django project that I'm making in pycharm. The directory structure is the following: zelda_botw_cooking_simulator |-- cooking_simulator_project |---- manage.py |---- botw_cooking_simulator # django app |------ init.py |------ logic.py |------ tests.py |------ all_ingredients.py |------ other standard django app files |---- cooking_simulator_project # django project |------ manage.py |------ other standard django project files When I run python manage.py test in the PyCharm terminal, everything works great. When I click the little triangle icon in PyCharm next to a test to run that test, however, I get one of two errors depending on how I've tried to configure the configuration for testing in PyCharm: Error 1: File ".../zelda_botw_cooking_simulator/cooking_simulator_proj/botw_cooking_simulator/tests.py", line 5, in <module> from .all_ingredients import all_ingredients ImportError: attempted relative import with no known parent package Error 2: /opt/homebrew/anaconda3/envs/zelda_botw_cooking_simulator/bin/python /Applications/PyCharm.app/Contents/plugins/python/helpers/pycharm/django_test_manage.py test botw_cooking_simulator.tests.TestAllIngredients.test_hearty_durian /Users/brendenmillstein/Dropbox (Personal)/BSM_Personal/Coding/BSM_Projects/zelda_botw_cooking_simulator/cooking_simulator_proj Testing started at 10:05 PM ... Traceback (most recent call last): File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pycharm/django_test_manage.py", line 168, in <module> utility.execute() File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pycharm/django_test_manage.py", line 142, in execute _create_command().run_from_argv(self.argv) File "/opt/homebrew/anaconda3/envs/zelda_botw_cooking_simulator/lib/python3.10/site-packages/django/core/management/commands/test.py", line 24, in run_from_argv super().run_from_argv(argv) File "/opt/homebrew/anaconda3/envs/zelda_botw_cooking_simulator/lib/python3.10/site-packages/django/core/management/base.py", line 413, in run_from_argv self.execute(*args, **cmd_options) File "/opt/homebrew/anaconda3/envs/zelda_botw_cooking_simulator/lib/python3.10/site-packages/django/core/management/base.py", line 459, in execute output = self.handle(*args, **options) File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pycharm/django_test_manage.py", line 104, in handle failures = TestRunner(test_labels, **options) File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pycharm/django_test_runner.py", line 254, in run_tests return DjangoTeamcityTestRunner(**options).run_tests(test_labels, File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pycharm/django_test_runner.py", line 156, in run_tests return super(DjangoTeamcityTestRunner, self).run_tests(test_labels, extra_tests, **kwargs) TypeError: DiscoverRunner.run_tests() takes 2 positional arguments but 3 were given Process finished with exit code 1 How can I fix this? I have tried configuring run environments and test environments in PyCharm for 2 hours now and I'm not getting it. The questions/answers here and here are close, but there's not quite enough detail for me to fix it. Exactly what do I put in each field in each window? What's the 'target', the 'working directory', do I need an environment variable? What goes in the settings part and what goes in the configuration? 
ChatGPT recommended a bunch of stuff that didn't work, and I can't seem to find a YouTube video showing the right way to do this. Thank you! **** Expanding Answer in Response to Comments/Questions **** Here is my tests.py code: from datetime import timedelta from django.test import TestCase from .all_ingredients import all_ingredients from .data_structures import MealType, MealResult, MealName, SpecialEffect, EffectLevel from .logic import ( determine_meal_type, simulate_cooking, check_if_more_than_one_effect_type, calculate_sell_price, ) # Create your tests here. class TestAllIngredients(TestCase): def test_hearty_durian(self): ingredient = all_ingredients["Hearty Durian"] self.assertEqual(ingredient.effect_type.value, "Hearty") self.assertEqual(ingredient.category.value, "Fruit") self.assertEqual(ingredient.price, 15) self.assertEqual(ingredient.base_hp, 12) self.assertEqual(ingredient.bonus_hp, 0) self.assertEqual(ingredient.base_time, timedelta(seconds=00)) self.assertEqual(ingredient.bonus_time, timedelta(seconds=00)) self.assertEqual(ingredient.potency, 4) etc. Here is a screenshot of the configuration: Thank you!! | Figured it out! The problem lay in how PyCharm was interpreting the test. The question and answer here was super helpful, but I had to do the opposite and add a working directory: Replace from django.test import TestCase with from unittest import TestCase in my tests.py file. Add the working directory to the templates. Note I had to update the autodetect template, not only the unittest template: First: select Make sure to select 'Edit Configuration Templates', don't just update one. Second: select autodetect within the Python tests menu option. Then update the working directory to be the one with your manage.py file in it. Now the little green buttons work! | 2 | 0 |
79,091,469 | 2024-10-15 | https://stackoverflow.com/questions/79091469/how-to-convert-java-serialization-data-into-json | A vendor-provided application we're maintaining stores (some of) its configuration in the form of "Java serialization data, version 5". A closer examination shows, that the actual contents is a java.util.ArrayList with several dozens of elements of the same vendor-specific type (vendor.apps.datalayer.client.navs.shared.api.Function). As we seek to deploy and configure instances of this application with Ansible, we'd like all configuration-files to be human-readable -- and subject to textual revision-control. To that end, we need to be able to decode the Java serialization binary data into a human-readable form of some kind -- preferably, JSON. That JSON also needs to be convertible back into the same Java serialization format for the application to read it. The accepted answer to an earlier question on this topic is Java-based: Read the Java serialization data using ObjectInputStream, casting it to the known type -- thus instantiating each object. Write it back out using GSON. Though usable, that approach is less than ideal for us because: it requires full knowledge of the vendor's type serialized in the data, even though we don't need to instantiate the objects; we'd rather it be a Python-script, that we could integrate into Ansible. There is a Python module for this, but custom classes seem to require providing custom Python code -- a lot of custom code -- even when all the fields of the class are themselves of standard Java-types. It is my understanding, the serialized data itself already provides all the information necessary -- one does not need to access the class-definition(s), unless one wants to invoke the methods of the class, which we don't... | The documentation on the format is available here. To explain the example more thoroughly: 00: ac ed 00 05 73 72 00 04 4c 69 73 74 69 c8 8a 15 >....sr..Listi...< 10: 40 16 ae 68 02 00 02 49 00 05 76 61 6c 75 65 4c >Z......I..valueL< 20: 00 04 6e 65 78 74 74 00 06 4c 4c 69 73 74 3b 78 >..nextt..LList;x< 30: 70 00 00 00 11 73 71 00 7e 00 00 00 00 00 13 70 >p....sq.~......p< 40: 71 00 7e 00 03 >q.~..< 4b: 0xAC 0xED 0x00 0x05 – Magic header indicating this is java serialized data 1b: 0x73 - First item that was stored is a java object (the format is recursive; 'top level' items are always going to be an object, at least in your situation). 1b: 0x72 - It's a 'normal' object. Not a proxy, for example. Specifically, it's an object of a type we haven't seen before, so the immediately following bytes will first describe the class that this object is an instance of, before we get around to giving you the actual contents of this object. ?b: 0x00 0x04 | 0x4C 0x69 0x 73 0x74 - The name of the class that this object is an instance of, is List (0x00 0x04 is the size of the UTF-8 data). Normally it'd be com.foo.pkgname.ClassName but this example has an unfortunate name (because java.util.List exists, but this example isn't j.u.List, just some random example class also named List), and is in the unnamed package. 8b: 0x69 0xc8 0x8a 0x15 0x40 0x16 0xae 0x68 – The serialVersionUID of this class. Irrelevant to you; just skip past 8 bytes here. 1b: 0x02 - the flags for it; this one has the flag 'serializable'. Irrelevant to you. 2b: 0x00 0x02 – this class contains 2 fields; they shall now be described. Described field 1 1b: 0x49 (the letter I) it is a primitive field, of type integer. 
?b: 0x00 0x05 | 0x76 0x61 0x6c 0x75 0x65 – The name of this field is 'value'. Described field 2 1b: 0x4c (the letter L) it is an object field. ?b: 0x00 0x04 | 0x6e 0x65 0x78 0x74 – The name of this field is 'next'. 1b: 0x74 - you now get the name of the type of the field, in the form of actual data; it's always a string. So now we get to see how strings are encoded: It starts with the constant TC_STRING, which is 0x74. ?b: 0x00 0x06 | 0x4c 0x4c 0x69 0x73 0x74 0x3b – The string LList;. This is a java JVM-styled typename. It usually looks like e.g. Ljava/lang/Number; for the type java.lang.Number (starts with L, packages divided by slashes, ends in a semicolon); here it's just List again because unnamed package. 1b: 0x78 - the constant TC_ENDBLOCKDATA - there are no annotations for this class. In your case I bet there never is. 1b: 0x70 - the constant TC_NULL - the superclass is now described, but this class's superclass is j.l.Object and as a special shortcut that is never shown (j.l.Object doesn't itself have a superclass and has no fields so there'd be no point). We have now described the class. Note that almost everything also stores itself for future reference so that the serialized data doesn't have to keep repeating this stuff over and over, which is represented by the zero-length item newHandle; you are supposed to have a big array of sorts, storing everything in it, and incrementing the counter every time you see newHandle, storing the thing you just read / are about to read. So such a description is only provided once, next time you just get the handle. The actual data now follows. Each value is just piled on one after the other; you need to use the described fields to track along so you know what you are looking at. Value of field 1 4b: 0x00 0x00 0x00 0x11 - the first field was of type I (integer), as you may recall. ints in java are 4 bytes long, and here it is; the decimal value 17 in hexadecimal is 11; this is because the list was made with list1.value = 17;. Value of field 2 1b: 0x73 – the constant TC_OBJECT. Remember up top I said it's a recursive format? The whole thing we are looking at was an object being stored. One of the fields in this object is referring to another object, so we now get to that, and it starts with 0x73 for the same reason. This object is also of type List (it's the list1.next = list2; field). We'll see that handle stuff soon. We need to go through the rigamarole of describing the class this object is an instance of again. 1b: 0x71 – Last time we saw 0x72 here. This time we get 0x71 - a normal object, but it is an instance of a type we've seen before so it won't be described again. Instead you just get the handle. 2b: 0x00 0x7e 0x00 0x00 – The handle. This is the ID that is referring to the definition of the List class we saw earlier. Handles start counting from 0x007E0000 (I have no idea why, but, spec says so), and it's the first newHandle-d thing we saw in this stream. At any rate, we're now done with the description by using a handle so we get straight to the data which you can't unpack without knowing the structure. We do: It is an int, followed by an object, so, first.. Data for field 1 4b: 0x00 0x00 0x00 0x13 - list2.value = 19 (19 dec is 0x13 in hex). Data for field 2 it's null, so, we just see 0x70 for the null ref. The call out.writeObject(list1) is now completed. 
We now see very few bytes that all represent the result of out.writeObject(list2): The second object It's just: 5b: 0x71 0x00 0x7e 0x00 0x03 0x71 is reference to an object we saw before, and its handle is the next 4 bytes. It's referring to the actual object that variable list2 is pointing at, which we already stored before, and it's the 4th thing that newHandle-d, hence, the handle for it is 0x007E0003. You have to remember that java allows circular assignment, for example imagine I wrote list1.next = list1;, and if the storage format of serialization couldn't deal with references, trying to serialize list1 would crash with a stack overflow. Most JSON serializers really do crash on that, but java's does not. Presumably, in your situation, this circularity business won't bother you. Now, to the answer! I'm not aware of any library that can read these, and due to java serialization being able to store a wide variety of exotica, it's pretty much impossible to parse anything that has been serialized by a JVM without using that JVM and the classes that were in the classpath at the time it was serialized. But, if we assume no such exotica is going to happen (and that seems somewhat fair to do, if indeed the java code you are dealing with is just serializing fairly simple data objects), then.. it's not too hard to write some code that 'JSON-izes' and 'de-JSON-izes' java serially stored data. You'd have to write that yourself. But, there's good news here. The serialization format does include the names of fields as well as their types, though, the types are stated in terms of java. Still, that means, if the following things are ALL true: No exotica such as proxies or functionrefs or whatever are serialized, just 'plain' objects of 'plain' classes, These objects store only data written in terms of java primitives (booleans, integers (char/byte/short/int/long), and floats (float/double)), java strings, well known core java types (ArrayList, HashMap, that sort of thing), and objects of instances that adhere to this rule, then and only then could you write a converter that can convert java serialization formatted data to JSON and back again without the need of a JVM and without the need of those class definitions in the first place. It's not an itch I feel a need to scratch, but sounds like fun. Someone with some skill at crafting parsers for binary data should have little trouble making that in a person-day or 3. | 1 | 3 |
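To make the walkthrough concrete, here is a small Python proof-of-concept that decodes only the subset of the format used in the example stream above (TC_OBJECT, TC_CLASSDESC, TC_STRING, TC_REFERENCE, TC_NULL, int and object fields, handles counted from 0x7E0000). It is a sketch of the byte layout described in the answer, not a general decoder: a real vendor stream (ArrayList, arrays, custom writeObject data, enums) needs the full grammar, an existing library, or a JVM.

import json
import struct

TC_NULL, TC_REFERENCE, TC_CLASSDESC, TC_OBJECT, TC_STRING, TC_ENDBLOCKDATA = (
    0x70, 0x71, 0x72, 0x73, 0x74, 0x78,
)
BASE_HANDLE = 0x7E0000

class Reader:
    def __init__(self, data):
        self.data = data
        self.pos = 0
        self.handles = []                     # everything that "newHandle"s goes in here

    def take(self, n):
        chunk = self.data[self.pos:self.pos + n]
        self.pos += n
        return chunk

    def u1(self):
        return self.take(1)[0]

    def utf(self):
        length = struct.unpack(">H", self.take(2))[0]
        return self.take(length).decode("utf-8")

    def class_desc(self):
        tag = self.u1()
        if tag == TC_NULL:
            return None                       # no superclass
        if tag == TC_REFERENCE:
            handle = struct.unpack(">I", self.take(4))[0]
            return self.handles[handle - BASE_HANDLE]
        assert tag == TC_CLASSDESC
        desc = {"name": self.utf(), "fields": []}
        self.take(8)                          # serialVersionUID, not needed here
        self.handles.append(desc)             # newHandle
        self.take(1)                          # flags (assume plain "serializable")
        n_fields = struct.unpack(">H", self.take(2))[0]
        for _ in range(n_fields):
            type_code = chr(self.u1())
            name = self.utf()
            if type_code in "L[":
                self.content()                # the field's type name, stored as a string
            desc["fields"].append((name, type_code))
        assert self.u1() == TC_ENDBLOCKDATA   # no class annotations
        desc["super"] = self.class_desc()
        return desc

    def content(self):
        tag = self.u1()
        if tag == TC_NULL:
            return None
        if tag == TC_REFERENCE:
            handle = struct.unpack(">I", self.take(4))[0]
            return self.handles[handle - BASE_HANDLE]
        if tag == TC_STRING:
            s = self.utf()
            self.handles.append(s)            # newHandle
            return s
        assert tag == TC_OBJECT
        desc = self.class_desc()
        obj = {"@class": desc["name"]}
        self.handles.append(obj)              # newHandle (before the field values)
        for name, type_code in desc["fields"]:
            if type_code == "I":
                obj[name] = struct.unpack(">i", self.take(4))[0]
            elif type_code == "L":
                obj[name] = self.content()
            else:
                raise NotImplementedError(type_code)
        return obj

# The example stream from the hex dump above (two writeObject calls).
stream = bytes.fromhex(
    "aced0005737200044c69737469c88a15"
    "4016ae680200024900057661"
    "6c75654c00046e6578747400064c4c6973743b78"
    "70000000117371007e0000000000137071007e0003"
)
r = Reader(stream)
assert r.take(4) == b"\xac\xed\x00\x05"       # magic + version
print(json.dumps(r.content(), indent=2))      # first writeObject(list1)
print(r.content())                            # second writeObject(list2): a back-reference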
79,084,176 | 2024-10-13 | https://stackoverflow.com/questions/79084176/polars-python-api-read-json-fails-to-parse-date | I want to read in a polars dataframe from a json string containing dates in the standard iso-format "yyyy-mm-dd". When I try to read the string in and set the dtype of the date column witheither schema or schema_override this results in only NULL values. MRE from datetime import datetime, timedelta from io import StringIO import polars as pl # Generate a list of dates start_date = datetime.today() dates = [start_date + timedelta(days=i) for i in range(100)] date_strings = [date.strftime("%Y-%m-%d") for date in dates] # Create a Polars DataFrame df = pl.DataFrame({"dates": date_strings}) df_reread = pl.read_json( StringIO(df.write_json()), schema_overrides={"dates": pl.Date}, ) output of print(df_reread) Error shape: (100, 1) ┌───────┐ │ dates │ │ --- │ │ date │ ╞═══════╡ │ null │ │ null │ │ null │ │ null │ │ null │ │ … │ │ null │ │ null │ │ null │ │ null │ │ null │ └───────┘ Question Is there anyway to correctly read in the Date dtype from a json string? | After having a bit of a play around, it looks like unfortunately dates being read from a JSON file have a bit of a quirk. It seems to me that currently they must be written in days since the unix epoch (which is how Polars internally represents dates) for things to work as you expect. I have raised this feature request on their github to hopefully get that improved. In the mean time, df = ( pl.DataFrame({"dates": "2024-01-01"}) # add this line below .select(pl.col("dates").cast(pl.Date).dt.epoch("d")) ) df_reread = pl.read_json( df.write_json().encode(), schema_overrides={"dates": pl.Date}, ) print(df_reread) # shape: (1, 1) # ┌────────────┐ # │ dates │ # │ --- │ # │ date │ # ╞════════════╡ # │ 2024-01-01 │ # └────────────┘ or do as you say with df_reread.with_columns(pl.col("dates").cast(pl.Date) | 2 | 2 |
79,078,422 | 2024-10-11 | https://stackoverflow.com/questions/79078422/looking-for-a-function-equivalent-to-resizerowtoindex-for-qtableview | I am working with a QTableView, and I'd like for the row to change its height to accommodate the content of the selected index. The function resizeRowToContents is not really what I am looking for. If I click on a cell [A] that doesn't need to change its height to display everything but a cell [B] in the row needs to, the row will increase its height to accommodate [B]. Using sizeHintForRow would result in the same behaviour as using resizeRowToContents The function sizeHintForIndex doesn't return the correct value for some reason. In the C++ source of Qt there is a private function heightHintForIndex that would be exactly what I need (because it's used by resizeRowToContents). Is there a function that I am missing, or does anyone know a way to do it? I am using my own QStyledItemDelegate for those cells. | Basically, I wrote the function heightHintForIndex in Python. def on_selection_changed(self, selected: QModelIndex , deselected: QModelIndex): self.setRowHeight(deselected.row(), 1) editor = self.indexWidget(selected) hint = 0 if (editor is not None) and (self.isPersistentEditorOpen(selected)): hint = max(hint, editor.sizeHint().height()) min_ = editor.minimumSize().height() max_ = editor.maximumSize().height() hint = max(min_, min(max_, hint)) option = QStyleOptionViewItem() self.initViewItemOption(option) option.rect.setY(self.rowViewportPosition(selected.row())) height = self.rowHeight(selected.row()) if height == 0: height = 1 option.rect.setHeight(self.rowHeight(selected.row())) option.rect.setX(self.columnViewportPosition(selected.column())) option.rect.setWidth(self.columnWidth(selected.column())) if self.showGrid(): option.rect.setWidth(option.rect.width() - 1) new_height = max(hint, self.itemDelegateForIndex(selected).sizeHint(option, selected).height()) self.setRowHeight(selected.row(), new_height) | 2 | 0 |
79,091,436 | 2024-10-15 | https://stackoverflow.com/questions/79091436/using-unpacked-typeddict-for-specifying-function-arguments | I was wondering whether following was possible with TypedDict and Unpack, inspired by PEP 692... Regular way of using TypedDict would be: class Config(TypedDict): a: str b: int def inference(name, **config: Unpack[Config]): c = config['a']+str(config["b"]) config: Config = {"a": "1", "b": 2} inference("example", **config) # or I could get argument hinting for each one from Config However, I would be really interested if I could somehow unpack those keys to make it behave like they were directly introduced as variables into function signature: def inference(name, **?: Unpack[Config]): c = a+str(b) so to mimic this explicit approach: def inference(name, a: str, b: int): c = a + str(b) My motivation is I want to still leave code uncluttered with field or key access (verbose_name.another_verbose_name or verbose_name["another_verbose_name"], where latter has no type hinting in VSC). Also, this way I could define TypedDict for Config once, and when needing some new parameter for my inference, i could just straight add it to Config definition, without changing inference signature. Currently my workaround is explicit definition of arguments in inference signature and still invoking this with inference(name, **config). I still prefer this approach more than Dataclass, since I can skip asdict(dataclass_config) doing it so, which feels very unnecessary, especially when loading config from yaml or similar... Maybe my approach is all wrong and not suiting to the best practices... Also would love your input on this. Im just starting using more advanced Python topics... I can imagine that adding this functionality via PEP would require more than just typing work, so I would like to know if there is some common established solution. | No, there is no python syntax that will allow you to "unpack the dictionary to make it behave like they were directly introduced as variables into function signature". You could technically do it at runtime with something like this: def inference(name, **config: Unpack[Config]): vars().update(config) dosomething(a, b) but that's very hacky, and the typechecker would not recognize subsequent uses of a or b (even though they will work at runtime). The typechecker will only recognize a variable if it is declared or passed as argument. I understand you want to declutter your code, but as @Barmar's comment suggests, skipping variable declarations may not be the best way to do that if you are actually using those variables in your function. ** and Unpack are mostly useful if those arguments are passed down to another function | 1 | 2 |
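For reference, a small sketch of the explicit-signature workaround the question already mentions (and which the answer leans towards): the TypedDict documents the config shape, the signature repeats it, and the call site unpacks the dict — recent type checkers validate the ** unpacking against the parameters.
from typing import TypedDict

class Config(TypedDict):
    a: str
    b: int

def inference(name: str, *, a: str, b: int) -> str:
    return a + str(b)

config: Config = {"a": "1", "b": 2}
print(inference("example", **config))  # prints "12"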
79,090,251 | 2024-10-15 | https://stackoverflow.com/questions/79090251/confused-in-how-to-do-web-scraping-for-the-first-time | There is a website like bonbast.com and I'm trying to get values but I'm just confused about how to do it. Values should be something like the output of "US Dollar" and "Euro". My code: import requests from bs4 import BeautifulSoup r = requests.get("https://bonbast.com/") soup = BeautifulSoup(r.content, "html.parser") usd1 = soup.find(id="usd1") print(usd1) same_val = soup.find_all(class_="same_val") print(same_val) the output is blank and not showing the number of the currency. HTML page https://i.imgur.com/NrTaUY8.png | From the Reference of this Repo - https://github.com/drstreet/Currency-Exchange-Rates-Scraper/blob/master/scrapper.py Focusing only on the necessary parts to retrieve exchange rates USD and Euro. import requests from bs4 import BeautifulSoup import re def get_currency_data(): url = 'https://www.bonbast.com' session = requests.Session() headers = { 'User-Agent': 'Mozilla/5.0', 'Referer': url, 'Accept': 'application/json, text/javascript, */*; q=0.01', } response = session.get(url, headers=headers) soup = BeautifulSoup(response.text, 'html.parser') # Find the script containing the 'param' value scripts = soup.find_all('script') param = None for script in scripts: if '$.post' in script.text: match = re.search(r'param:\s*"([^"]+)"', script.text) if match: param = match.group(1) break if param: post_url = url + '/json' post_data = {'param': param} # Make POST request to fetch the JSON data response = session.post(post_url, headers=headers, data=post_data) data = response.json() # Extract relevant currency data (USD and Euro) currencies = {} for key, value in data.items(): if key == 'usd1': # USD sell price currencies['USD_sell'] = value elif key == 'usd2': # USD buy price currencies['USD_buy'] = value elif key == 'eur1': # Euro sell price currencies['EUR_sell'] = value elif key == 'eur2': # Euro buy price currencies['EUR_buy'] = value return currencies else: return {'error': 'Param not found'} # Execute the function and print the result result = get_currency_data() print(result) output {'EUR_sell': '68500', 'EUR_buy': '68400', 'USD_sell': '62800', 'USD_buy': '62700'} | 1 | 2 |
79,089,003 | 2024-10-15 | https://stackoverflow.com/questions/79089003/no-solution-found-or-tools-vrp | I am having an issue with no solution found for an instance of the OR-Tools VRP. I am new to OR-Tools. After consulting the docs, my understanding if no first solution is found then no solution at all will be found. To help this, I should somehow loosen constraints of finding the first solution. However, I have tried making the vehicle-level distance constraints massive without any luck. Here is the code I am using def solve_or_tools_routing(self, data): """ Solves an OR tools routing problem. Documentation here: https://developers.google.com/optimization/routing/pickup_delivery """ def print_solution(data, manager, routing, solution): """HELPER FUNC. Prints solution on console.""" print(f"Objective: {solution.ObjectiveValue()}") print(f"data in print solution {data}") total_distance = 0 for vehicle_id in range(data["num_vehicles"]): index = routing.Start(vehicle_id) plan_output = f"Route for vehicle {vehicle_id}:\n" route_distance = 0 while not routing.IsEnd(index): plan_output += f" {manager.IndexToNode(index)} -> " previous_index = index index = solution.Value(routing.NextVar(index)) distance = routing.GetArcCostForVehicle(previous_index, index, vehicle_id) route_distance += distance plan_output += f"{manager.IndexToNode(index)}\n" plan_output += f"Distance of the route: {route_distance}m\n" print(plan_output) total_distance += route_distance print(f"Total Distance of all routes: {total_distance}m") # Create the routing index manager. manager = pywrapcp.RoutingIndexManager(len(data["distance_matrix"]), data["num_vehicles"], data["depot"]) # Create Routing Model. routing = pywrapcp.RoutingModel(manager) # Define cost of each arc. def distance_callback(from_index, to_index): """Returns the distance between the two nodes.""" from_node = manager.IndexToNode(from_index) to_node = manager.IndexToNode(to_index) return data["distance_matrix"][from_node][to_node] transit_callback_index = routing.RegisterTransitCallback(distance_callback) routing.SetArcCostEvaluatorOfAllVehicles(transit_callback_index) # Add Distance constraint. dimension_name = "Distance" routing.AddDimension( transit_callback_index, 1000, # no slack 50000, # max vehicle distance travelled True, # start cumul to zero dimension_name, ) distance_dimension = routing.GetDimensionOrDie(dimension_name) distance_dimension.SetGlobalSpanCostCoefficient(100) # Define Transportation Requests. print(f'PRINTING NUMBER OF PICKUPS/DELIVERIES: {(data["pickups_deliveries"])}') for request in data["pickups_deliveries"]: pickup_index = manager.NodeToIndex(request[0]) delivery_index = manager.NodeToIndex(request[1]) routing.AddPickupAndDelivery(pickup_index, delivery_index) routing.solver().Add( routing.VehicleVar(pickup_index) == routing.VehicleVar(delivery_index) ) routing.solver().Add( distance_dimension.CumulVar(pickup_index) <= distance_dimension.CumulVar(delivery_index) ) # Setting first solution heuristic. search_parameters = pywrapcp.DefaultRoutingSearchParameters() search_parameters.first_solution_strategy = ( routing_enums_pb2.FirstSolutionStrategy.PARALLEL_CHEAPEST_INSERTION ) # Solve the problem. solution = routing.SolveWithParameters(search_parameters) # Print solution on console. 
if solution: print_solution(data, manager, routing, solution) else: print(f'no solution found...')``` An example data dict that I can't find a solution for would be: {'distance_matrix': [[0, 4877, 243, 2774, 1998, 10539, 8731, 4408, 2324, 2219, 541, 2349, 2371, 2114, 5102, 2258, 2404, 2616, 2703, 9823, 7404, 1784, 3341, 12120, 1614, 2924, 7042], [4635, 0, 4684, 4252, 6362, 10603, 7595, 755, 3153, 3050, 5177, 2812, 3008, 2814, 4454, 2864, 2717, 2536, 2623, 10770, 6615, 4557, 4241, 12184, 3759, 2205, 5961], [252, 4782, 0, 2531, 2250, 10308, 8488, 4168, 2340, 2235, 584, 2255, 2279, 2020, 4859, 2097, 2244, 2385, 2472, 9801, 7161, 2032, 3098, 11889, 1371, 2681, 6799], [2649, 4239, 2397, 0, 4064, 8234, 7752, 3521, 4712, 4607, 2309, 4275, 4577, 3869, 2615, 3762, 3618, 3473, 3560, 7826, 4801, 4314, 857, 9815, 1599, 2034, 6063], [2350, 6602, 2181, 3991, 0, 11036, 10669, 6134, 3960, 3913, 1798, 4075, 4097, 3840, 5970, 3983, 4130, 4342, 4429, 9974, 8066, 2473, 4258, 12617, 3552, 4863, 8980], [10198, 10465, 9945, 8114, 10905, 0, 7581, 10465, 12261, 12155, 9830, 11823, 12126, 11418, 6172, 11400, 11256, 11144, 11231, 2189, 4110, 11835, 7402, 1607, 9399, 9270, 6013], [8735, 7626, 8639, 7751, 10733, 7539, 0, 7750, 10516, 10413, 9012, 10094, 10371, 9690, 6008, 9439, 9293, 9145, 9232, 9407, 5693, 10287, 7739, 7721, 7529, 6599, 1706], [4080, 755, 4129, 3534, 5807, 10603, 7719, 0, 2783, 2680, 4622, 2441, 2637, 2204, 4578, 2163, 2016, 1818, 1906, 10511, 6739, 4002, 3523, 12184, 3041, 1487, 6085], [2215, 3155, 2331, 4857, 3899, 12634, 10487, 2799, 0, 105, 2756, 450, 145, 856, 7185, 1107, 1253, 1401, 1488, 12038, 9487, 1907, 5424, 14215, 3276, 3987, 8853], [2109, 3050, 2226, 4752, 3836, 12529, 10383, 2695, 105, 0, 2651, 345, 251, 750, 7080, 1002, 1148, 1295, 1383, 11933, 9382, 1868, 5319, 14110, 3171, 3881, 8748], [1109, 5361, 937, 2239, 1798, 9997, 9269, 4893, 2808, 2703, 0, 2833, 2856, 2598, 4567, 2742, 2889, 3101, 3188, 9281, 6869, 2004, 2807, 11578, 2154, 3462, 7580], [2111, 2812, 2160, 4409, 3838, 12186, 10067, 2456, 450, 346, 2653, 0, 304, 405, 6737, 657, 803, 950, 1038, 11772, 9039, 2033, 4976, 13767, 2826, 3536, 8433], [2187, 3009, 2236, 4712, 3914, 12489, 10341, 2653, 145, 251, 2729, 304, 0, 710, 7040, 961, 1107, 1255, 1342, 12011, 9342, 2053, 5279, 14070, 3130, 3841, 8707], [1876, 2817, 1925, 4003, 3603, 11780, 9661, 2349, 856, 750, 2418, 405, 710, 0, 6331, 251, 397, 750, 837, 11366, 8633, 1797, 4570, 13361, 2420, 3130, 8027], [5062, 4491, 4809, 2616, 6046, 6280, 6041, 4615, 7124, 7019, 4720, 6687, 6989, 6281, 0, 6264, 6119, 6008, 6095, 6658, 2351, 6725, 1760, 7861, 4148, 3427, 4352], [1969, 2864, 2018, 3752, 3696, 11529, 9410, 2308, 1107, 1002, 2511, 657, 962, 251, 6080, 0, 146, 709, 796, 11115, 8382, 1891, 4319, 13110, 2169, 2879, 7776], [2116, 2717, 2165, 3608, 3843, 11385, 9266, 2161, 1253, 1148, 2657, 803, 1107, 397, 5936, 146, 0, 562, 649, 10971, 8238, 2037, 4175, 12966, 2025, 2735, 7632], [2318, 2535, 2367, 3462, 4045, 11239, 9119, 1818, 1400, 1295, 2859, 950, 1254, 594, 5790, 502, 356, 0, 87, 10825, 8092, 2239, 4029, 12820, 1877, 2588, 7485], [2405, 2623, 2454, 3549, 4132, 11326, 9206, 1905, 1487, 1383, 2947, 1038, 1342, 681, 5877, 589, 443, 87, 0, 10912, 8179, 2327, 4116, 12907, 1964, 2675, 7572], [9966, 10663, 9713, 7944, 10136, 2200, 9406, 10564, 12046, 11940, 9330, 11608, 11911, 11202, 6659, 11185, 11041, 10929, 11016, 0, 4753, 11335, 7485, 2816, 9184, 9369, 7690], [7364, 6716, 7111, 4856, 8091, 4118, 5693, 6840, 9426, 9321, 7017, 8989, 9291, 8583, 2351, 8529, 8384, 8236, 8323, 4744, 0, 
9021, 4062, 5699, 6373, 5652, 3977], [1984, 4611, 2227, 4301, 2443, 12059, 10434, 4143, 1936, 1887, 2061, 2084, 2082, 1849, 6629, 1993, 2139, 2351, 2438, 11343, 8931, 0, 4868, 13640, 3183, 3896, 8799], [3301, 4242, 3048, 858, 4287, 7532, 7733, 3524, 5364, 5258, 2960, 4927, 5229, 4521, 1760, 4503, 4359, 4247, 4334, 7292, 4062, 4965, 0, 9113, 2447, 2037, 6044], [11687, 11954, 11434, 9603, 12394, 1488, 7664, 11954, 13750, 13644, 11319, 13312, 13615, 12907, 7661, 12889, 12745, 12633, 12720, 2919, 5599, 13324, 8891, 0, 10888, 10759, 7502], [1437, 3544, 1393, 1600, 3436, 9545, 7502, 2826, 3263, 3158, 1974, 2820, 3125, 2414, 4096, 2163, 2019, 1874, 1961, 9131, 6331, 2987, 2335, 11126, 0, 1339, 5813], [2755, 2205, 2711, 2047, 4754, 9377, 6599, 1487, 3978, 3875, 3235, 3535, 3832, 3131, 3391, 2879, 2734, 2586, 2673, 9185, 5552, 3705, 2036, 10958, 1555, 0, 4965], [7028, 5991, 6932, 6044, 9026, 6015, 1717, 6115, 8882, 8779, 7305, 8460, 8736, 8056, 4302, 7805, 7659, 7511, 7598, 7701, 3987, 8652, 6033, 7596, 5822, 4965, 0]], 'pickups_deliveries': [[0, 1], [2, 1], [3, 4], [5, 4], [0, 6], [7, 6], [0, 8], [5, 8], [0, 9], [7, 9], [10, 11], [0, 11], [0, 12], [3, 12], [2, 13], [14, 13], [0, 13], [10, 13], [5, 15], [3, 15], [0, 16], [7, 16], [10, 16], [3, 17], [0, 17], [3, 18], [14, 18], [10, 18], [0, 19], [0, 20], [3, 20], [5, 21], [10, 21], [0, 22], [2, 22], [5, 23], [14, 23], [0, 24], [7, 24], [10, 25], [0, 25]], 'num_vehicles': 10, 'depot': 26} Tried setting num vehicles differently. Tried setting distance limit per vehicle much higher. Tried different depots, different distance matrixes. Tried lower amounts of pickups and deliveries. | you cannot have multiple pickups or deliveries on the same node. Only 1 action per node. | 2 | 1 |
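To make the constraint above concrete: in the question's pickups_deliveries, node 0 (among others) is used in many different pickup/delivery pairs, and each location node may take part in at most one pair, which is why no first solution can be found. A small diagnostic sketch over an abbreviated copy of that list; the usual remedy is to duplicate such locations as extra rows/columns in the distance matrix so every node index appears in only one pair.
from collections import Counter

# First few pairs copied from the question's data dict (abbreviated)
pickups_deliveries = [[0, 1], [2, 1], [3, 4], [5, 4], [0, 6], [7, 6], [0, 8], [5, 8]]

# Count how often each node index is used across all pickup/delivery pairs
usage = Counter(node for pair in pickups_deliveries for node in pair)
reused = {node: count for node, count in usage.items() if count > 1}
print(reused)  # every node listed here takes part in more than one pair and needs its own copy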
79,089,082 | 2024-10-15 | https://stackoverflow.com/questions/79089082/merging-tables-using-python | I want to merge some 'n' tables in Python. Each table has 2 columns in it. Currently, I'm trying with these 3 tables (table12, table13, table23). Context: I have certain image files, each image has some objects inside it which are labelled as 'A', 'B', 'C', etc. If I'm comparing 2 images, I get an equivalence table which tells which object in image1 is similar to which object of image2. (for example, the first row of table12 tells that object 'A' of image1 and object 'P' of image2 are the same) There are chances that an object in one image doesn't have an equivalent object in the other image. I want to create a single table, making use of all these tables. Number of columns in the final merged table = number of images. That means, under each column, its objects will be listed, and in the same row we can find which objects are similar. I tried pandas merge (merging 2 tables, then merging the third one onto the result of the first merge), but it isn't giving the result I want. If you want to test, here is the code: import pandas as pd import numpy as np table12 = pd.DataFrame({ 'img1': ['A', 'B', 'C', 'D', np.NaN, np.NaN], 'img2': ['P', np.NaN, 'Q', np.NaN, 'R', 'S'] }) table13 = pd.DataFrame({ 'img1': ['A', 'B', 'C', 'D', np.NaN, np.NaN], 'img3': [np.NaN, 'X', np.NaN, 'Y', 'Z', 'W'] }) table23 = pd.DataFrame({ 'img2': ['P', np.NaN, 'Q', np.NaN, 'R', np.NaN, 'S'], 'img3': [np.NaN, 'X', np.NaN, 'Y', 'Z', 'W', np.NaN] }) merged_12_13 = pd.merge(table12, table13, on='img1', how='outer') final_output = pd.merge(merged_12_13, table23, on=['img2', 'img3'], how='outer') | Your logic is unclear, but assuming that the tables are to be kept in their original order and are top aligned, you should probably concat the individual columns: pd.concat([table12[['img1']], table23[['img2']], table13[['img3']]], axis=1) Or, taking all columns as input and removing the duplicates: pd.concat([table12, table13, table23], axis=1).T.groupby(level=0).first().T Output: img1 img2 img3 0 A P NaN 1 B NaN X 2 C Q NaN 3 D NaN Y 4 NaN R Z 5 NaN S W 6 NaN NaN NaN | 1 | 2 |
79,088,735 | 2024-10-15 | https://stackoverflow.com/questions/79088735/i-would-like-to-know-how-to-avoid-unnecessary-duplication-when-comparing-two-dat | import pandas df1 = pandas.DataFrame( { 'code': ['001', '001'], 'name': ['test1', 'test1'], 'date': ['2024-01-01', '2024-01-01'], 'value1': [1, 2], 'value2': [1, 2], 'sum': [2, 4] } ) df2 = pandas.DataFrame( { 'code': ['001', '001', '001', '002'], 'name': ['test1', 'test1', 'test1', 'test2'], 'date': ['2024-01-01', '2024-01-01', '2024-01-01', '2024-02-01'], 'value1': [1, 2, 3, 4], 'value2': [1, 2, 3, 4], 'sum': [2, 4, 6, 8] } ) result = pandas.merge(df1, df2, on=['code', 'name', 'date'], how='outer', indicator=True) print(result[['code', 'name', 'date', 'sum_x', 'sum_y', '_merge']]) When the above data is printed, it appears as shown below. code name date sum_x sum_y _merge 0 001 test1 2024-01-01 2.0 2 both 1 001 test1 2024-01-01 2.0 4 both 2 001 test1 2024-01-01 2.0 6 both 3 001 test1 2024-01-01 4.0 2 both 4 001 test1 2024-01-01 4.0 4 both 5 001 test1 2024-01-01 4.0 6 both 6 002 test2 2024-02-01 NaN 8 right_only Two data have been added to df2. I looked into it through many methods and searches, but the closest thing I tried was to use merge to create unique values and compare them. But I couldn't create the output value I wanted. My goal is to compare 1:1 what was created and what was lost in df1 and df2. I kept trying to remove row values that were unnecessarily created during merge, but it didn't work. The material I want to create is as below. code name date sum_x sum_y _merge 0 001 test1 2024-01-01 2 2 both 4 001 test1 2024-01-01 4 4 both 5 001 test1 2024-01-01 NaN 6 right_only 6 002 test2 2024-02-01 NaN 8 right_only I tried to delete unnecessary rows using deduplication and qurey. I also tried creating eigenvalues and comparing them, but I couldn't create what I wanted. I tried looking up information using Stack Overflow and gpt. | It looks like you should include the sum columns as keys, optionally renaming them before the merge: result = pandas.merge(df1.rename(columns={'sum': 'sum_x'}) .drop(columns=['value1', 'value2']), df2.rename(columns={'sum': 'sum_y'}) .drop(columns=['value1', 'value2']), left_on=['code', 'name', 'date', 'sum_x'], right_on=['code', 'name', 'date', 'sum_y'], how='outer', indicator=True) Output: code name date sum_x sum_y _merge 0 001 test1 2024-01-01 2.0 2 both 1 001 test1 2024-01-01 4.0 4 both 2 001 test1 2024-01-01 NaN 6 right_only 3 002 test2 2024-02-01 NaN 8 right_only | 1 | 2 |
79,085,795 | 2024-10-14 | https://stackoverflow.com/questions/79085795/batched-matrix-multiplication-with-jax-on-gpu-faster-with-larger-matrices | I'm trying to perform batched matrix multiplication with JAX on GPU, and noticed that it is ~3x faster to multiply shapes (1000, 1000, 3, 35) @ (1000, 1000, 35, 1) than it is to multiply (1000, 1000, 3, 25) @ (1000, 1000, 25, 1) with f64 and ~5x with f32. What explains this difference, considering that on cpu neither JAX or NumPy show this behaviour, and on GPU CuPy doesn't show this behaviour? I'm running this with JAX: 0.4.32 on an NVIDIA RTX A5000 (and get similar results on a Tesla T4), code to reproduce: import numpy as np import cupy as cp from cupyx.profiler import benchmark from jax import config config.update("jax_enable_x64", True) import jax import jax.numpy as jnp import matplotlib.pyplot as plt rng = np.random.default_rng() x = np.arange(5, 55, 5) GPU timings: dtype = cp.float64 timings_cp = [] for i in range(5, 55, 5): a = cp.array(rng.random((1000, 1000, 3, i)), dtype=dtype) b = cp.array(rng.random((1000, 1000, i, 1)), dtype=dtype) timings_cp.append(benchmark(lambda a, b: a@b, (a, b), n_repeat=10, n_warmup=10)) dtype = jnp.float64 timings_jax_gpu = [] with jax.default_device(jax.devices('gpu')[0]): for i in range(5, 55, 5): a = jnp.array(rng.random((1000, 1000, 3, i)), dtype=dtype) b = jnp.array(rng.random((1000, 1000, i, 1)), dtype=dtype) func = jax.jit(lambda a, b: a@b) timings_jax_gpu.append(benchmark(lambda a, b: func(a, b).block_until_ready(), (a, b), n_repeat=10, n_warmup=10)) plt.figure() plt.plot(x, [i.gpu_times.mean() for i in timings_cp], label="CuPy") plt.plot(x, [i.gpu_times.mean() for i in timings_jax_gpu], label="JAX GPU") plt.legend() Timings with those specific shapes: dtype = jnp.float64 with jax.default_device(jax.devices('gpu')[0]): a = jnp.array(rng.random((1000, 1000, 3, 25)), dtype=dtype) b = jnp.array(rng.random((1000, 1000, 25, 1)), dtype=dtype) func = jax.jit(lambda a, b: a@b) print(benchmark(lambda a, b: func(a, b).block_until_ready(), (a, b), n_repeat=1000, n_warmup=10).gpu_times.mean()) a = jnp.array(rng.random((1000, 1000, 3, 35)), dtype=dtype) b = jnp.array(rng.random((1000, 1000, 35, 1)), dtype=dtype) print(benchmark(lambda a, b: func(a, b).block_until_ready(), (a, b), n_repeat=1000, n_warmup=10).gpu_times.mean()) Gives f64: 0.01453789699935913 0.004859122595310211 f32: 0.005860503035545349 0.001209742688536644 CPU timings: timings_np = [] for i in range(5, 55, 5): a = rng.random((1000, 1000, 3, i)) b = rng.random((1000, 1000, i, 1)) timings_np.append(benchmark(lambda a, b: a@b, (a, b), n_repeat=10, n_warmup=10)) timings_jax_cpu = [] with jax.default_device(jax.devices('cpu')[0]): for i in range(5, 55, 5): a = jnp.array(rng.random((1000, 1000, 3, i))) b = jnp.array(rng.random((1000, 1000, i, 1))) func = jax.jit(lambda a, b: a@b) timings_jax_cpu.append(benchmark(lambda a, b: func(a, b).block_until_ready(), (a, b), n_repeat=10, n_warmup=10)) plt.figure() plt.plot(x, [i.cpu_times.mean() for i in timings_np], label="NumPy") plt.plot(x, [i.cpu_times.mean() for i in timings_jax_cpu], label="JAX CPU") plt.legend() | The difference seems to come from the compiler emitting a kLoop fusion for smaller sizes, and a kInput fusion for larger sizes. 
You can read about the effect of these in this source comment: https://github.com/openxla/xla/blob/e6b6e61b29cc439350a6ad2f9d39535cb06011e5/xla/hlo/ir/hlo_instruction.h#L639-L656 The compiler likely uses some heuristic to choose between the two, and it appears that this heuristic is suboptimal at the boundary for your particular problem. You can see this by outputting the compiled HLO for your operation: a = jnp.array(rng.random((1000, 1000, 3, 25)), dtype=dtype) b = jnp.array(rng.random((1000, 1000, 25, 1)), dtype=dtype) print(jax.jit(lambda a, b: a @ b).lower(a, b).compile().as_text()) HloModule jit__lambda_, is_scheduled=true, entry_computation_layout={(f64[1000,1000,3,25]{3,2,1,0}, f64[1000,1000,25,1]{3,2,1,0})->f64[1000,1000,3,1]{3,2,1,0}}, allow_spmd_sharding_propagation_to_parameters={true,true}, allow_spmd_sharding_propagation_to_output={true}, frontend_attributes={fingerprint_before_lhs="a02cbfe0fda9d44e2bd23462363b6cc0"} %scalar_add_computation (scalar_lhs: f64[], scalar_rhs: f64[]) -> f64[] { %scalar_rhs = f64[] parameter(1) %scalar_lhs = f64[] parameter(0) ROOT %add.2 = f64[] add(f64[] %scalar_lhs, f64[] %scalar_rhs) } %fused_reduce (param_0.7: f64[1000,1000,3,25], param_1.6: f64[1000,1000,25,1]) -> f64[1000,1000,3] { %param_0.7 = f64[1000,1000,3,25]{3,2,1,0} parameter(0) %param_1.6 = f64[1000,1000,25,1]{3,2,1,0} parameter(1) %bitcast.28.5 = f64[1000,1000,25]{2,1,0} bitcast(f64[1000,1000,25,1]{3,2,1,0} %param_1.6) %broadcast.2.5 = f64[1000,1000,3,25]{3,2,1,0} broadcast(f64[1000,1000,25]{2,1,0} %bitcast.28.5), dimensions={0,1,3}, metadata={op_name="jit(<lambda>)/jit(main)/dot_general" source_file="<ipython-input-4-68f2557428ff>" source_line=3} %multiply.2.3 = f64[1000,1000,3,25]{3,2,1,0} multiply(f64[1000,1000,3,25]{3,2,1,0} %param_0.7, f64[1000,1000,3,25]{3,2,1,0} %broadcast.2.5) %constant_4 = f64[] constant(0) ROOT %reduce.2 = f64[1000,1000,3]{2,1,0} reduce(f64[1000,1000,3,25]{3,2,1,0} %multiply.2.3, f64[] %constant_4), dimensions={3}, to_apply=%scalar_add_computation, metadata={op_name="jit(<lambda>)/jit(main)/dot_general" source_file="<ipython-input-4-68f2557428ff>" source_line=3} } ENTRY %main.4 (Arg_0.1.0: f64[1000,1000,3,25], Arg_1.2.0: f64[1000,1000,25,1]) -> f64[1000,1000,3,1] { %Arg_1.2.0 = f64[1000,1000,25,1]{3,2,1,0} parameter(1), metadata={op_name="b"} %Arg_0.1.0 = f64[1000,1000,3,25]{3,2,1,0} parameter(0), metadata={op_name="a"} %loop_reduce_fusion = f64[1000,1000,3]{2,1,0} fusion(f64[1000,1000,3,25]{3,2,1,0} %Arg_0.1.0, f64[1000,1000,25,1]{3,2,1,0} %Arg_1.2.0), kind=kLoop, calls=%fused_reduce, metadata={op_name="jit(<lambda>)/jit(main)/dot_general" source_file="<ipython-input-4-68f2557428ff>" source_line=3} ROOT %bitcast.1.0 = f64[1000,1000,3,1]{3,2,1,0} bitcast(f64[1000,1000,3]{2,1,0} %loop_reduce_fusion), metadata={op_name="jit(<lambda>)/jit(main)/dot_general" source_file="<ipython-input-4-68f2557428ff>" source_line=3} } a = jnp.array(rng.random((1000, 1000, 3, 35)), dtype=dtype) b = jnp.array(rng.random((1000, 1000, 35, 1)), dtype=dtype) print(jax.jit(lambda a, b: a @ b).lower(a, b).compile().as_text()) %scalar_add_computation (scalar_lhs: f64[], scalar_rhs: f64[]) -> f64[] { %scalar_rhs = f64[] parameter(1) %scalar_lhs = f64[] parameter(0) ROOT %add.2 = f64[] add(f64[] %scalar_lhs, f64[] %scalar_rhs) } %fused_reduce (param_0.5: f64[1000,1000,3,35], param_1.2: f64[1000,1000,35,1]) -> f64[1000,1000,3] { %param_0.5 = f64[1000,1000,3,35]{3,2,1,0} parameter(0) %param_1.2 = f64[1000,1000,35,1]{3,2,1,0} parameter(1) %bitcast.28.3 = f64[1000,1000,35]{2,1,0} 
bitcast(f64[1000,1000,35,1]{3,2,1,0} %param_1.2) %broadcast.2.3 = f64[1000,1000,3,35]{3,2,1,0} broadcast(f64[1000,1000,35]{2,1,0} %bitcast.28.3), dimensions={0,1,3}, metadata={op_name="jit(<lambda>)/jit(main)/dot_general" source_file="<ipython-input-3-eb3ac06eae7a>" source_line=4} %multiply.2.1 = f64[1000,1000,3,35]{3,2,1,0} multiply(f64[1000,1000,3,35]{3,2,1,0} %param_0.5, f64[1000,1000,3,35]{3,2,1,0} %broadcast.2.3) %constant_3 = f64[] constant(0) ROOT %reduce.2 = f64[1000,1000,3]{2,1,0} reduce(f64[1000,1000,3,35]{3,2,1,0} %multiply.2.1, f64[] %constant_3), dimensions={3}, to_apply=%scalar_add_computation, metadata={op_name="jit(<lambda>)/jit(main)/dot_general" source_file="<ipython-input-3-eb3ac06eae7a>" source_line=4} } ENTRY %main.4 (Arg_0.1.0: f64[1000,1000,3,35], Arg_1.2.0: f64[1000,1000,35,1]) -> f64[1000,1000,3,1] { %Arg_1.2.0 = f64[1000,1000,35,1]{3,2,1,0} parameter(1), metadata={op_name="b"} %Arg_0.1.0 = f64[1000,1000,3,35]{3,2,1,0} parameter(0), metadata={op_name="a"} %input_reduce_fusion = f64[1000,1000,3]{2,1,0} fusion(f64[1000,1000,3,35]{3,2,1,0} %Arg_0.1.0, f64[1000,1000,35,1]{3,2,1,0} %Arg_1.2.0), kind=kInput, calls=%fused_reduce, metadata={op_name="jit(<lambda>)/jit(main)/dot_general" source_file="<ipython-input-3-eb3ac06eae7a>" source_line=4} ROOT %bitcast.1.0 = f64[1000,1000,3,1]{3,2,1,0} bitcast(f64[1000,1000,3]{2,1,0} %input_reduce_fusion), metadata={op_name="jit(<lambda>)/jit(main)/dot_general" source_file="<ipython-input-3-eb3ac06eae7a>" source_line=4} } Here's a script to observe this compiler decision with respect to size: for size in range(10, 55, 5): a = jnp.array(rng.random((1000, 1000, 3, size)), dtype=dtype) b = jnp.array(rng.random((1000, 1000, size, 1)), dtype=dtype) hlo_text = jax.jit(lambda a, b: a @ b).lower(a, b).compile().as_text() print(f"{size=} {'kLoop' in hlo_text=}") size=10 'kLoop' in hlo_text=True size=15 'kLoop' in hlo_text=True size=20 'kLoop' in hlo_text=True size=25 'kLoop' in hlo_text=True size=30 'kLoop' in hlo_text=True size=35 'kLoop' in hlo_text=False size=40 'kLoop' in hlo_text=False size=45 'kLoop' in hlo_text=False size=50 'kLoop' in hlo_text=False I don't have any suggestion beyond perhaps reporting this at https://github.com/openxla/xla; it may be that the compiler heuristic for choosing to emit kLoop vs. kInput needs some additional logic. | 1 | 5 |
79,086,873 | 2024-10-14 | https://stackoverflow.com/questions/79086873/pandas-polars-write-list-of-jsons-to-database-fails-with-ndarray-is-not-json | I have multiple json columns which I concat to an array of json columns. The DataFarme looks like this ┌─────────────────────────────────┐ │ json_concat │ │ --- │ │ list[str] │ ╞═════════════════════════════════╡ │ ["{"integer_col":52,"string_co… │ │ ["{"integer_col":93,"string_co… │ │ ["{"integer_col":15,"string_co… │ │ ["{"integer_col":72,"string_co… │ │ ["{"integer_col":61,"string_co… │ │ ["{"integer_col":21,"string_co… │ │ ["{"integer_col":83,"string_co… │ │ ["{"integer_col":87,"string_co… │ │ ["{"integer_col":75,"string_co… │ │ ["{"integer_col":75,"string_co… │ └─────────────────────────────────┘ Here is the output of polars glimpse Rows: 10 Columns: 1 $ json_concat <list[str]> ['{"integer_col":52,"string_col":"v"}', '{"float_col":86.61761457749351,"bool_col":true}', '{"datetime_col":"2021-01-01 00:00:00","categorical_col":"Category3"}'], ['{"integer_col":93,"string_col":"l"}', '{"float_col":60.11150117432088,"bool_col":false}', '{"datetime_col":"2021-01-02 00:00:00","categorical_col":"Category2"}'], ['{"integer_col":15,"string_col":"y"}', '{"float_col":70.80725777960456,"bool_col":false}', '{"datetime_col":"2021-01-03 00:00:00","categorical_col":"Category1"}'], ['{"integer_col":72,"string_col":"q"}', '{"float_col":2.0584494295802447,"bool_col":true}', '{"datetime_col":"2021-01-04 00:00:00","categorical_col":"Category2"}'], ['{"integer_col":61,"string_col":"j"}', '{"float_col":96.99098521619943,"bool_col":true}', '{"datetime_col":"2021-01-05 00:00:00","categorical_col":"Category2"}'], ['{"integer_col":21,"string_col":"p"}', '{"float_col":83.24426408004217,"bool_col":true}', '{"datetime_col":"2021-01-06 00:00:00","categorical_col":"Category2"}'], ['{"integer_col":83,"string_col":"o"}', '{"float_col":21.233911067827616,"bool_col":true}', '{"datetime_col":"2021-01-07 00:00:00","categorical_col":"Category1"}'], ['{"integer_col":87,"string_col":"o"}', '{"float_col":18.182496720710063,"bool_col":true}', '{"datetime_col":"2021-01-08 00:00:00","categorical_col":"Category2"}'], ['{"integer_col":75,"string_col":"s"}', '{"float_col":18.34045098534338,"bool_col":true}', '{"datetime_col":"2021-01-09 00:00:00","categorical_col":"Category1"}'], ['{"integer_col":75,"string_col":"l"}', '{"float_col":30.42422429595377,"bool_col":true}', '{"datetime_col":"2021-01-10 00:00:00","categorical_col":"Category2"}'] I want to write the json column to a table called testing. I tried both pd.DataFrame.to_sql() as well as pl.DataFrame.write_database() both failing with a similary error Error The essential part is this sqlalchemy.exc.StatementError: (builtins.TypeError) Object of type ndarray is not JSON serializable File "/usr/lib/python3.10/json/encoder.py", line 179, in default raise TypeError(f'Object of type {o.__class__.__name__} ' sqlalchemy.exc.StatementError: (builtins.TypeError) Object of type ndarray is not JSON serializable [SQL: INSERT INTO some_schema.testing (json_concat) VALUES (%(json_concat)s)] [parameters: [{'json_concat': array(['{"integer_col":52,"string_col":"v"}', '{"float_col":86.61761457749351,"bool_col":true}', '{"datetime_col":"2021-01-01 00:00:00","categorical_col":"Category3"}'], dtype=object)}, # ... 
abbreviated dtype=object)}, {'json_concat': array(['{"integer_col":75,"string_col":"l"}', '{"float_col":30.42422429595377,"bool_col":true}', '{"datetime_col":"2021-01-10 00:00:00","categorical_col":"Category2"}'], dtype=object)}]] Code that produces the Error (exemplary for pandas) df_pandas.to_sql( "testing", con=engines.engine, schema=schema, index=False, if_exists="append", dtype=DTYPE, ) Question How do I need to prepare the concated json column for it to be json serializable? MRE (Create Example Data) from typing import Any import numpy as np import pandas as pd import polars as pl from myengines import engines from sqlalchemy import dialects, text schema = "some_schema" # Seed for reproducibility np.random.seed(42) n = 10 # Generate random data integer_col = np.random.randint(1, 100, n) float_col = np.random.random(n) * 100 string_col = np.random.choice(list("abcdefghijklmnopqrstuvwxyz"), n) bool_col = np.random.choice([True, False], n) datetime_col = pd.date_range(start="2021-01-01", periods=n, freq="D") categorical_col = np.random.choice(["Category1", "Category2", "Category3"], n) # Creating the DataFrame df = pl.DataFrame( { "integer_col": integer_col, "float_col": float_col, "string_col": string_col, "bool_col": bool_col, "datetime_col": datetime_col, "categorical_col": categorical_col, } ) df = df.select( pl.struct(pl.col("integer_col", "string_col")).struct.json_encode().alias("json1"), pl.struct(pl.col("float_col", "bool_col")).struct.json_encode().alias("json2"), pl.struct(pl.col("datetime_col", "categorical_col")) .struct.json_encode() .alias("json3"), ).select(pl.concat_list(pl.col(["json1", "json2", "json3"])).alias("json_concat")) DTYPE: dict[str, Any] = {"json_concat": dialects.postgresql.JSONB} | There is unfortunately no polars function to serialize a list to a JSON array. Here's how you can do it manually: df = df.select( pl.struct(pl.col("integer_col", "string_col")).struct.json_encode().alias("json1"), pl.struct(pl.col("float_col", "bool_col")).struct.json_encode().alias("json2"), pl.struct(pl.col("datetime_col", "categorical_col")).struct.json_encode().alias("json3"), ).select( pl.format("[{}]", pl.concat_list(pl.col(["json1", "json2", "json3"])).list.join(",")).alias("json_concat"), ) engine = create_engine("postgresql+psycopg2://postgres:postgres@localhost:5432/postgres", echo=True) df.write_database( "testing", connection=engine, if_table_exists="append", ) Also, in expressions, strings are read as column names, so pl.col isn't required. Here's the cleaned-up code: df = df.select( pl.struct("integer_col", "string_col").struct.json_encode().alias("json1"), pl.struct("float_col", "bool_col").struct.json_encode().alias("json2"), pl.struct("datetime_col", "categorical_col").struct.json_encode().alias("json3"), ).select( pl.format("[{}]", pl.concat_list("json1", "json2", "json3").list.join(",")).alias("json_concat"), ) Alternatively, the concatenation can be written in one format expression: df = df.select( pl.format( "[{}, {}, {}]", pl.struct("integer_col", "string_col").struct.json_encode(), pl.struct("float_col", "bool_col").struct.json_encode(), pl.struct("datetime_col", "categorical_col").struct.json_encode(), ).alias("json_concat"), ) | 2 | 1 |
79,087,381 | 2024-10-14 | https://stackoverflow.com/questions/79087381/define-a-regex-pattern-to-remove-all-special-characters-but-with-an-exception-in | This is the problem: I'm trying to clean a text from all the special characters but want to keep the compound words like 'self-restraint' or 'e-mail' united as they are, with the middle dash. The problem is that this hyphen is recognized as a special character. I have python 3.10 I used several regex patterns to do it, but none worked fine. import re text = "This is -a -! -sample- ? e-mail text?classification? example- ! ?\} {]}[¿ with !self-restraint( - like @, #, and $." # cleaned_text = re.sub(r'(?<!\b[a-zA-Z]+)-|[^\w\s-]', ' ', text) cleaned_text = re.sub(r'\W+(?![a-zA-Z]+-[a-zA-Z]+)', ' ', text) # cleaned_text = re.sub(r'\W+[^\w]', ' ', text) # cleaned_text = re.sub(r'(?!\b[a-zA-Z]+-[a-zA-Z]+\b)', ' ', text) # cleaned_text = re.sub(r'(?!\b[a-zA-Z]+-[a-zA-Z]+\b)\W+', ' ', text) # cleaned_text = re.sub(r'\b(?![a-zA-Z]+-[a-zA-Z]+\b)\w*[^a-zA-Z\s-]+\w*\b', ' ', text) # cleaned_text = re.sub(r'\b[a-zA-Z]+-[a-zA-Z]+\b', ' ', text) # cleaned_text = re.sub(r'(?<!\w)-(?!\w)|[^\w\s-]|[^\w\s-]$', ' ', text) print(cleaned_text) Actual output: This is a sample e mail text classification example with !self restraint like and Expected output: This is a sample e-mail text classification example with self-restraint like and | You could try something like this: import re text = "This is -a -! -sample- ? e-mail text?classification? example- ! ?\} {]}[¿ with !self-restraint( - like @, #, and $." cleaned_text = re.sub(r'\s+', ' ', re.sub(r'(?<!\S)[-]|[-](?!\S)|[^\w\s-]', ' ', text)).strip() print(cleaned_text) Output This is a sample e-mail text classification example with self-restraint like and Without more examples I can't definitively say whether or not it will succeed on other strings, although this should give you an idea of one possible way to approach this issue. | 1 | 1 |
79,087,436 | 2024-10-14 | https://stackoverflow.com/questions/79087436/how-to-convert-python-list-into-ast-list | How can I convert a Python object - a list - into an ast.List object, so I can append it to the main AST tree as a node? huge_list = [ 1, "ABC", 4.5 ] object = ast.Assign([ast.Name(huge_list_name, ast.Store())], (ast.List(huge_list, ast.Load()))) object.lineno = None result = ast.unparse(object) print(result) tree.body.append(object) but it fails while parsing each field from the sample list. | Assuming your list is made up of simple objects like strings and numbers, you can parse its representation back into an ast.Module, then dig the ast.List out of the module body: >>> huge_list = [1, "ABC", 4.5] >>> mod = ast.parse(repr(huge_list)) >>> [expr] = mod.body >>> expr.value <ast.List at 0x7fffed231190> | 3 | 2 |
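A variation on the answer above that skips the repr()/ast.parse() round-trip and builds the nodes directly with ast.Constant — this sketch only covers flat lists of plain constants; nested lists would need a recursive helper.
import ast

huge_list = [1, "ABC", 4.5]

# Build the ast.List directly from constant values
list_node = ast.List(elts=[ast.Constant(value=v) for v in huge_list], ctx=ast.Load())
assign = ast.Assign(targets=[ast.Name(id="huge_list_name", ctx=ast.Store())], value=list_node)
module = ast.Module(body=[assign], type_ignores=[])
ast.fix_missing_locations(module)  # fills in lineno/col_offset so unparse/compile are happy
print(ast.unparse(module))  # huge_list_name = [1, 'ABC', 4.5]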
79,086,014 | 2024-10-14 | https://stackoverflow.com/questions/79086014/concat-list-with-null-values-or-how-to-fill-null-in-pl-liststr | I want to concat three list columns in a pl.LazyFrame. However the Lists often contain NULL values. Resulting in NULL for pl.concat_list MRE import polars as pl # Create the data with some NULLs data = { "a": [["apple", "banana"], None, ["cherry"]], "b": [None, ["dog", "elephant"], ["fish"]], "c": [["grape"], ["honeydew"], None], } # Create a LazyFrame lazy_df = pl.LazyFrame(data) list_cols = ["a", "b", "c"] print(lazy_df.with_columns(pl.concat_list(pl.col(list_cols)).alias("merge")).collect()) ┌─────────────────────┬─────────────────────┬──────────────┬───────────┐ │ a ┆ b ┆ c ┆ merge │ │ --- ┆ --- ┆ --- ┆ --- │ │ list[str] ┆ list[str] ┆ list[str] ┆ list[str] │ ╞═════════════════════╪═════════════════════╪══════════════╪═══════════╡ │ ["apple", "banana"] ┆ null ┆ ["grape"] ┆ null │ │ null ┆ ["dog", "elephant"] ┆ ["honeydew"] ┆ null │ │ ["cherry"] ┆ ["fish"] ┆ null ┆ null │ └─────────────────────┴─────────────────────┴──────────────┴───────────┘ Question How can I concat the lists even when some values are NULL? Tried solutions I've tried to fill the null values via expr.fill_null("") or expr.fill_null(pl.List("")) or expr.fill_null(pl.List([])) but could not get it to run through. How do I fill an empty list instead of NULL in cols of type pl.List[str]. And is there a better way to concat the three list columns? | You can use pl.Expr.fill_null() as follows: lazy_df.with_columns( pl.concat_list( pl.col(list_cols).fill_null([]) ).alias("merge") ) | 3 | 1 |
79,081,361 | 2024-10-12 | https://stackoverflow.com/questions/79081361/echo-y-from-within-python | I'm trying to import and use a module originally made as a standalone script. Preferably without altering it, to keep tool commonality with the authors. One function I'm using prompts for a y/n response, to which I want to always answer "y": input("\nContinue? (y/n): ").lower() What's the best way to automate this answer? | Try replacing sys.stdin (the place where input reads its data from) with something that always outputs y. import io import sys one_hundred_yesses = io.StringIO('y\n'*100) original_stdin = sys.stdin sys.stdin = one_hundred_yesses # Run the external module print(input("\nContinue? (y/n): ")) # Outputs "y" # When you are done, reset stdin sys.stdin = original_stdin Try this on ATO | 1 | 2 |
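A related sketch, if you'd rather not touch sys.stdin at all: patch builtins.input for the duration of the call instead. This is an alternative to the answer above, not something it relies on.
from unittest import mock

# Every call to input() inside this block returns "y", whatever the prompt says
with mock.patch("builtins.input", return_value="y"):
    answer = input("\nContinue? (y/n): ").lower()

print(answer)  # y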
79,070,227 | 2024-10-9 | https://stackoverflow.com/questions/79070227/calendarviewrequestbuilder-msgraph-python-sdk-query-parameter-for-start-da | I've been playing around with the start_date_time and the end_date_time like this: # Create GraphServiceClient graph_client = GraphServiceClient(credential, scopes) # Define date range for fetching events start_date = datetime.now(pytz.utc) end_date = start_date + timedelta(days=365) # Format dates correctly for the Graph API in ISO 8601 format start_date_str = start_date.strftime("%Y-%m-%dT%H:%M:%S%z") end_date_str = end_date.strftime("%Y-%m-%dT%H:%M:%S%z") # Insert colon in timezone offset start_date_str = start_date_str[:-2] + ":" + start_date_str[-2:] end_date_str = end_date_str[:-2] + ":" + end_date_str[-2:] print(f"Fetching events from {start_date_str} to {end_date_str}") # Set up the request configuration with query parameters as a dictionary query_params = CalendarViewRequestBuilder.CalendarViewRequestBuilderGetQueryParameters( start_date_time=start_date_str, end_date_time=end_date_str, select=['subject', 'start', 'end'], top=100, orderby=['start/dateTime ASC'], #filter="contains(subject, 'from HR Works')" ) headers = HeadersCollection() headers.add("Prefer", 'outlook.timezone="Europe/Berlin"') request_configuration = RequestConfiguration( query_parameters=query_params, headers=headers ) all_events = [] try: # Initial request events_page = await graph_client.users.by_user_id(user_id).calendars.by_calendar_id(calendar_id).events.get(request_configuration=request_configuration) ... It says then basically Fetching events from 2024-10-09T12:27:39+00:00 to 2025-10-09T12:27:39+00:00 but checking on them I see lot's of events from 2023? for instance Absence from 2023-10-13T10:00:00.0000000 to 2023-10-13T11:00:00.0000000: Where is my issue? I tried tweaking the timestamps, but nothing worked. Also been diving into the official documentation which did not help: https://learn.microsoft.com/en-us/graph/sdks/create-requests?tabs=python#use-select-to-control-the-properties-returned EDIT: Using this query https://graph.microsoft.com/v1.0/me/calendarView?startDateTime=2023-06-14T00:00:00Z&endDateTime=2024-06-15T00:00:00Z in the GraphExplorer works, - meaning endDateTime and startDateTime parameters | Seems like you are calling wrong method. Checking your code events_page = await graph_client.users.by_user_id(user_id).calendars.by_calendar_id(calendar_id).events.get(request_configuration=request_configuration) you are calling the endpoint v1.0/users/{user_id}/calendars/{calendar_id}/events For calendarView you need to call result = await graph_client.users.by_user_id(user_id).calendars.by_calendar_id(calendar_id).calendar_view.get(request_configuration = request_configuration) | 2 | 2 |
79,085,604 | 2024-10-14 | https://stackoverflow.com/questions/79085604/polars-python-is-in-for-lazyframe-typeerror | I am getting the following TypeError Traceback (most recent call last): File "/my/path/my_project/src/my_project/exploration/mre_lazyframe_error.py", line 39, in <module> current.with_columns(pl.col("foo_bar").is_in(reference["foo_bar"])) File "/my/path/.cache/pypoetry/virtualenvs/my_project-p95GORRi-py3.10/lib/python3.10/site-packages/polars/lazyframe/frame.py", line 619, in __getitem__ raise TypeError(msg) TypeError: 'LazyFrame' object is not subscriptable (aside from slicing) MRE import numpy as np import polars as pl num_rows = 10000 ids = np.arange(num_rows) foo_bar = np.random.randint(1, 101, num_rows) current = pl.LazyFrame( { "id": ids, "foo_bar": foo_bar, } ) reference = pl.LazyFrame( { "id": ids, "foo_bar": np.random.randint( 1, 101, num_rows ), # different random numbers for 'reference' } ) current.with_columns( pl.col("foo_bar").is_in(reference["foo_bar"]).name.suffix("_avail"), ) current.with_columns(pl.col("foo_bar").is_in(reference["foo_bar"])) As far as I understand it this should be possible (is_in docs). When I do it in eager it runs through without any error. However I would preferably compute everything in lazy. Is there anyway to make this work? | Subscription of DataFrame return series, but subscription (or get_column() method) is not implemented for LazyFrame (what would it even return?). There is a closed issue where it was decided that it's not going to be implemented. If amount of distinct values in reference frame is not large (as in your case), you could collect the values and convert them to series: ( current .with_columns( pl.col("foo_bar").is_in( reference.select("foo_bar").unique().collect().to_series() ).name.suffix("_avail") ) ) In general such operations are usually much faster with joins, though. To get all the rows where foo_bar exists in reference you could use pl.LazyFrame.join with how="semi": semi Returns rows from the left table that have a match in the right table. current.join(reference, on="foo_bar", how="semi") | 2 | 1 |
79,085,456 | 2024-10-14 | https://stackoverflow.com/questions/79085456/vs-code-fails-to-run-script-which-has-quotes-in-its-filename | I am trying to run a simple "Hello World" script which is named print("hello world!").py by pressing the "play" button in the top-right of the VS Code window. I'm using Python 3.13.0, macOS 12.6.1 and VS Code 1.94. The VS Code terminal prints: dafyddpowell@Dafydds-MBP ~ % cd "/Volumes/SAMSUNG/Python projects" dafyddpowell@Dafydds-MBP Python projects % /usr/bin/python3 "/Volumes/SAMSUNG/Python projects/print("Hello world!").py" zsh: parse error near `)' dafyddpowell@Dafydds-MBP Python projects % What is wrong? | It seems like VSCode isn't escaping the file name that contains special characters. The simplest way around this would be to use a filename that doesn't have any special characters like hello_world.py. | 1 | 1 |
79,085,124 | 2024-10-14 | https://stackoverflow.com/questions/79085124/extract-a-class-from-a-static-method | Given a function which is a staticmethod of a class, is there a way to extract the parent class object from this? class A: @staticmethod def b(): ... ... f = A.b ... assert get_parent_object_from(f) is A I can see this buried in the __qualname__, but can't figure out how to extract this. The function get_parent_object_from should have no knowledge of the parent class A. | One approach would be to use a custom static method descriptor that sets the owner class as an attribute of the wrapped function: class StaticMethod(staticmethod): def __set_name__(self, owner, name): self.__wrapped__.owner = owner class A: @StaticMethod def b(): ... f = A.b assert f.owner is A Demo here | 2 | 2 |
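An alternative sketch for the __qualname__ route the question mentions, which needs no custom descriptor: walk the dotted path inside the function's module. It assumes the class is defined at module level — a "<locals>" component in __qualname__ (classes defined inside functions) would break it.
import sys

def get_parent_object_from(func):
    # Resolve "A.b" -> module -> A by walking every qualname component except the last
    obj = sys.modules[func.__module__]
    for name in func.__qualname__.split(".")[:-1]:
        obj = getattr(obj, name)
    return obj

class A:
    @staticmethod
    def b():
        ...

f = A.b
assert get_parent_object_from(f) is A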
79,078,700 | 2024-10-11 | https://stackoverflow.com/questions/79078700/error-when-deploying-django-drf-backend-on-google-cloud-no-matching-distributio | I am trying to deploy my Django backend (using Django Rest Framework) on a Google Cloud VM instance. However, when I run pip install -r requirements.txt, I encounter the following error: Collecting asgiref==3.8.1 Using cached asgiref-3.8.1-py3-none-any.whl (23 kB) Collecting attrs==23.2.0 Using cached attrs-23.2.0-py3-none-any.whl (60 kB) ERROR: Could not find a version that satisfies the requirement Django==5.0.4 (from -r requirements.txt (line 3)) (from versions: 1.1.3, 1.1.4, 1.2, 1.2.1, 1.2.2, 1.2.3, 1.2.4, 1.2.5, 1.2.6, 1.2.7, 1.3, 1.3.1, 1.3.2, 1.3.3, 1.3.4, 1.3.5, 1.3.6, 1.3.7, 1.4, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5, 1.4.6, 1.4.7, 1.4.8, 1.4.9, 1.4.10, 1.4.11, 1.4.12, 1.4.13, 1.4.14, 1.4.15, 1.4.16, 1.4.17, 1.4.18, 1.4.19, 1.4.20, 1.4.21, 1.4.22, 1.5, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.5.6, 1.5.7, 1.5.8, 1.5.9, 1.5.10, 1.5.11, 1.5.12, 1.6, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.6.6, 1.6.7, 1.6.8, 1.6.9, 1.6.10, 1.6.11, 1.7, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.7.8, 1.7.9, 1.7.10, 1.7.11, 1.8a1, 1.8b1, 1.8b2, 1.8rc1, 1.8, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.8.5, 1.8.6, 1.8.7, 1.8.8, 1.8.9, 1.8.10, 1.8.11, 1.8.12, 1.8.13, 1.8.14, 1.8.15, 1.8.16, 1.8.17, 1.8.18, 1.8.19, 1.9a1, 1.9b1, 1.9rc1, 1.9rc2, 1.9, 1.9.1, 1.9.2, 1.9.3, 1.9.4, 1.9.5, 1.9.6, 1.9.7, 1.9.8, 1.9.9, 1.9.10, 1.9.11, 1.9.12, 1.9.13, 1.10a1, 1.10b1, 1.10rc1, 1.10, 1.10.1, 1.10.2, 1.10.3, 1.10.4, 1.10.5, 1.10.6, 1.10.7, 1.10.8, 1.11a1, 1.11b1, 1.11rc1, 1.11, 1.11.1, 1.11.2, 1.11.3, 1.11.4, 1.11.5, 1.11.6, 1.11.7, 1.11.8, 1.11.9, 1.11.10, 1.11.11, 1.11.12, 1.11.13, 1.11.14, 1.11.15, 1.11.16, 1.11.17, 1.11.18, 1.11.20, 1.11.21, 1.11.22, 1.11.23, 1.11.24, 1.11.25, 1.11.26, 1.11.27, 1.11.28, 1.11.29, 2.0a1, 2.0b1, 2.0rc1, 2.0, 2.0.1, 2.0.2, 2.0.3, 2.0.4, 2.0.5, 2.0.6, 2.0.7, 2.0.8, 2.0.9, 2.0.10, 2.0.12, 2.0.13, 2.1a1, 2.1b1, 2.1rc1, 2.1, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.1.5, 2.1.7, 2.1.8, 2.1.9, 2.1.10, 2.1.11, 2.1.12, 2.1.13, 2.1.14, 2.1.15, 2.2a1, 2.2b1, 2.2rc1, 2.2, 2.2.1, 2.2.2, 2.2.3, 2.2.4, 2.2.5, 2.2.6, 2.2.7, 2.2.8, 2.2.9, 2.2.10, 2.2.11, 2.2.12, 2.2.13, 2.2.14, 2.2.15, 2.2.16, 2.2.17, 2.2.18, 2.2.19, 2.2.20, 2.2.21, 2.2.22, 2.2.23, 2.2.24, 2.2.25, 2.2.26, 2.2.27, 2.2.28, 3.0a1, 3.0b1, 3.0rc1, 3.0, 3.0.1, 3.0.2, 3.0.3, 3.0.4, 3.0.5, 3.0.6, 3.0.7, 3.0.8, 3.0.9, 3.0.10, 3.0.11, 3.0.12, 3.0.13, 3.0.14, 3.1a1, 3.1b1, 3.1rc1, 3.1, 3.1.1, 3.1.2, 3.1.3, 3.1.4, 3.1.5, 3.1.6, 3.1.7, 3.1.8, 3.1.9, 3.1.10, 3.1.11, 3.1.12, 3.1.13, 3.1.14, 3.2a1, 3.2b1, 3.2rc1, 3.2, 3.2.1, 3.2.2, 3.2.3, 3.2.4, 3.2.5, 3.2.6, 3.2.7, 3.2.8, 3.2.9, 3.2.10, 3.2.11, 3.2.12, 3.2.13, 3.2.14, 3.2.15, 3.2.16, 3.2.17, 3.2.18, 3.2.19, 3.2.20, 3.2.21, 3.2.22, 3.2.23, 3.2.24, 3.2.25, 4.0a1, 4.0b1, 4.0rc1, 4.0, 4.0.1, 4.0.2, 4.0.3, 4.0.4, 4.0.5, 4.0.6, 4.0.7, 4.0.8, 4.0.9, 4.0.10, 4.1a1, 4.1b1, 4.1rc1, 4.1, 4.1.1, 4.1.2, 4.1.3, 4.1.4, 4.1.5, 4.1.6, 4.1.7, 4.1.8, 4.1.9, 4.1.10, 4.1.11, 4.1.12, 4.1.13, 4.2a1, 4.2b1, 4.2rc1, 4.2, 4.2.1, 4.2.2, 4.2.3, 4.2.4, 4.2.5, 4.2.6, 4.2.7, 4.2.8, 4.2.9, 4.2.10, 4.2.11, 4.2.12, 4.2.13, 4.2.14, 4.2.15, 4.2.16) ERROR: No matching distribution found for Django==5.0.4 (from -r requirements.txt (line 3)) Here are the steps I performed cd Project_Backend python3 -m venv env sudo apt-get install python3-venv python3 -m venv env source ./env/bin/activate pip install -r requirements.txt That's my requirements.txt asgiref==3.8.1 attrs==23.2.0 Django==5.0.4 django-cors-headers==4.4.0 
djangorestframework==3.15.1 drf-spectacular==0.27.2 inflection==0.5.1 jsonschema==4.22.0 jsonschema-specifications==2023.12.1 PyYAML==6.0.1 referencing==0.35.1 rpds-py==0.18.1 sqlparse==0.4.4 typing_extensions==4.11.0 tzdata==2024.1 uritemplate==4.1.1 I am running Ubuntu on my Google Cloud VM. How should I resolve this issue and successfully deploy my Django application? | Provide the full requirements.txt and Python version you plan to use to run the code. The errors indicate that pip cannot install the version requested in the requirements.txt file, because it conflicts with other requirements or is not supported by the Python version you have installed. If you are running something you didn't author, then your best bet is to install the Python version the code was written for and use the dependency versions frozen in the text file. Alternatively, you can go error-by-error and upgrade each dependency to the next supported version, or go the opposite route and upgrade to the latest supported version for each. Dependency resolution can be tricky if you don't know which features of which dependency are being used, as things get deprecated, so using the correct versions helps. For Django, you can try removing the "==5.0.4" part from the requirements and see if pip can find a compatible version for your project. You may need to do this one dependency at a time until you get it running. Look into Pipenv and Poetry, which help simplify dependency management. | 2 | 1 |
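One likely root cause worth checking before editing the pins: pip filters releases by the interpreter version, and Django 5.0 requires Python 3.10 or newer, which is exactly why the traceback only offers versions up to 4.2.16 on an older system Python (common on older Ubuntu images). A quick check to run inside the VM's virtualenv:
import sys

# Django 5.0.x declares Requires-Python >= 3.10, so pip silently skips it on older interpreters
print(sys.version_info)
if sys.version_info < (3, 10):
    print("This interpreter cannot install Django 5.0.x; "
          "install Python 3.10+ or pin Django to the 4.2 LTS line instead.")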
79,084,064 | 2024-10-13 | https://stackoverflow.com/questions/79084064/i-cant-get-the-input-and-output-to-work-with-my-function-in-python-tkinter | I can't get my tkinter project to work. I haven't been successful in getting the user's input from the entry widget, passing it through my encryption function, and printing the result in the output label. from tkinter import * from cryptography.fernet import Fernet raiz= Tk() #def output():# # texto.set(print(Encriptador))# raiz.title("Codec") texto= StringVar() raiz.resizable(False, False) #raiz.iconbitmap("logo.ico")# #raiz.geometry("650x360")# raiz.config(bg="dark green") frame1= Frame() frame1.pack() frame1.config(bg="dark green") frame1.config(width=650,height=400) prompt= Label(text="Coloque el texto a encriptar", fg="black", font="Bold", bg="dark green") prompt.place(x=230, y=80) def Encriptador(): mensaje= cuadroinput.get key= Fernet.generate_key() cifrar= Fernet(key) mensaje_cifrado = cifrar.encrypt(mensaje.encode()) cuadroutput.config(Text= mensaje_cifrado) cuadroinput=Entry(frame1, width=40) cuadroinput.place(x=200 , y=120) cuadroutput=Entry(frame1, width=70, textvariable=texto) cuadroutput.place(x=100, y=300) boton=Button(raiz,text="Encriptar", command=Encriptador) boton.place(x=305, y=170) raiz.mainloop() | The problem with your code is: def Encriptador(): mensaje= cuadroinput.get # <------ here key= Fernet.generate_key() cifrar= Fernet(key) mensaje_cifrado = cifrar.encrypt(mensaje.encode()) cuadroutput.config(Text= mensaje_cifrado) The .get that you are calling is a method, and calling a method in Python requires parentheses: (). The fix is simply changing mensaje= cuadroinput.get to mensaje= cuadroinput.get() Another thing you can do is give cuadroinput a textvariable, and create a StringVar(). Here is an example, using your code: invar = StringVar() def Encriptador(): mensaje= invar.get() key= Fernet.generate_key() cifrar= Fernet(key) mensaje_cifrado = cifrar.encrypt(mensaje.encode()) cuadroutput.config(Text= mensaje_cifrado) | 1 | 2 |
79,080,076 | 2024-10-12 | https://stackoverflow.com/questions/79080076/how-to-set-a-qwidget-hidden-when-mouse-hovering-and-reappear-when-mouse-leaving | I am trying to create a small widget to display information. This widget is intended to be always on top, and set hidden when the mouse hovers over it so you can click or see whatever is underneath it without disruption, and then reappear once your mouse leaves this widget. The problem I am currently facing is that once the widget is hidden, there is no pixel drawed thus no mouse actitvity is tracked anymore, which immediately triggers the leaveEvent, thus the widget keeps blinking. Here is an example: import sys from PyQt5.QtWidgets import QApplication, QLabel, QVBoxLayout, QWidget from PyQt5.QtCore import Qt class TransparentWindow(QWidget): def __init__(self): super().__init__() # Set window attributes self.setWindowFlags(self.windowFlags() | Qt.FramelessWindowHint | Qt.WindowStaysOnTopHint) # | Qt.WindowTransparentForInput) self.setAttribute(Qt.WA_TranslucentBackground) self.setMouseTracking(True) # Set example text self.layout = QVBoxLayout() self.label = QLabel(self) self.label.setText("Hello, World!") self.label.setStyleSheet("background-color: rgb(255, 255, 255); font-size: 50px;") self.label.setAlignment(Qt.AlignCenter) self.layout.addWidget(self.label) self.setLayout(self.layout) def enterEvent(self, event): print("Mouse entered the window") self.label.setHidden(True) def leaveEvent(self, event): print("Mouse left the window") self.label.setHidden(False) if __name__ == "__main__": app = QApplication(sys.argv) window = TransparentWindow() window.show() sys.exit(app.exec_()) Now I have tried to add an almost transparent Qwidget item under it so I can pick up mouse events with these nearly transparent pixels: def __init__(self): super().__init__() # Set window attributes self.setWindowFlags(self.windowFlags() | Qt.FramelessWindowHint | Qt.WindowStaysOnTopHint) self.setAttribute(Qt.WA_TranslucentBackground) self.setMouseTracking(True) # Set example text self.layout = QVBoxLayout() self.label = QLabel(self) self.label.setText("Hello, World!") self.label.setStyleSheet("background-color: rgb(255, 255, 255); font-size: 50px;") self.label.setAlignment(Qt.AlignCenter) self.layout.addWidget(self.label) self.setLayout(self.layout) # Set an almost transparent widget self.box = QWidget(self) self.box.setStyleSheet("background-color: rgba(255, 255, 255, 0.01)") self.layout.addWidget(self.box) which makes the disappear-then-reappear part work. But I can no longer click whatever is underneath it. I have tried to add Qt.WindowTransparentForInput, but it made the window transparent to enter/leave event as well. Is there any solution to make this window only transparent to click event but not enter/leave event? Or do I have to use other global mouse tracking libraries to make this work? Platform: Windows 11 23H2 Thanks for all your help! 
This is how I've decided to implement it for the moment: import sys from PyQt5.QtWidgets import QApplication, QLabel, QVBoxLayout, QWidget from PyQt5.QtGui import QCursor from PyQt5.QtCore import Qt, QTimer class TransparentWindow(QWidget): def __init__(self): super().__init__() # Set window attributes self.setWindowFlags(self.windowFlags() | Qt.FramelessWindowHint | Qt.WindowStaysOnTopHint | Qt.Tool) # | Qt.WindowTransparentForInput) self.setAttribute(Qt.WA_TranslucentBackground) self.setMouseTracking(True) # Set example text self.layout = QVBoxLayout() self.label = QLabel(self) self.label.setText("Hello, World!") self.label.setStyleSheet("background-color: rgb(255, 255, 255); font-size: 50px;") self.label.setAlignment(Qt.AlignCenter) self.layout.addWidget(self.label) self.setLayout(self.layout) self.hidetimer = QTimer(self) self.hidetimer.setSingleShot(True) self.hidetimer.timeout.connect(self.hidecheck) self.hidecheckperiod = 300 def hidecheck(self): if self.geometry().contains(QCursor.pos()): self.hidetimer.start(self.hidecheckperiod) return print("Showing.....") self.setHidden(False) def enterEvent(self, event): self.setHidden(True) self.hidetimer.start(self.hidecheckperiod) print("Hiding.....") if __name__ == "__main__": app = QApplication(sys.argv) window = TransparentWindow() window.show() sys.exit(app.exec_()) if __name__ == "__main__": app = QApplication(sys.argv) window = TransparentWindow() window.show() sys.exit(app.exec_()) | You could poll the cursor position to see if it's contained by the current window geometry. This has the side benefit of allowing for a short delay, so that the window isn't continually shown/hidden when the cursor moves quickly over it. The delay could be configurable by the user. I think this should work on all platforms, but I've only tested it on Linux. Here's a basic demo: import sys from PyQt5.QtWidgets import QApplication, QLabel, QVBoxLayout, QWidget from PyQt5.QtGui import QCursor from PyQt5.QtCore import Qt, QTimer class TransparentWindow(QWidget): def __init__(self): super().__init__() # Set window attributes self.setWindowFlags(self.windowFlags() | Qt.FramelessWindowHint | Qt.WindowStaysOnTopHint) # | Qt.WindowTransparentForInput) self.setAttribute(Qt.WA_TranslucentBackground) # Set example text self.layout = QVBoxLayout() self.label = QLabel(self) self.label.setText("Hello, World!") self.label.setStyleSheet("background-color: rgb(255, 255, 255); font-size: 50px;") self.label.setAlignment(Qt.AlignCenter) self.layout.addWidget(self.label) self.setLayout(self.layout) self._timer = QTimer(self) self._timer.setInterval(500) self._timer.timeout.connect(self.pollCursor) self._timer.start() def pollCursor(self): over = self.geometry().contains(QCursor.pos()) if over != self.isHidden(): self.setHidden(over) self._timer.start() print("Mouse is over the window:", over) if __name__ == "__main__": app = QApplication(sys.argv) window = TransparentWindow() window.show() sys.exit(app.exec_()) | 4 | 1 |
79,081,940 | 2024-10-12 | https://stackoverflow.com/questions/79081940/python-flask-if-block-jinja2-exceptions-templatesyntaxerror-unexpected | <div class="mb-3"> {{ form.ism.label(class="form-label") }} {{% if form.ism.errors %}} {{ form.ism(class="form-control form-control-lg is-invalid") }} <div class="invalid-feedback"> {% for error in form.ism.errors %} <span>{{ error }}</span> {% endfor %} </div> {% else %} {{ form.ism(class="form-control form-control-lg") }} {{% endif %}} </div> jinja2.exceptions.TemplateSyntaxError: unexpected '%' How can I solve this error? I have looked at the code again and again, but I could not find the real problem. The debugger points to line 4, which is this code: {{% if form.ism.errors %}} | The syntax for variables is {{ myVar }}. The syntax for expressions such as if and for is {% if ... %}. Thus, instead of writing {{% %}}, you should go with {% %}, which is the appropriate syntax. | 1 | 2 |
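For reference, the corrected template block looks like this (same field names as in the question; only the stray extra braces around the if/endif tags are removed):

```jinja
<div class="mb-3">
  {{ form.ism.label(class="form-label") }}
  {% if form.ism.errors %}
    {{ form.ism(class="form-control form-control-lg is-invalid") }}
    <div class="invalid-feedback">
      {% for error in form.ism.errors %}
        <span>{{ error }}</span>
      {% endfor %}
    </div>
  {% else %}
    {{ form.ism(class="form-control form-control-lg") }}
  {% endif %}
</div>
```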
79,081,575 | 2024-10-12 | https://stackoverflow.com/questions/79081575/trying-to-solve-problem-19-on-euler-project | The question is: You are given the following information, but you may prefer to do some research for yourself. 1 Jan 1900 was a Monday. Thirty days has September, April, June and November. All the rest have thirty-one, Saving February alone, Which has twenty-eight, rain or shine. And on leap years, twenty-nine. A leap year occurs on any year evenly divisible by 4, but not on a century unless it is divisible by 400. How many Sundays fell on the first of the month during the twentieth century (1 Jan 1901 to 31 Dec 2000)? I wrote this code: if __name__ == '__main__': count_sundays = 0 day_name = 3 # this is tuesday day = 1 month = 1 year = 1901 months = {1: 31, 2: 28, 3: 31, 4: 30, 5: 31, 6: 30, 7: 31, 8: 31, 9: 30, 10: 31, 11: 30, 0: 31} while not ((year == 2000) and (month == 0) and (day == 0)): print(year, month, day) if day_name == day == month == 1: count_sundays += 1 day += 1 day_name += 1 day_name = day_name % 7 if (year % 4 == 0 and year % 100 != 0) or year % 400 == 0: months[2] = 29 else: months[2] = 28 day = day % months[month] if day == 1: month = month + 1 month = month % 12 if month == 1 and day == 1: year += 1 print(count_sundays) I am getting 14 which is the wrong answer, if anyone can point out what's wrong with my code that would be great. | Your code has issue on following condition, where you are checking if month == 1, which means it'll only count Sundays if it's 1st January of the year. if day_name == day == month == 1: instead you should use: if day_name == day == 1: just a suggestion if you are open to code change, as you are only concern about 1st day of every month, instead of going through every day of the years, traverse through 1st day of every month. if __name__ == "__main__": count_sundays = 0 week_day = 3 # this is tuesday year = 1901 months = { 1: 31, 2: 28, 3: 31, 4: 30, 5: 31, 6: 30, 7: 31, 8: 31, 9: 30, 10: 31, 11: 30, 12: 31 } while year < 2001: if (year % 4 == 0 and year % 100 != 0) or year % 400 == 0: months[2] = 29 else: months[2] = 28 for month in range(1, 13): # first day of the next month week_day = (week_day + months[month]) % 7; if week_day == 1: count_sundays += 1 year += 1 print(count_sundays) | 1 | 2 |
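As a cross-check on either version above, the standard library can count the same thing directly. This verification sketch is not part of the original answer; it only relies on datetime's weekday numbering (Monday is 0, so Sunday is 6):

```python
import datetime as dt

count = sum(
    1
    for year in range(1901, 2001)
    for month in range(1, 13)
    if dt.date(year, month, 1).weekday() == 6  # Sunday
)
print(count)  # 171
```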
79,080,117 | 2024-10-12 | https://stackoverflow.com/questions/79080117/openpyxl-is-not-able-to-understand-my-2-sub-header | I have an Excel sheet with data: when I try to create a chart like this [clustered chart] openpyxl is not able to read the data properly from openpyxl.chart import BarChart, Reference from openpyxl import load_workbook, workbook wb = load_workbook("Book1.xlsx") ws = wb.active data = Reference(ws, min_col=2, min_row=1, max_row=12,max_col=7) categories = Reference(ws, min_col=1, min_row=3, max_row=12) barChart = BarChart() barChart.type = "col" barChart.grouping = "clustered" barChart.title = "XYZ" barChart.y_axis.title = "service" barChart.x_axis.title = "month" barChart.add_data(data,titles_from_data=True) barChart.set_categories(categories,labels="dasd") barChart.style = 5 barChart.width = 20 barChart.height = 10 ws.add_chart(barChart,"k8") wb.save("test.xlsx") I tried to unmerge the headers, but again, if I do that, the graph I want will not be created. I tried all the combinations but it's just openpyxl don't capture the data properly | You can add each row as a separate Series. For each row m2 to m10 add as a Series after the initial data row m1 is added. The categories which contains the two level Header is taken from the Rows 1 & 2. Then enable 'Multi-Level Category Labels' for the X Axis. The code sample below assumes your data is contained in the range A1:G12, Code Sample I from openpyxl.chart import ( BarChart, Reference, Series ) from openpyxl import load_workbook from openpyxl.chart.layout import Layout, ManualLayout wb = load_workbook("book1.xlsx") ws = wb.active ### Chart Type barChart = BarChart() ### Initial data Series m1 B3:G3 data = Reference(ws, min_col=1, min_row=3, max_row=3, max_col=7) barChart.add_data(data, titles_from_data=True, from_rows=True) ### Data Categories (B1:G2), 2 sub header categories = Reference(ws, min_col=2, max_col=7, min_row=1, max_row=2) ### Add Categories to Chart barChart.set_categories(categories) ### Set X Axis multilevel label to enabled barChart.x_axis.noMultiLvlLbl = False ### Add additional Series m2 - m10 ### Create a Series for each row from m2 (row 4) to m10 (row 12) for i in range(4, 13): # 13 is ws.max_row+1 data = Reference(ws, min_col=1, min_row=i, max_col=7, max_row=i) ### Use the header in Column A as the title series = Series(data, title_from_data=True) ### Add the created Series to the chart barChart.append(series) ### Chart Settings barChart.type = "col" barChart.grouping = "clustered" barChart.title = "XYZ" barChart.y_axis.title = "service" barChart.x_axis.title = "month" # barChart.style = 5 # Use Chart Style required. 
This example leaves as default barChart.width = 20 barChart.height = 10 ### Additional Chart Settings ### Set the 'Series Overlap', gap between columns barChart.overlap = -10 # -10 is -10% # Enable Axis for display barChart.x_axis.delete = False barChart.y_axis.delete = False ### Set the legend position at the bottom of the chart barChart.legend.position = "b" ### Adjust the layout to ensure enough space for the legends at the bottom of the chart barChart.layout = Layout( manualLayout=ManualLayout( x=0, # x position of the plot area y=0, # y position of the plot area h=0.65, # height of the plot area w=0.9, # width of the plot area ) ) ### Add Chart to Sheet ws.add_chart(barChart, "K8") ### Save workbook wb.save('test.xlsx') Code Sample II Add all Series as one data reference from openpyxl.chart import ( BarChart, Reference ) from openpyxl import load_workbook from openpyxl.chart.layout import Layout, ManualLayout wb = load_workbook("book1.xlsx") ws = wb.active ### Chart Type barChart = BarChart() ### Initial data Series' B3:G12 data = Reference(ws, min_col=1, min_row=3, max_row=12, max_col=7) barChart.add_data(data, titles_from_data=True, from_rows=True) ### Data Categories (B1:G2), 2 sub header categories = Reference(ws, min_col=2, max_col=7, min_row=1, max_row=2) ### Add Categories to Chart barChart.set_categories(categories) ### Set X Axis multilevel label to enabled barChart.x_axis.noMultiLvlLbl = False ### Chart Settings barChart.type = "col" barChart.grouping = "clustered" barChart.title = "XYZ" barChart.y_axis.title = "service" barChart.x_axis.title = "month" # barChart.style = 5 # Use Chart Style required. This example leaves as default barChart.width = 20 barChart.height = 10 ### Additional Chart Settings ### Set the 'Series Overlap', gap between columns barChart.overlap = -10 # -10 is -10% # Enable Axis for display barChart.x_axis.delete = False barChart.y_axis.delete = False ### Set the legend position at the bottom of the chart barChart.legend.position = "b" ### Adjust the layout to ensure enough space for the legends at the bottom of the chart barChart.layout = Layout( manualLayout=ManualLayout( x=0, # x position of the plot area y=0, # y position of the plot area h=0.65, # height of the plot area w=0.9, # width of the plot area ) ) ### Add Chart to Sheet ws.add_chart(barChart, "K8") ### Save workbook wb.save('test.xlsx') The following is the expected Chart produced. | 3 | 4 |
79,079,685 | 2024-10-11 | https://stackoverflow.com/questions/79079685/tkinter-text-widget-unexpected-overflow-inside-grid-manager | Assume a simple 3row-1col grid, where 2nd widget is a label, while 1st and 3rd widgets are Text. sticky and weight settings are most certainly correct. Grid dimensions are defined and shouldn't be dictated by its content. The problem is that Texts in 1st and 3rd rows share the space as if the Label in the 2nd row didn't exist. Both Texts occupy half the grid height each. Weirder still, the Label is certainly there. You can see it if the grid gets stretched enough to exceed default Text height, which is around 24 lines. I will appreciate any clarification on this weird behavior. I am open to alternatives (pack?) that would allow me to combine Text-Label-Text in one column so that each one takes all the width available, Label takes minimal necessary height, and Texts share the remaining grid height equally. What I've tried The docs (1,2) regarding the grid manager show, that parent.rowconfigure(n, weight=1) for nth row ensures correct resizing, while child.grid(row=r, column=c, sticky="news") stretches the widget in the r,c cell of the grid. Sadly, all the other SO questions dance around these concepts, which fail to help in this situation. I've made a test application. If you run it with with_text=False, you can see that grid height is distributed as expected between three Labels. If you then run it with with_text=True), you can see that 1st and 3rd rows with Text widgets occupy half the grid height both. If you stretch the app enough vertically, the 2nd row with the Label does appear. from tkinter import * import tkinter.font as tkFont def application(master, with_text): for r in range(3): master.rowconfigure(r, weight=1) for c in range(1): master.columnconfigure(c, weight=1) if r == 1 and c == 0: lbl = Label(master, text="Hello") lbl.grid(row=r, column=c, sticky="news") continue if with_text: lbl = Text(master, bd=3, relief=SUNKEN) lbl.grid(row=r, column=c, sticky="news") lbl.insert(END, 1000 + r + c) else: lbl = Label(master, text="Hi") lbl.grid(row=r, column=c, sticky="news") root = Tk() root.geometry("200x100") root.title('Text grid overflow') mono_font = tkFont.nametofont("TkFixedFont") mono_font.configure(size=8) with_text=False application(root, with_text) root.mainloop() with_text = False with_text = True with_text=True, stretched vertically | Overview The factors that contribute to the problem: You didn't specify a width or height for the text widget, so it will default to 80x24 characters wide/tall. You are forcing the window to be a specific size that is too small to fit everything at their requested size. grid will attempt to fit everything into the window, and when it can't it will start reducing the size of the widgets in each row and column proportional to its weight. all of your rows have the same weight This is what happens: grid tries to fit two 80 character tall widgets and one one-character tall widget into the window. They won't fit because you forced the window to be 200x100 pixels. So, it has to start shrinking each widget down from its preferred size to make them fit. To do this, it takes one pixel from each widget since they all share the same weight. If it still doesn't fit, it takes one more from each widget. After a dozen or so attempts, the label will become zero in height so it no longer is visible. It then continues to shrink the text widgets one pixel at a time until they fit in the window. 
Note: this behavior is documented in the grid man page, where it states "For containers whose size is smaller than the requested layout, space is taken away from columns and rows according to their weights." Keeping the label visible To keep the label at the minimum size and have the text widgets expand to take up all of the extra space, give the row with the label a weight of zero. That will force the grid algorithm to skip over the label when removing pixels in order to make everything fit into the window. master.rowconfigure(1, weight=0) | 2 | 3 |
79,079,693 | 2024-10-11 | https://stackoverflow.com/questions/79079693/how-to-reset-a-key-in-a-defaultdict-many-times-in-a-row | d = defaultdict(str) d['one'] = 'hi' del d['one'] del d['one'] The second del raises a KeyError. d.pop('one') has the same problem. Is there a concise way to make the defaultdict reset-to-default a keypair? if 'one' in d: del d['one'] is more verbose than I would like. | The below should work (pop with None) The pop() method allows you to specify a default value that gets returned if the key is not found, avoiding a KeyError exception. from collections import defaultdict d = defaultdict(str) d['one'] = 'hi' d.pop('one', None) d.pop('one', None) print('Done') | 2 | 5 |
79,078,515 | 2024-10-11 | https://stackoverflow.com/questions/79078515/pandas-vectorized-function-to-find-time-to-grow-n-from-starting-cell | I have a pandas DataFrame with a time series (in this case price of a used car model) and am looking for a vectorized function to map each cell to the time it takes the price to grow n percent from that cell (if it never reaches n% more then return nan) It should in theory be possible to execute in a vectorized way as the output of each row is independent of what comes before it Here is a sample of the data and expected output import numpy as np import pandas as pd import datetime df = pd.DataFrame( [ 1490.47, 1492.98, 1494.69, 1497.43, 1499.02, 1503.29, 1501.60, 1502.80, 1502.30, 1509.38, 1512.01, 1508.98, 1512.63, ], columns=['price'], ) df.index.names = ['time'] n=1/100 So in this case I want to run for n=1/100 so compute for each cell the time it takes the price to increase by 1%. So for the first cell, 1% grows would be at 1490.47*1.01 = 1505.3747, the first cell greater than this value is 1509.38 which is 9 cells after the first cell so the output for that cell would be 9, and so on for the rest... Expected output would then be: df.some_functions(n=1/100) print(df) price time 0 9 1 8 2 8 3 nan 4 nan 5 nan 6 nan 7 nan 8 nan 9 nan 10 nan 11 nan 12 nan The latter 10 being nan because the price does not grow to greater than 1% of their cell in the remaining DataFrame. | Another option is to use numba (you can even easily parallelize the task): import numba @numba.njit(parallel=True) def search(price, n, out): for idx in numba.prange(len(price)): p = price[idx] search_for = p * n for idx2, v in enumerate(price[idx:]): if v >= search_for: out[idx] = idx2 break df["out"] = np.nan search(df["price"].values, 1.01, df["out"].values) print(df) Prints: price out time 0 1490.47 9.0 1 1492.98 8.0 2 1494.69 8.0 3 1497.43 9.0 4 1499.02 NaN 5 1503.29 NaN 6 1501.60 NaN 7 1502.80 NaN 8 1502.30 NaN 9 1509.38 NaN 10 1512.01 NaN 11 1508.98 NaN 12 1512.63 NaN On my AMD 5700x it took ~0.8seconds to compute dataframe with 1_000_000 np.random.uniform(1000, 2000, 1_000_000) values. | 1 | 2 |
79,078,287 | 2024-10-11 | https://stackoverflow.com/questions/79078287/groupby-a-df-column-based-on-other-column-and-add-a-default-value-to-everylist | I have an df which has 2 columns lets say Region and Country. Region Country ================================ AMER US AMER CANADA APJ INDIA APJ CHINA I have grouped the unique Country list for each Region using the code and o/p like below: df.drop_duplicates().groupby("Region")['Country'].agg(lambda x: sorted(x.unique().tolist())).to_dict() OUTPUT { 'AMER': ['US', 'CANADA'], 'APJ': ['INDIA', 'CHINA'] } Is there way with to add a default value "ALL" to every list? ============================================================= EDIT: I have a similar situation again and really would need some help here. I have an df which has 3 columns lets say Region, Country and AREA_CODE. Region Country AREA_CODE =================================== AMER US A1 AMER CANADA A1 AMER US B1 AMER US A1 I want to get the output like list of AREA_CODE for each country under each Region with 'ALL' as list value as well. something like { "AMER": { "US": ["ALL", "A1", "B1"], "CANADA": ["ALL", "A1"] } } So far i have tried to groupby both region and country column and then tried to group & agg it by AREA_CODE, it is throwing error df.drop_duplicates().groupby(["Region", "Country"]).groupby("Country")['AREA_CODE'].agg(lambda x: ["ALL"]+sorted(x.unique().tolist())).to_dict() Could someone kindly help me with this. Thanks, | You can add ['ALL'] in your agg: (df.drop_duplicates().groupby("Region")['Country'] .agg(lambda x: sorted(x.unique().tolist())+['ALL']) .to_dict() ) Also note you you don't really need agg and could use a dictionary comprehension: out = {k: sorted(v.unique().tolist())+['ALL'] for k, v in df.drop_duplicates().groupby("Region")['Country']} Output: {'AMER': ['CANADA', 'US', 'ALL'], 'APJ': ['CHINA', 'INDIA', 'ALL']} | 2 | 4 |
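For the nested Region/Country/AREA_CODE structure asked about in the edit, which the answer above does not cover, one possible sketch (assuming the column names from the edit) is a two-level groupby wrapped in a dictionary comprehension:

```python
# df here is the edited frame with columns Region, Country, AREA_CODE
out = {
    region: {
        country: ['ALL'] + sorted(grp['AREA_CODE'].unique().tolist())
        for country, grp in region_df.groupby('Country')
    }
    for region, region_df in df.drop_duplicates().groupby('Region')
}
# {'AMER': {'CANADA': ['ALL', 'A1'], 'US': ['ALL', 'A1', 'B1']}}
```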
79,077,573 | 2024-10-11 | https://stackoverflow.com/questions/79077573/sort-all-rows-with-a-certain-value-in-a-group-to-the-last-place-i-the-group | I try to sort all rows with a certain value in a group to the last place in every group. data = {'a':[1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 3], 'b':[100, 300, 200, 222, 500, 300, 222, 100, 200, 222, 300, 500, 400, 100], 'c':[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]} df1 = pd.DataFrame(data) df1 Out[29]: a b c 0 1 100 1 1 1 300 2 2 1 200 3 3 1 222 4 4 1 500 5 5 2 300 6 6 2 222 7 7 2 100 8 8 3 200 9 9 3 222 10 10 3 300 11 11 3 500 12 12 3 400 13 13 3 100 14 should be: Out[31]: a b c 0 1 100 1 1 1 300 2 2 1 200 3 3 1 500 5 4 1 222 4 5 2 300 6 6 2 100 8 7 2 222 7 8 3 200 9 9 3 300 11 10 3 500 12 11 3 400 13 12 3 100 14 13 3 222 10 one of my attempts is: df1 = df1['b'].eq[222].sort(position='last').groupby(df1['a']) But I haven't found a solution yet | Use double DataFrame.sort_values - first by b with key parameter and then by a column with kind parameter: out = (df1.sort_values('b', key = lambda x: x==222) .sort_values('a', ignore_index=True, kind='stable')) print (out) a b c 0 1 100 1 1 1 300 2 2 1 200 3 3 1 500 5 4 1 222 4 5 2 300 6 6 2 100 8 7 2 222 7 8 3 200 9 9 3 300 11 10 3 500 12 11 3 400 13 12 3 100 14 13 3 222 10 Solution with helper column should be faster - added by DataFrame.assign and removed by DataFrame.drop: out = df1.assign(tmp = df1['b'].eq(222)).sort_values(['a','tmp']).drop('tmp', axis=1) print (out) a b c 0 1 100 1 1 1 300 2 2 1 200 3 4 1 500 5 3 1 222 4 5 2 300 6 7 2 100 8 6 2 222 7 8 3 200 9 10 3 300 11 11 3 500 12 12 3 400 13 13 3 100 14 9 3 222 10 Or use np.lexsort for positions and change order by DataFrame.iloc: out = df1.iloc[np.lexsort([df1['b'].eq(222), df1.a])] print (out) a b c 0 1 100 1 1 1 300 2 2 1 200 3 4 1 500 5 3 1 222 4 5 2 300 6 7 2 100 8 6 2 222 7 8 3 200 9 10 3 300 11 11 3 500 12 12 3 400 13 13 3 100 14 9 3 222 10 For default index add DataFrame.reset_index with drop=True: out = out.reset_index(drop=True) print (out) a b c 0 1 100 1 1 1 300 2 2 1 200 3 3 1 500 5 4 1 222 4 5 2 300 6 6 2 100 8 7 2 222 7 8 3 200 9 9 3 300 11 10 3 500 12 11 3 400 13 12 3 100 14 13 3 222 10 | 3 | 4 |
79,076,480 | 2024-10-11 | https://stackoverflow.com/questions/79076480/groupby-and-aggregate-based-on-condition | My input data: df=pd.DataFrame({'ID':['A','B','C','D'], 'Group':['group1','group1','group2','group2'], 'Flag_1':[1,0,0,1], 'Flag_2':[1,1,0,1], 'Value':[30,40,60,70] }) I am trying to add up "Value" per group when flag is equal to 1. My expected output is: df_value_group=pd.DataFrame({ 'Flag_1 Sum':[1,1], 'Flag_2 Sum':[2,1], 'Value_1 Sum':[30,70], 'Value_2 Sum':[70,70]}, index=['group1','group2']) I have tried this but it throws me an AssertionError error primarly due to the latter two lambda function. df.groupby('Group').agg( **{ 'Flag_1 Sum': ('Flag_1','sum'), 'Flag_2 Sum': ('Flag_2','sum'), 'Value_1 Sum': ('Flag_1', lambda col: df.loc[col.eq(1), 'Value'].sum()), 'Value_2 Sum': ('Flag_2', lambda col: df.loc[col.eq(1), 'Value'].sum()) }) | For a generic approach, you could use a custom groupby.agg (named aggregation): cols = df.columns[df.columns.str.startswith('Flag_')] val = df['Value'] out = (df.groupby('Group', as_index=False) .agg(**({f'{c} Sum': (c, lambda x: x.sum()) for c in cols} |{f'Value{c[4:]} Sum': (c, lambda x: val[x.index][x==1].sum()) for c in cols} ) ) ) NB. lambda x: val[x.index][x==1].sum() could be replaced by lambda x: val.where(x==1).sum(). Or reshape with melt and aggregate with pivot_table: tmp = (df .melt(['ID', 'Group', 'Value'], var_name='flag', value_name='bool') .query('bool == 1') .pivot_table(index='Group', columns='flag', aggfunc='sum', fill_value=0, ) ) out = (pd.concat([tmp['bool'], tmp['Value'].rename(columns=lambda x: x.replace('Flag', 'Value')) ], axis=1) .reset_index() .rename_axis(columns=None) ) Output: Group Flag_1 Flag_2 Value_1 Value_2 0 group1 1 2 30 70 1 group2 1 1 70 70 | 1 | 1 |
79,075,564 | 2024-10-10 | https://stackoverflow.com/questions/79075564/what-is-the-best-way-to-fit-a-quadratic-polynomial-to-p-dimensional-data-and-com | I have been trying to use the scikit-learn library to solve this problem. Roughly: from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression # Make or load an n x p data matrix X and n x 1 array y of the corresponding # function values. poly = PolynomialFeatures(degree=2) Xp = poly.fit_transform(X) model = LinearRegression() model.fit(Xp, y) # Approximate the derivatives of the gradient and Hessian using the relevant # finite-difference equations and model.predict. As the above illustrates, sklearn makes the design choice to separate polynomial regression into PolynomialFeatures and LinearRegression rather than combine these into a single function. This separation has conceptual advantages but also a major drawback: it effectively prevents model from offering the methods gradient and hessian, and model would be significantly more useful if it did. My current work-around uses finite-difference equations and model.predict to approximate the elements of the gradient and Hessian (as described here). But I don't love this approach — it is sensitive to floating-point error and the "exact" information needed to build the gradient and Hessian is already contained in model.coef_. Is there any more elegant or accurate method to fit a p-dimensional polynomial and find its gradient and Hessian within Python? I would be fine with one that uses a different library. | To compute the gradient or the Hessian of a polynomial, one needs to know exponents of variables in each monomial and the corresponding monomial coefficients. The first piece of this information is provided by poly.powers_, the second by model.coef_: from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression import numpy as np np.set_printoptions(precision=2, suppress=True) X = np.arange(6).reshape(3, 2) y = np.arange(3) poly = PolynomialFeatures(degree=2) Xp = poly.fit_transform(X) model = LinearRegression() model.fit(Xp, y) print("Exponents:") print(poly.powers_.T) print("Coefficients:") print(model.coef_) This gives: Exponents: [[0 1 0 2 1 0] [0 0 1 0 1 2]] Coefficients: [ 0. 0.13 0.13 -0.12 -0. 0.13] The following function can be then used to compute the gradient at a point given by an array x: def gradient(x, powers, coeffs): x = np.array(x) gp = np.maximum(0, powers[:, np.newaxis] - np.eye(powers.shape[1], dtype=int)) gp = gp.transpose(1, 2, 0) gc = coeffs * powers.T return (((x[:, np.newaxis] ** gp).prod(axis=1)) * gc).sum(axis=1) For example, we can use it to compute the gradient at the point [0, 1]: print(gradient([0, 1], poly.powers_, model.coef_)) This gives: [0.13 0.38] The Hessian at a given point can be computed in a similar way. | 6 | 3 |
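The Hessian mentioned at the end of the answer can be assembled from the same poly.powers_ and model.coef_. Here is a deliberately unvectorized sketch (reusing poly, model and the numpy import from the answer) that simply differentiates each monomial twice:

```python
def hessian(x, powers, coeffs):
    """Hessian of the fitted polynomial at point x (plain-loop sketch)."""
    x = np.asarray(x, dtype=float)
    p = powers.shape[1]                     # number of input variables
    H = np.zeros((p, p))
    for c, e in zip(coeffs, powers):        # one monomial: c * prod(x**e)
        for i in range(p):
            for j in range(p):
                d = e.astype(float)         # exponents after differentiation
                factor = c
                for k in (i, j):            # d/dx_i then d/dx_j
                    factor *= d[k]
                    d[k] -= 1
                if factor != 0:
                    H[i, j] += factor * np.prod(x ** d)
    return H

print(hessian([0, 1], poly.powers_, model.coef_))
```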
79,075,204 | 2024-10-10 | https://stackoverflow.com/questions/79075204/qt-pyside6-why-is-worker-signal-stop-not-received-in-worker-thread | I've read the following material to get a better understanding what things must be considered when working with threads in Qt: https://doc.qt.io/qt-6/thread-basics.html https://wiki.qt.io/Threads_Events_QObjects https://www.haccks.com/posts/how-to-use-qthread-correctly-p1/ https://www.haccks.com/posts/how-to-use-qthread-correctly-p2/ https://www.kdab.com/wp-content/uploads/stories/multithreading-with-qt-1.pdf This gave me a good inside how Qt's event loop work, under which circumstances it is blocked and how threads can be used by either subclassing QThread() or by using QObject's moveToThread(). Now, I am converting this PySide6 example which uses an endless loop in worker's run() method (and therefore blocking the event loop of the thread) to use a QTimer so that the event loop is not blocked anymore. In the example, a counter counts from one to five and prints the progress to stdout: I have two questions: When the QTimer() is instantiated in the constructor, I get a QObject::startTimer: Timers cannot be started from another thread error (see the whole code at the end of the post). Why? According to moveToThread() doc also all children are moved to the new thread and therefore also the timer from the constructor (and Start counting sends a signal to the run() slot which should live in the newly created thread). What's going wrong here? class WorkerQTimerBased(QObject): finished = Signal() signal_stop = Signal() def __init__(self, parent: QObject = None): super().__init__(parent) self.counter = 1 # Question 1: Why is it not allowed to create a QTimer object in the constructor? # This provokes a "QObject::startTimer: Timers cannot be started from another thread"! self.timer = QTimer() self.timer.setInterval(1000) self.timer.timeout.connect(self.timer_timeout) self.signal_stop.connect(self.stop) ... The stop_counting() slot is invoked when clicking on Stop. The slot shall send a self.worker.signal_stop.emit() to the worker. But, the signal is received only when the print() statement is uncommented. Why? @Slot() def stop_counting(self): if self.worker and self.worker_thread: self.worker.signal_stop.emit() # print("emitted()") # Question 2: Why is the signal not emitted/received? It is emitted/received when this line is uncommented. self.worker_thread.quit() self.worker_thread.wait() The following code is a minimal (partially) working example: Clicking "Start Counting" works. Clicking "Stop" does not work. The code contains a Question 1 and a Question 2 marker to denote the code part which is relevant for the respective question. import sys from PySide6.QtCore import QObject, QThread, QTimer, Signal, Slot from PySide6.QtWidgets import QApplication, QMainWindow, QPushButton, QVBoxLayout, QWidget class WorkerQTimerBased(QObject): finished = Signal() signal_stop = Signal() def __init__(self, parent: QObject = None): super().__init__(parent) self.counter = 1 # Question 1: Why is it not allowed to create a QTimer object in the constructor? # This provokes a "QObject::startTimer: Timers cannot be started from another thread"! 
# self.timer = QTimer() # self.timer.setInterval(1000) # self.timer.timeout.connect(self.timer_timeout) self.timer = None self.signal_stop.connect(self.stop) def run(self): self.timer = QTimer() self.timer.setInterval(1000) self.timer.timeout.connect(self.timer_timeout) self.timer.start() @Slot() def timer_timeout(self) -> None: print(self.counter) self.counter += 1 if self.counter > 5: self.stop() @Slot() def stop(self): print("signal_stop received") if self.timer: self.timer.stop() self.finished.emit() class MainWindowQTimerBased(QMainWindow): def __init__(self): super().__init__() self.setWindowTitle("Counting Application (QTimer-based)") self.start_button = QPushButton("Start Counting") self.start_button.clicked.connect(self.start_counting) self.stop_button = QPushButton("Stop") self.stop_button.clicked.connect(self.stop_counting) layout = QVBoxLayout() layout.addWidget(self.start_button) layout.addWidget(self.stop_button) widget = QWidget() widget.setLayout(layout) self.setCentralWidget(widget) self.worker = None self.worker_thread = None @Slot() def start_counting(self): self.start_button.setEnabled(False) self.worker_thread = QThread() self.worker = WorkerQTimerBased() self.worker.moveToThread(self.worker_thread) self.worker.finished.connect(self.counting_finished) self.worker.finished.connect(self.worker_thread.quit) self.worker_thread.started.connect(self.worker.run) # self.worker_thread.finished.connect(self.worker.deleteLater) # self.worker_thread.finished.connect(self.worker_thread.deleteLater) self.worker_thread.finished.connect(self.clean_up_threading_resources) self.worker_thread.start() @Slot() def clean_up_threading_resources(self): self.worker.deleteLater() self.worker_thread.deleteLater() self.worker = None self.worker_thread = None @Slot() def stop_counting(self): if self.worker and self.worker_thread: self.worker.signal_stop.emit() # print("emitted()") # Question 2: Why is the signal not emitted/received? self.worker_thread.quit() self.worker_thread.wait() @Slot() def counting_finished(self): print("Counting finished") self.start_button.setEnabled(True) @Slot() def closeEvent(self, event): self.stop_counting() event.accept() if __name__ == "__main__": app = QApplication(sys.argv) window = MainWindowQTimerBased() window.show() sys.exit(app.exec()) | First of all, you have to remember that PySide (as PyQt, on which it is based) is a Python binding around the Qt library, which is a compiled library written in C++: this means that the "Qt side" has absolutely no knowledge of what Python does on its own, unless it directly interacts with the library. According to moveToThread() doc also all children are moved to the new thread and therefore also the timer from the constructor That's true, but you're considering that aspect based on a wrong assumption, because of this: class WorkerQTimerBased(QObject): def __init__(self, parent: QObject = None): ... self.timer = QTimer() self.timer is just a Python reference to a wrapped QTimer object. In Python terms, it's a member of a WorkerQTimerBased instance, but Qt has absolutely no way to know that relation (nor it should): for Qt, that's just an "orphan" (not parented) object that lives in the thread in which it was created; to it, the QTimer and the WorkerQTimerBased instance are two completely separated objects with absolutely no relation between them. Since self.worker = WorkerQTimerBased() is called from the main thread, this means that the above QTimer exists in that same thread (the main one, in this case). 
Calling moveToThread() on the worker will have absolutely no effect on the QTimer, because Qt knows absolutely nothing about that relation. Doing self.timer = QTimer() does NOT make the QTimer an actual child of WorkerQTimerBased. The only way to make a Qt object as a child of another is by creating it along with the parent in the constructor (or, eventually, using setParent()). If you want the QTimer to work within its thread (which is required in order to properly call its methods), then there are only three options: create it with the parent in the constructor (self.timer = QTimer(self)); create it in the function that is actually run first in the thread (run() in this case); create it in the __init__ (or any different thread context), add custom signals for the worker, connect them to the QTimer slots (eg: start() or stop()) and emit those signals when required, even in "threaded" functions: Qt's thread affinity will automatically queue those signals to the timer, if its thread if it's not the same of the object emitting the signal; Finally, keep in mind that: "event loop" is an abstract concept: it fundamentally is something that waits for some other thing to happen (eg: "continuously check if variable x is equal to y", or "wait w time for x to be something, otherwise do z"); in Qt there can be more than one event loop, even in the same thread; a real Qt event loop uses QEventLoop, which only works with actual Qt events, and doesn't normally block the program flow of other loops (no matter if they're in the same thread or not); for instance: QDialog's exec() has its own event loop, it's still in the main thread and, theoretically, you can also have "nested" event loops (a child dialog's exec() run within a parent one, even though that practice is often discouraged); not only QThread has its own QEventLoop (which is what allows it to receive signals from other threads), but it's also the foundation of many loop-based ("exec() based") Qt objects, most importantly including the "Qt application" (QCoreApplication, QGuiApplication and QApplication); QThread, on its own, never blocks anything, unless something run within it is: for instance, an infinite and never-releasing while loop in a function connected to its started signal; one of the benefits and drawbacks of Python is the GIL, which prevents possible threading concurrency that Qt could eventually allow; this means that while a "pure Qt" program could have blocking threads that would still allow others to work, that's completely impossible with Qt Python bindings such as PySide; I have to stress this again: PySide (and PyQt) are bindings around the Qt library. The "Qt side" knows absolutely nothing about what done on the Python side, unless it actually interacts with the binding. The limitations of Python (performance, object persistence, GIL, object referencing etc.) must be considered as possible bottlenecks, even in light of the possible the interaction/relation with the Qt library is explicitly documented (eg. "virtual" method overriding). | 2 | 2 |
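As a concrete illustration of the first option listed above, here is a sketch of the question's worker with the timer created as a real Qt child in __init__ (imports as in the question; timer_timeout() and stop() stay unchanged). Because the timer has the worker as parent, moveToThread() carries it along, and start() is still only called from run(), i.e. inside the worker thread:

```python
class WorkerQTimerBased(QObject):
    finished = Signal()
    signal_stop = Signal()

    def __init__(self, parent: QObject = None):
        super().__init__(parent)
        self.counter = 1
        # Passing self as parent makes the timer a real Qt child,
        # so moveToThread() on the worker moves the timer too.
        self.timer = QTimer(self)
        self.timer.setInterval(1000)
        self.timer.timeout.connect(self.timer_timeout)
        self.signal_stop.connect(self.stop)

    @Slot()
    def run(self):
        # Invoked via worker_thread.started, i.e. already in the worker
        # thread, so starting the timer here is safe.
        self.timer.start()
```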
79,072,098 | 2024-10-9 | https://stackoverflow.com/questions/79072098/solving-leetcodes-1813-sentence-similarity-iii-using-regexes | I'm trying to solve this problem from Leetcode using regexes (just for fun): You are given two strings sentence1 and sentence2, each representing a sentence composed of words. A sentence is a list of words that are separated by a single space with no leading or trailing spaces. Each word consists of only uppercase and lowercase English characters. Two sentences s1 and s2 are considered similar if it is possible to insert an arbitrary sentence (possibly empty) inside one of these sentences such that the two sentences become equal. Note that the inserted sentence must be separated from existing words by spaces. For example, s1 = "Hello Jane" and s2 = "Hello my name is Jane" can be made equal by inserting "my name is" between "Hello" and "Jane" in s1. s1 = "Frog cool" and s2 = "Frogs are cool" are not similar, since although there is a sentence "s are" inserted into s1, it is not separated from "Frog" by a space. Given two sentences sentence1 and sentence2, return true if sentence1 and sentence2 are similar. Otherwise, return false. Assuming small and big are the smaller and bigger sentences respectively, it's easy to check whether small is a prefix/suffix of big: if re.match(f'^{small} .*$', big) or re.match(f'^.* {small}$', big): print('similar') How to best check if the sentences can be made equal by inserting a new sentence in the middle of small using a regex? | My idea needs the strings to be sorted descending by length before using regex. Then concatenate s1 and s2 by newline and check what single part of s1 could be omitted to match s2 (in next line). import re def isSimilar(s1, s2): regex = r'(?is)\A(?!.* )(\b[a-z ]*)(\b[a-z ]*)(\b[a-z ]*)\n\1\b\3\Z' # sort strings by length desc s = sorted([s1,s2], key=len, reverse=True) # concat by newline and match pattern m = re.match(regex, s[0]+'\n'+s[1]) if(m): return 'similar -> ' + m.group(2).strip() else: return 'not similar!' See this Python demo at tio.run or a regex demo at regex101 (differs a bit for showcase) s1 = "Hello Jane" s2 = "Hello my name is Jane" print(isSimilar(s1, s2)) will output: similar -> my name is Note that this was just playing around, the regex pattern is not efficient. It uses three capture groups and checks if the line after the newline can be completed by the captures of the first and third group (omitting second group). The allowed character-set inside each capture group is set to [a-z ] starting with a word boundary. The lookahead at start checks for not more than one space. | 2 | 2 |
79,071,739 | 2024-10-9 | https://stackoverflow.com/questions/79071739/optimizing-variable-combinations-to-maximize-a-classification | I am working with a dataset where users interact via an app or a website, and I need to determine the optimal combination of variables (x1, x2, ... xn) that will maximize the number of users classified as "APP Lovers." According to the business rule, a user is considered an "APP Lover" if they use the app more than 66% of the time. Here’s a simplified example of the data structure: import polars as pl df = pl.DataFrame({ "ID": [1, 2, 3, 1, 2, 3, 1, 2, 3], "variable": ["x1", "x1", "x1", "x2", "x2", "x2", "x3", "x3", "x3"], "Favourite": ["APP", "APP", "WEB", "APP", "WEB", "APP", "APP", "APP", "WEB"] }) In this dataset, each ID represents a user, and variable refers to the function (e.g., x1, x2, x3), with Favourite indicating whether the function was executed via the app or the website. I pivot the data to count how many actions were performed via APP or WEB: ( df .pivot( index=["ID"], on="Favourite", values=["variable"], aggregate_function=pl.col("Favourite").len() ).fill_null(0) ) Output: shape: (3, 3) ┌─────┬─────┬─────┐ │ ID ┆ APP ┆ WEB │ │ --- ┆ --- ┆ --- │ │ i64 ┆ u32 ┆ u32 │ ╞═════╪═════╪═════╡ │ 1 ┆ 3 ┆ 0 │ │ 2 ┆ 2 ┆ 1 │ │ 3 ┆ 1 ┆ 2 │ └─────┴─────┴─────┘ Next, I calculate the proportion of app usage for each user and classify them: ( df2 .with_columns( Total = pl.col("APP") + pl.col("WEB") ) .with_columns( Proportion = pl.col("APP") / pl.col("Total") ) .with_columns( pl .when(pl.col("Proportion") >= 0.6).then(pl.lit("APP Lover")) .when(pl.col("Proportion") > 0.1).then(pl.lit("BOTH")) .otherwise(pl.lit("Inactive")) ) ) shape: (3, 6) ┌─────┬─────┬─────┬───────┬────────────┬───────────┐ │ ID ┆ APP ┆ WEB ┆ Total ┆ Proportion ┆ literal │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ u32 ┆ u32 ┆ u32 ┆ f64 ┆ str │ ╞═════╪═════╪═════╪═══════╪════════════╪═══════════╡ │ 1 ┆ 3 ┆ 0 ┆ 3 ┆ 1.0 ┆ APP Lover │ │ 2 ┆ 2 ┆ 1 ┆ 3 ┆ 0.666667 ┆ APP Lover │ │ 3 ┆ 1 ┆ 2 ┆ 3 ┆ 0.333333 ┆ BOTH │ └─────┴─────┴─────┴───────┴────────────┴───────────┘ The challenge: In my real dataset, I have at least 19 different x variables. Yesterday asked, I tried iterating over all possible combinations of these variables to filter out the ones that result in the highest number of "APP Lovers," but the number of combinations (2^19) is too large to compute efficiently. Question: How can I efficiently determine the best combination of xn variables that maximizes the number of "APP Lovers"? I'm looking for guidance on how to approach this in terms of algorithmic optimization or more efficient iterations. 
| Here's my suggestion, take the data: df = pl.DataFrame({ "id": [1, 2, 3, 1, 2, 3, 1, 2, 3], "variable": ["x1", "x1", "x1", "x2", "x2", "x2", "x3", "x3", "x3"], "favorite": ["APP", "APP", "WEB", "APP", "WEB", "APP", "APP", "APP", "WEB"] }) and pivot it such that column xi is true if user id uses that action primarily through the app: action_through_app = ( df .with_columns(pl.col.favorite == "APP") .pivot(index="id", on="variable", values="favorite") ) For example: shape: (3, 4) ┌─────┬───────┬───────┬───────┐ │ id ┆ x1 ┆ x2 ┆ x3 │ │ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ bool ┆ bool ┆ bool │ ╞═════╪═══════╪═══════╪═══════╡ │ 1 ┆ true ┆ true ┆ true │ │ 2 ┆ true ┆ false ┆ true │ │ 3 ┆ false ┆ true ┆ false │ └─────┴───────┴───────┴───────┘ Now we can efficiently query if given some combination variables how many users would be app lovers by summing the relevant columns and checking if their sums are >= 0.6 * the number of columns. def num_app_lovers(combination): return (pl.sum_horizontal(combination) >= 0.6*len(combination)).sum() action_through_app.select( num_app_lovers([pl.col.x1]).alias("x1"), num_app_lovers([pl.col.x2]).alias("x2"), num_app_lovers([pl.col.x3]).alias("x3"), num_app_lovers([pl.col.x1, pl.col.x2]).alias("x12"), num_app_lovers([pl.col.x2, pl.col.x3]).alias("x23"), num_app_lovers([pl.col.x1, pl.col.x3]).alias("x13"), num_app_lovers([pl.col.x1, pl.col.x2, pl.col.x3]).alias("x123"), ) shape: (1, 7) ┌─────┬─────┬─────┬─────┬─────┬─────┬──────┐ │ x1 ┆ x2 ┆ x3 ┆ x12 ┆ x23 ┆ x13 ┆ x123 │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ u32 ┆ u32 ┆ u32 ┆ u32 ┆ u32 ┆ u32 ┆ u32 │ ╞═════╪═════╪═════╪═════╪═════╪═════╪══════╡ │ 2 ┆ 2 ┆ 2 ┆ 1 ┆ 1 ┆ 2 ┆ 2 │ └─────┴─────┴─────┴─────┴─────┴─────┴──────┘ Now this lets you query combinations in bulk, but this still doesn't scale well to 2^19 possible combinations. For that problem I'd suggest using evolutionary programming. Initialize a pool of possible combinations with x1, x2, x3, ... xn. Then, randomly add or remove a column (if > 1 column) to each combination in your pool, and test them with the above query. Keep the top, say, 100 combinations. Repeat this process for a bunch of iterations until the result no longer improves, and return that. | 1 | 2 |
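A rough sketch of the evolutionary loop described in the last paragraph; the pool size, mutation rule and number of generations are arbitrary choices here, and variables would be the x-columns of action_through_app (everything except "id"):

```python
import random

def score(frame, combination):
    """Number of app lovers for a given tuple of column names."""
    cols = [pl.col(c) for c in combination]
    return frame.select(
        (pl.sum_horizontal(cols) >= 0.6 * len(cols)).sum()
    ).item()

def evolve(frame, variables, pool_size=100, generations=50):
    pool = {(v,) for v in variables}                 # start from single variables
    for _ in range(generations):
        new = set()
        for comb in pool:
            comb = set(comb)
            v = random.choice(variables)             # variables must be a list
            if v in comb and len(comb) > 1:
                comb.discard(v)                      # randomly drop a column ...
            else:
                comb.add(v)                          # ... or add one
            new.add(tuple(sorted(comb)))
        pool |= new
        pool = set(sorted(pool, key=lambda c: score(frame, c),
                          reverse=True)[:pool_size]) # keep the best candidates
    return max(pool, key=lambda c: score(frame, c))
```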
79,074,775 | 2024-10-10 | https://stackoverflow.com/questions/79074775/issue-with-latex-rendering-in-the-title-of-colorbar-plots-using-pythons-matplot | I am facing an issue with the title of the following colorbar plots using Python's Matplotlib library. There are two subplots. The title of the first one works well, i.e., LaTeX rendering is done successfully. However, it returns an error for the second one. fig, axs = plt.subplots(1, 2, figsize=(24, 10)) c1 = axs[0].contourf(x, y, zT, 60, cmap='seismic', vmin=-2.5, vmax=2.5) axs[0].set_title(r'$\zeta_T$ at t={}'.format(l)) fig.colorbar(c1, ax=axs[0]) c2 = axs[1].contourf(x, y, bcv, 40, cmap='seismic', vmin=-0.25, vmax=0.25) axs[1].set_title(r'$\sqrt{U_c^2 + V_c^2}$ at t={}'.format(l)) fig.colorbar(c2, ax=axs[1]) The error is as follows: KeyError Traceback (most recent call last) Cell In[27], line 80 79 c2 = axs[1].contourf(x, y, bcv, 40, cmap='seismic', vmin=-0.25, vmax=0.25) ---> 80 axs[1].set_title(r'$\sqrt{U_c^2 + V_c^2}$ at t={}'.format(l)) 81 fig.colorbar(c2, ax=axs[1]) KeyError: 'U_c^2 + V_c^2' I wonder why the LaTeX command $\sqrt{U_c^2+V_c^2}$ does not work in the title. Why is the LaTeX rendering not working in Matplotlib? I tried to fix it in several ways, but I consistently get that "KeyError". I used plt.rc('text', usetex=True) to enable LaTeX rendering in Matplotlib. But it did not work either. I want to fix this LaTeX rendering issue in Matplotlib. | This is LaTeX syntax clashing with function of pythons .format() method. The latter looks for curly brackets {...} in the string and operates on them, but there are two sets of curly brackets in r'$\sqrt{U_c^2 + V_c^2}$ at t={}' and the first contains something that does not fit with the .format() syntax. I suggest you split this up as two strings, with just the format method on the second one r'$\sqrt{U_c^2 + V_c^2}$ ' + 'at t={}'.format(l) Here is a complete working example of similar character (I don't have your data). import matplotlib.pyplot as plt import numpy as np plt.rc('text', usetex=True) x = np.linspace(-np.pi, np.pi, 100) t = 3.0 fig, ax = plt.subplots() ax.plot(x, np.sqrt(x**2+1)) ax.set_title(r'$\sqrt{x^2 + 1}$' + ' at {}'.format(t)) | 1 | 2 |
79,074,215 | 2024-10-10 | https://stackoverflow.com/questions/79074215/how-to-draw-a-rectangle-at-x-y-in-a-pyqt-graphicsview | I'm learning Python and Qt, and as an exercise I designed a simple window in QT Designer with a QGraphicsView which would represent a stack data structure. It should hold the items on the stack as a rectangle with a label representing the item, but my problem is that I can't position the rectangle at(x,y). Google and the documentation didn't help. Maybe it's a simple oversight or it's just me being dumb. This code draws a rectangle in the center of the QGraphicsView, and it should(if I understand correctly)place a 10x10 rectangle at 10,10: class Ui(QMainWindow): def __init__(self): super().__init__() uic.loadUi('Stack.ui', self) self.scene = QGraphicsScene() self.graphicsView.setScene(self.scene) self.scene.addRect(QRectF(10,10,10,10)) app=QApplication([]) window=Ui() window.show() app.exec() Stack.ui: <?xml version="1.0" encoding="UTF-8"?> <ui version="4.0"> <class>MainWindow</class> <widget class="QMainWindow" name="MainWindow"> <property name="geometry"> <rect> <x>0</x> <y>0</y> <width>800</width> <height>600</height> </rect> </property> <property name="windowTitle"> <string>MainWindow</string> </property> <widget class="QWidget" name="centralwidget"> <layout class="QGridLayout" name="gridLayout"> <item row="0" column="1"> <widget class="QLineEdit" name="lineEdit"> <property name="sizePolicy"> <sizepolicy hsizetype="Minimum" vsizetype="Fixed"> <horstretch>0</horstretch> <verstretch>0</verstretch> </sizepolicy> </property> </widget> </item> <item row="0" column="0" rowspan="3"> <widget class="QGraphicsView" name="graphicsView"/> </item> <item row="0" column="3"> <widget class="QPushButton" name="pushButton"> <property name="text"> <string>Push</string> </property> </widget> </item> <item row="1" column="3"> <widget class="QPushButton" name="pushButton_2"> <property name="text"> <string>Pop</string> </property> </widget> </item> </layout> </widget> <widget class="QMenuBar" name="menubar"> <property name="geometry"> <rect> <x>0</x> <y>0</y> <width>800</width> <height>24</height> </rect> </property> </widget> <widget class="QStatusBar" name="statusbar"/> </widget> <resources/> <connections/> </ui> | from PyQt5 import uic from PyQt5.QtCore import * from PyQt5.QtWidgets import * from PyQt5.QtCore import Qt class Ui(QMainWindow): def __init__(self): super().__init__() uic.loadUi('Stack.ui', self) self.scene = QGraphicsScene() self.graphicsView.setScene(self.scene) self.graphicsView.setSceneRect(0, 0, 250, 250) # adjust as needed self.graphicsView.setAlignment(Qt.AlignTop | Qt.AlignLeft) self.scene.addRect(QRectF(10, 10, 10, 10)) app = QApplication([]) window = Ui() window.show() app.exec() | 1 | 1 |
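The two lines added in the answer above are what actually move the rectangle out of the center; spelled out:

```python
# By default a QGraphicsView centers the scene's bounding rect, so a single
# small item always appears in the middle of the view.  Fixing the scene rect
# and aligning it top-left keeps scene coordinate (10, 10) near the view's
# top-left corner instead.
self.graphicsView.setSceneRect(0, 0, 250, 250)              # size is an arbitrary choice
self.graphicsView.setAlignment(Qt.AlignTop | Qt.AlignLeft)
```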
79,074,164 | 2024-10-10 | https://stackoverflow.com/questions/79074164/is-there-a-way-to-run-a-file-that-is-in-a-directory-with-a-special-character-in | My directory structure is C:\Users\...\[MATH] foldername So to change directories, as per some SE post I saw, I have to use: cd 'C:\Users\...\`[MATH`] foldername' which indeed changes to the required directory. Once here, I use py -m venv Project, and it creates the Project folder with the venv stuff correctly. When I try to run Scripts\activate, I get the following error: Line | 185 | $VenvDir = $VenvExecDir.Parent.FullName.TrimEnd("\\/") | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | You cannot call a method on a null-valued expression. Line | 138 | $pyvenvConfigPath = Join-Path -Resolve -Path $ConfigDir -ChildPat … | ~~~~~~~~~~ | Cannot bind argument to parameter 'Path' because it is an empty string. Line | 206 | … ose " Got leaf-name of $VenvDir='$(Split-Path -Path $venvDir -Leaf)' … | ~~~~~~~~ | Cannot bind argument to parameter 'Path' because it is an empty string.Line | 207 | $Prompt = Split-Path -Path $venvDir -Leaf | ~~~~~~~~ | Cannot bind argument to parameter 'Path' because it is an empty string. I don't really know how to proceed. Thanks in advance for help. If it helps I'm on a windows system. Basically, I was able to narrow down the problem to the fact that my directory has a special character in it. Using the same procedure for other paths that don't have this causes no issues. | Python's bundled venv module (v3.4+), as of v3.12.3, has a bug that prevents it from working properly in directories whose names contain [ / ] in PowerShell. These characters are metacharacters (characters with special meaning) in PowerShell wildcard expressions, and arguments passed to the -Path parameter (which is also the first positional parameter) of file system-related cmdlets such as Get-Item and Get-ChildItem are interpreted as wildcard expressions. To ensure verbatim interpretation of a path, use the -LiteralPath parameter instead. You have two workaround options: Bypass the problem by avoiding [ / ] in your project directory names. Manually fix ./Scripts/bin/Activate.ps1 once your project has been initialized with python -m venv ./Scripts. To do so, replace the line $VenvExecDir = Get-Item -Path $VenvExecPath with $VenvExecDir = Get-Item -LiteralPath $VenvExecPath | 3 | 5 |
79,072,235 | 2024-10-9 | https://stackoverflow.com/questions/79072235/plot-a-partially-transparent-plane-in-matplotlib | I want to plot a sequence of three colormaps in a 3D space, with a line crossing all the planes of the colormaps, as shown in the figure below. https://i.sstatic.net/65yOib6B.png To do that, I am using mpl.plot_surface to generate the planes and LinearSegmentedColormap to create a colormap that transitions from transparent to a specific color. However, when I plot the figure, a gray grid appears on my plot. How can I remove it? Ideally, the blue shade would appear on a completely transparent plane, but a lighter color could also work. Here is the code I used to generate the plot: import matplotlib.pyplot as plt import numpy as np from matplotlib.colors import LinearSegmentedColormap # Testing Data sigma = 1.0 mu = np.linspace(0,2, 10) x = np.linspace(-5, 5, 100) y = np.linspace(-5, 5, 100) X, Y = np.meshgrid(x, y) Z = [] for m in mu: Z.append(np.exp(-((X - m)**2 + (Y - m)**2) / (2 * sigma**2))) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') for i in [0, 5, -1]: cmap = LinearSegmentedColormap.from_list('custom_blue', [(1, 1, 1, 0), (0, 0, 1, 1)]) wmap = cmap(Z[i]/Z[i].max()) ax.plot_surface(mu[i] * np.ones(X.shape), X, Y,facecolors=wmap, alpha=1, antialiased=True, edgecolor='none') loc_max_x = [] loc_max_y = [] for i in range(len(mu)): loc_x = np.where(Z[i] == Z[i].max())[0][0] loc_y = np.where(Z[i] == Z[i].max())[1][0] loc_max_x.append(loc_x) loc_max_y.append(loc_y) ax.plot(mu, x[loc_max_x], y[loc_max_y], color='r') ax.set_box_aspect((3.4, 1, 1)) plt.savefig('3dplot.png', dpi=300) plt.show() | I think there's nothing you could have done better in matplotlib, great job! I think to solve your problem, it is better to change the library and approach your problem using plotly. Please see my code: import plotly.graph_objects as go import numpy as np # Testing Data sigma = 1.0 mu = np.linspace(0, 2, 10) x = np.linspace(-5, 5, 100) y = np.linspace(-5, 5, 100) X, Y = np.meshgrid(x, y) Z = [] for m in mu: Z.append(np.exp(-((X - m)**2 + (Y - m)**2) / (2 * sigma**2))) fig = go.Figure() colorscale = [[0, 'rgba(255, 255, 255, 0)'], [1, 'rgba(0, 0, 255, 1)']] # colorscale = transparent to blue #plot the surfaces for i in [0, 5, -1]: fig.add_trace(go.Surface( x=mu[i] * np.ones(X.shape), y=X, z=Y, surfacecolor=Z[i], colorscale=colorscale, cmin=0, cmax=Z[i].max(), showscale=False, opacity=1)) #plot the line crossing the surfaces loc_max_x = [] loc_max_y = [] for i in range(len(mu)): loc_x = np.where(Z[i] == Z[i].max())[0][0] loc_y = np.where(Z[i] == Z[i].max())[1][0] loc_max_x.append(loc_x) loc_max_y.append(loc_y) #add the line trace fig.add_trace(go.Scatter3d( x=mu, y=x[loc_max_x], z=y[loc_max_y], mode='lines', line=dict(color='red', width=5))) fig.update_layout(scene_aspectmode='manual', scene_aspectratio=dict(x=3.4, y=1, z=1), scene=dict(xaxis_title='mu', yaxis_title='X', zaxis_title='Y')) fig.show() which results this plot: | 1 | 2 |
79,072,427 | 2024-10-10 | https://stackoverflow.com/questions/79072427/using-re-to-match-a-digit-any-contiguous-duplicates-and-storing-the-duplicates | I'm trying to use re.findall(pattern, string) to match all numbers and however many duplicates follow in a string. Eg. "1222344" matches "1", "222", "3", "44". I can't seem to find a pattern to do so though. I tried using the pattern "(\d)\1+" to match a digit 1 or more times but it doesn't seem to be working. But when I print the result, it shows up as an empty array []. | You're on the right track but your pattern (\d)\1+ actually matches two or more contiguous digits (the first digit is matched by \d and then the + quantifier says match one or more of that digit. So what you want is (\d)\1* where the * says match zero or more of that previous digit The other thing that is perhaps confusing is that re.findall() only returns a list of the matched subexpressions (in this case the individual digit) to see the entire string matched you can use re.search() or re.finditer() to get a match object then access the entire matched string using mo.group(0) import re text = "122333444455555666666" patt = re.compile(r"(\d)\1*") print() print(patt.findall(text)) # print list of JUST first digit in each run print() for mo in patt.finditer(text): # iterate over all the Match Objects print(mo.group(0)) # group(0) is the entire matched string Output is: ['1', '2', '3', '4', '5', '6'] 1 22 333 4444 55555 666666 | 2 | 3 |
79,072,381 | 2024-10-10 | https://stackoverflow.com/questions/79072381/python-if-statements-containing-multiple-boolean-conditions-how-is-flow-hand | I'm curious about how Python handles "if" statements with multiple conditions. Does it evaluate the total boolean expression, or will it "break" from the expression with the first False evaluation? So, for example, if I have: if (A and B and C): Do_Something() Will it evaluate "A and B and C" to be True/False (obviously), and then apply the "if" to either enter or not enter Do_Something()? Or will it evaluate each sequentially and stop as soon as one turns out to be False? So, say A is True, B is False. Will it go: A is true --> keep going, B is false - now break out and not do Do_Something()? The reason I ask is that in the function I'm working on, I've organised A, B, and C to be functions of increasing computational load, and it (of course) would be a complete waste to run B and C if A is False (and equally to run C if A is True and B is False). Now, of course, I could simply restructure the code to the following, but I was hoping to use the former if possible: if (A): if (B): if (C): Do_Something() Of course, this equally applies to while statements as well. Any input would be greatly appreciated. | It's the latter. if A and B and C: Do_Something() is equivalent to if A: if B: if C: Do_Something() This behavior is called short-circuit evaluation. | 3 | 3 |
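A quick way to convince yourself of this is to give each condition a visible side effect; once A() returns False, B() and C() are never called:

```python
def A(): print("A evaluated"); return False
def B(): print("B evaluated"); return True
def C(): print("C evaluated"); return True

if A() and B() and C():
    print("Do_Something")
# Prints only "A evaluated" -- B() and C() are skipped entirely.
```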
79,072,035 | 2024-10-9 | https://stackoverflow.com/questions/79072035/python-datetime-format-utc-time-zone-offset-with-colon | I'm trying to format a datetime object in Python using either the strftime method or an f-string. I would like to include the time zone offset from UTC with a colon between the hour and minute. According to the documentation, a format code, %:z, may be available to do exactly what I want. The documentation does warn, however, that this may not be available on all platforms. I am running Python 3.10 on Windows, and it doesn't seem to work for me. I guess I'm wondering if I just have the syntax mixed up or if indeed this format code isn't available to me. Anyone else have experience with this? The following statement raises a ValueError: Invalid format string: print(f"{datetime.now().astimezone():%Y-%m-%d %H:%M:%S%:z}") Using the %z format code instead of the %:z code does work, however, giving me something close to what I want, namely 2024-10-09 13:17:21-0700 at the time I ran it: print(f"{datetime.now().astimezone():%Y-%m-%d %H:%M:%S%z}") | Per documentation: Added in version 3.12: %:z was added. Example on Windows 10: Python 3.12.6 (tags/v3.12.6:a4a2d2b, Sep 6 2024, 20:11:23) [MSC v.1940 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import datetime as dt >>> print(f"{dt.datetime.now().astimezone():%Y-%m-%d %H:%M:%S%:z}") 2024-10-09 15:46:13-07:00 Pre 3.12, you could make a helper function: import datetime as dt def now(): s = f'{dt.datetime.now().astimezone():%Y-%m-%d %H:%M:%S%z}' return s[:-2] + ':' + s[-2:] print(now()) Output: 2024-10-09 15:52:51-07:00 | 1 | 2 |
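A hedged side note for readers stuck before 3.12: datetime.isoformat always renders the UTC offset with a colon, so it can replace the string-surgery helper above when its output format is acceptable:

```python
import datetime as dt

# isoformat always formats the offset as +HH:MM / -HH:MM, on any supported Python 3 version
s = dt.datetime.now().astimezone().isoformat(sep=" ", timespec="seconds")
print(s)  # e.g. 2024-10-09 15:52:51-07:00
```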
79,069,211 | 2024-10-9 | https://stackoverflow.com/questions/79069211/pandas-groupby-show-non-matching-values | I have the following dataframe: data = [['123456ABCD234567', 'A'], ['8502', 'A'], ['74523654894WRZI3', 'B'], ['85CGNK6987541236', 'B'], ['WF85Z4HJ95R4CF2V', 'C'], ['VB52FG85RT74DF96', 'C'], ['WERTZ852146', 'D'], ['APUNGF', 'D'] ] df = pd.DataFrame(data, columns=['CODE', 'STOCK']) df CODE STOCK 0 123456ABCD234567 A 1 8502 A 2 74523654894WRZI3 B 3 85CGNK6987541236 B 4 WF85Z4HJ95R4CF2V C 5 VB52FG85RT74DF96 C 6 WERTZ852146 D 7 APUNGF D Each stock is part of various codes. The code should have a length of 16 characters. My objective is to keep only the stocks that have no codes of 16 characters attached to them. In this example, stock A has at least one code with a length of 16 characters, so it should be dropped. Stock D, however, has no codes with a length of 16 characters, so its rows should be kept. I believe this can be accomplished using the groupby-function in Pandas. Ultimately, I aim at obtaining the output below: CODE STOCK 6 WERTZ852146 D 7 APUNGF D Many thanks for any suggestions in advance! | You could create a boolean column for values not matching the condition and use groupby.transform with all to identify the STOCK groups in which all rows fail the length condition: out = df[df['CODE'].str.len().ne(16).groupby(df['STOCK']).transform('all')] Output: CODE STOCK 6 WERTZ852146 D 7 APUNGF D Intermediates: CODE STOCK str.len ne(16) transform('all') 0 123456ABCD234567 A 16 False False 1 8502 A 4 True False 2 74523654894WRZI3 B 16 False False 3 85CGNK6987541236 B 16 False False 4 WF85Z4HJ95R4CF2V C 16 False False 5 VB52FG85RT74DF96 C 16 False False 6 WERTZ852146 D 11 True True 7 APUNGF D 6 True True Using DeMorgan's law you could also run: out = df[~df['CODE'].str.len().eq(16).groupby(df['STOCK']).transform('any')] Intermediates: CODE STOCK str.len eq(16) transform('any') ~ 0 123456ABCD234567 A 16 True True False 1 8502 A 4 False True False 2 74523654894WRZI3 B 16 True True False 3 85CGNK6987541236 B 16 True True False 4 WF85Z4HJ95R4CF2V C 16 True True False 5 VB52FG85RT74DF96 C 16 True True False 6 WERTZ852146 D 11 False False True 7 APUNGF D 6 False False True And without groupby you could identify all the STOCK that have at least one match, and reverse select the others with isin: out = df[~df['STOCK'].isin(df.loc[df['CODE'].str.len().eq(16), 'STOCK'].unique())] Intermediates: df.loc[df['CODE'].str.len().eq(16), 'STOCK'].unique() # array(['A', 'B', 'C'], dtype=object) ~df['STOCK'].isin(df.loc[df['CODE'].str.len().eq(16), 'STOCK'].unique()) # 0 False # 1 False # 2 False # 3 False # 4 False # 5 False # 6 True # 7 True # Name: STOCK, dtype: bool | 2 | 2 |
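Since the question explicitly mentions groupby, a hedged one-liner variant of the accepted approach using groupby.filter (same result; with a Python lambda it is usually slower than transform on large frames):

```python
import pandas as pd

data = [['123456ABCD234567', 'A'], ['8502', 'A'], ['WERTZ852146', 'D'], ['APUNGF', 'D']]
df = pd.DataFrame(data, columns=['CODE', 'STOCK'])

# Keep only the groups in which no CODE has a length of 16
out = df.groupby('STOCK').filter(lambda g: g['CODE'].str.len().ne(16).all())
print(out)  # only the STOCK 'D' rows remain
```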
79,049,807 | 2024-10-3 | https://stackoverflow.com/questions/79049807/genetic-algorithm-for-kubernetes-allocation | I am trying to allocate Kubernetes pods to nodes using a genetic algorithm, where each pod is assigned to one node. Below is my implementation: from string import ascii_lowercase import numpy as np import random from itertools import compress import math import pandas as pd import random def create_pods_and_nodes(n_pods=40, n_nodes=15): # Create pod and node names pod = ['pod_' + str(i+1) for i in range(n_pods)] node = ['node_' + str(i+1) for i in range(n_nodes)] # Define CPU and RAM options cpu = [2**i for i in range(1, 8)] # 2, 4, 8, 16, 32, 64, 128 ram = [2**i for i in range(2, 10)] # 4, 8, 16, ..., 8192 # Create the pods DataFrame pods = pd.DataFrame({ 'pod': pod, 'cpu': random.choices(cpu[0:3], k=n_pods), # Small CPU for pods 'ram': random.choices(ram[0:4], k=n_pods), # Small RAM for pods }) # Create the nodes DataFrame nodes = pd.DataFrame({ 'node': node, 'cpu': random.choices(cpu[4:len(cpu)-1], k=n_nodes), # Larger CPU for nodes 'ram': random.choices(ram[4:len(ram)-1], k=n_nodes), # Larger RAM for nodes }) return pods, nodes # Example usage pods, nodes = create_pods_and_nodes(n_pods=46, n_nodes=6) # Display the results print("Pods DataFrame:\n", pods.head()) print("\nNodes DataFrame:\n", nodes.head()) print(f"total CPU pods: {np.sum(pods['cpu'])}") print(f"total RAM pods: {np.sum(pods['ram'])}") print('\n') print(f"total CPU nodes: {np.sum(nodes['cpu'])}") print(f"total RAM nodes: {np.sum(nodes['ram'])}") # Genetic Algorithm Parameters POPULATION_SIZE = 100 GENERATIONS = 50 MUTATION_RATE = 0.1 TOURNAMENT_SIZE = 5 def create_individual(): return [random.randint(0, len(nodes) - 1) for _ in range(len(pods))] def create_population(size): return [create_individual() for _ in range(size)] def fitness(individual): total_cpu_used = np.zeros(len(nodes)) total_ram_used = np.zeros(len(nodes)) unallocated_pods = 0 for pod_idx, node_idx in enumerate(individual): pod_cpu = pods.iloc[pod_idx]['cpu'] pod_ram = pods.iloc[pod_idx]['ram'] if total_cpu_used[node_idx] + pod_cpu <= nodes.iloc[node_idx]['cpu'] and total_ram_used[node_idx] + pod_ram <= nodes.iloc[node_idx]['ram']: total_cpu_used[node_idx] += pod_cpu total_ram_used[node_idx] += pod_ram else: unallocated_pods += 1 # Count unallocated pods # Reward for utilizing resources and penalize for unallocated pods return (total_cpu_used.sum() + total_ram_used.sum()) - (unallocated_pods * 10) def select(population): tournament = random.sample(population, TOURNAMENT_SIZE) return max(tournament, key=fitness) def crossover(parent1, parent2): crossover_point = random.randint(1, len(pods) - 1) child1 = parent1[:crossover_point] + parent2[crossover_point:] child2 = parent2[:crossover_point] + parent1[crossover_point:] return child1, child2 def mutate(individual): for idx in range(len(individual)): if random.random() < MUTATION_RATE: individual[idx] = random.randint(0, len(nodes) - 1) def genetic_algorithm(): population = create_population(POPULATION_SIZE) for generation in range(GENERATIONS): new_population = [] for _ in range(POPULATION_SIZE // 2): parent1 = select(population) parent2 = select(population) child1, child2 = crossover(parent1, parent2) mutate(child1) mutate(child2) new_population.extend([child1, child2]) population = new_population # Print the best fitness of this generation best_fitness = max(fitness(individual) for individual in population) print(f"Generation {generation + 1}: Best Fitness = {best_fitness}") # Return the best 
individual found best_individual = max(population, key=fitness) return best_individual # Run the genetic algorithm print("Starting Genetic Algorithm...") best_allocation = genetic_algorithm() print("Genetic Algorithm completed.\n") # Create the allocation DataFrame allocation_df = pd.DataFrame({ 'Pod': pods['pod'], 'Node': [nodes.iloc[best_allocation[i]]['node'] for i in range(len(best_allocation))], 'Pod_Resources': [list(pods.iloc[i][['cpu', 'ram']]) for i in range(len(best_allocation))], 'Node_Resources': [list(nodes.iloc[best_allocation[i]][['cpu', 'ram']]) for i in range(len(best_allocation))] }) # Print the allocation DataFrame print("\nAllocation DataFrame:") print(allocation_df) # Summarize total CPU and RAM utilization for each node node_utilization_df = allocation_df.groupby('Node').agg( Total_CPU_Used=pd.NamedAgg(column='Pod_Resources', aggfunc=lambda x: sum([res[0] for res in x if res])), Total_RAM_Used=pd.NamedAgg(column='Pod_Resources', aggfunc=lambda x: sum([res[1] for res in x if res])), Node_CPU=pd.NamedAgg(column='Node_Resources', aggfunc=lambda x: x.iloc[0][0] if x.iloc[0] is not None else 0), Node_RAM=pd.NamedAgg(column='Node_Resources', aggfunc=lambda x: x.iloc[0][1] if x.iloc[0] is not None else 0) ) # Calculate CPU and RAM utilization percentages for each node node_utilization_df['CPU_Utilization'] = (node_utilization_df['Total_CPU_Used'] / node_utilization_df['Node_CPU']) * 100 node_utilization_df['RAM_Utilization'] = (node_utilization_df['Total_RAM_Used'] / node_utilization_df['Node_RAM']) * 100 # Print the total CPU and RAM utilization for each node print("\nTotal CPU and RAM utilization for each node:") print(node_utilization_df) My implementation works if the total number of CPU and/or RAM of the pods is smaller than the total CPU and/or RAM of the nodes. However, I want to make it work even if the total CPU and/or RAM of the pods exceeds the total CPU and/or RAM of the nodes, allowing for unallocated pods if they cannot be assigned. How can I achieve this? Any suggestions or improvements would be greatly appreciated! | This is a straightforward bin packing problem. https://en.wikipedia.org/wiki/Bin_packing_problem Why tackle it with a genetic algorithm!?! That is going to be horribly slow, especially if you use python. 
Implementing a standard bin packing algorithm in a native language with a decent optimizing compiler will give performance many orders of magnitude faster - a millisecond or two for your sample problem Here is the C++ code for the implementation of the first-fit decreasing bin packing algorithm with dual resources void pack() { // sort items in order of decreasing largest resource requirement sum std::sort( theItems.begin(), theItems.end(), [](const cThing &a, const cThing &b) -> bool { int sa = a.myRes1 + a.myRes2; int sb = b.myRes1 + b.myRes2; return sa > sb; }); // sort bins in order of increasing capacity sum std::sort( theBins.begin(), theBins.end(), [](const cThing &a, const cThing &b) -> bool { int sa = a.myRes1 + a.myRes2; int sb = b.myRes1 + b.myRes2; return sa < sb; }); // fit each item into the smallest bin that fits for (cThing &item : theItems) { for (cThing &bin : theBins) { if (item.myRes1 > bin.myRes1 || item.myRes2 > bin.myRes2) continue; bin.pack( item ); break; } } } The output for a run: All iems packed node_3 contains: pod_1 pod_21 node_13 contains: pod_19 pod_15 node_11 contains: pod_31 pod_7 node_10 contains: pod_8 pod_28 pod_24 pod_17 pod_32 node_14 contains: pod_16 pod_36 pod_30 pod_29 pod_6 pod_34 pod_22 pod_9 pod_20 pod_38 node_15 contains: pod_37 pod_12 pod_13 pod_25 pod_26 pod_4 pod_3 pod_33 pod_27 pod_35 pod_2 pod_39 pod_23 node_4 contains: pod_5 pod_18 pod_14 pod_11 pod_10 pod_40 node_9 is empty node_1 is empty node_8 is empty node_7 is empty node_5 is empty node_12 is empty node_2 is empty node_6 is empty The above run uses the setup posted in the question. You added: I want to make it work even if the total CPU and/or RAM of the pods exceeds the total CPU and/or RAM of the nodes, allowing for unallocated pods if they cannot be assigned. How can I achieve this? So I reduce the number of nodes created ( from 15 to 4 ) so that not all pods can be fitted. Below is the result of this run showing that the code handles this naturally pod_12 pod_25 pod_26 pod_4 pod_3 pod_33 pod_27 pod_35 pod_2 pod_39 pod_5 pod_23 pod_18 pod_14 pod_11 pod_10 pod_40 17 items did not fit node_3 contains: pod_1 pod_21 node_4 contains: pod_19 pod_15 pod_31 pod_7 node_1 contains: pod_8 pod_28 pod_24 pod_17 pod_32 pod_16 pod_36 pod_30 pod_29 pod_6 pod_34 node_2 contains: pod_22 pod_9 pod_20 pod_38 pod_37 pod_13 Complete application code at https://github.com/JamesBremner/so79049807 | 2 | 3 |
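For readers who want to stay in Python, a rough sketch of the same first-fit decreasing idea, written against the pods/nodes DataFrames from the question (the column names and the cpu+ram weighting are assumptions carried over from that code; this is an illustration, not a drop-in replacement for the C++ above):

```python
import pandas as pd

def first_fit_decreasing(pods: pd.DataFrame, nodes: pd.DataFrame):
    """Greedy packing: biggest pods first, each into the smallest node that still fits."""
    # Sort pods by descending resource demand, nodes by ascending capacity
    pod_order = pods.assign(weight=pods['cpu'] + pods['ram']).sort_values('weight', ascending=False)
    node_order = nodes.assign(weight=nodes['cpu'] + nodes['ram']).sort_values('weight')

    remaining_cpu = node_order['cpu'].to_dict()   # node index -> free CPU
    remaining_ram = node_order['ram'].to_dict()   # node index -> free RAM
    assignment, unallocated = {}, []

    for pod_idx, pod in pod_order.iterrows():
        for node_idx in node_order.index:  # nodes visited from smallest to largest
            if pod['cpu'] <= remaining_cpu[node_idx] and pod['ram'] <= remaining_ram[node_idx]:
                remaining_cpu[node_idx] -= pod['cpu']
                remaining_ram[node_idx] -= pod['ram']
                assignment[pods.loc[pod_idx, 'pod']] = nodes.loc[node_idx, 'node']
                break
        else:
            unallocated.append(pods.loc[pod_idx, 'pod'])  # pod did not fit anywhere
    return assignment, unallocated

# assignment, leftover = first_fit_decreasing(pods, nodes)
# print(f"{len(leftover)} pods did not fit")
```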
79,057,745 | 2024-10-5 | https://stackoverflow.com/questions/79057745/cant-create-objects-on-start | I created a function and want to run it automatically on start. The function creates several objects I have an error AppRegistryNotReady("Apps aren't loaded yet.") Reason is clear - the function imports objects from another application (parser_app) I am starting app like this gunicorn --bind 0.0.0.0:8000 core_app.wsgi # project/core_app/wsgi.py import os from django.core.wsgi import get_wsgi_application from django.core.management import call_command from scripts.create_schedules import create_cron_templates os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'core_app.settings') application = get_wsgi_application() call_command("migrate") call_command("collectstatic", interactive=False) create_cron_templates() Full error: django_1 | File "/project/core_app/wsgi.py", line 14, in <module> django_1 | from scripts.create_schedules import create_cron_templates django_1 | File "/project/scripts/create_schedules.py", line 1, in <module> django_1 | from parser_app.models import Schedule django_1 | File "/project/parser_app/models.py", line 7, in <module> django_1 | class TimeBase(models.Model): django_1 | File "/usr/local/lib/python3.9/site-packages/django/db/models/base.py", line 127, in __new__ django_1 | app_config = apps.get_containing_app_config(module) django_1 | File "/usr/local/lib/python3.9/site-packages/django/apps/registry.py", line 260, in get_containing_app_config django_1 | self.check_apps_ready() django_1 | File "/usr/local/lib/python3.9/site-packages/django/apps/registry.py", line 138, in check_apps_ready django_1 | raise AppRegistryNotReady("Apps aren't loaded yet.") django_1 | django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet. Code of the function # project/scripts/create_schedules.py from parser_app.models import Schedule def create_cron_templates(): Schedule.objects.get_or_create( name="1", cron="0 9-18/3 * * 1-5#0 19-23/2,0-8/2 * * 1-5#0 */5 * * 6-7" ) | Solved with help of ready() function in apps.py My solution class ParserAppConfig(AppConfig): default_auto_field = 'django.db.models.BigAutoField' name = 'parser_app' def ready(self): from scripts.create_schedules import create_cron_templates create_cron_templates() | 3 | 1 |
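If the database itself may not be migrated yet when the app starts (for example in a fresh container), a common variation, sketched here as an assumption rather than part of the accepted answer, is to connect the same function to the post_migrate signal from inside ready() instead of calling it directly:

```python
# parser_app/apps.py
from django.apps import AppConfig
from django.db.models.signals import post_migrate


def create_defaults(sender, **kwargs):
    # Imported lazily so models are only touched once the app registry is ready
    from scripts.create_schedules import create_cron_templates
    create_cron_templates()


class ParserAppConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'parser_app'

    def ready(self):
        # Runs after `migrate` finishes for this app, so the tables exist
        post_migrate.connect(create_defaults, sender=self)
```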
79,064,048 | 2024-10-8 | https://stackoverflow.com/questions/79064048/issues-with-using-extra-index-url-in-uv-with-google-cloud-artifact-registr | I'm trying to create a uv project that uses an --extra-index-url with Google Cloud Artifact Registry. According to the uv documentation, this should be possible. I am using uv 0.4.18. Here's what I've tried so far: gcloud auth application-default login --project ${PROJECT_ID} uv venv source .venv/bin/activate uv pip install keyring keyrings.google-artifactregistry-auth uv pip install --keyring-provider subprocess ${MY_PACKAGE} --extra-index-url https://${REGION}-python.pkg.dev/${PROJECT_ID}/${REPOSITORY_ID}/simple However, it returns an error indicating that my package can't be found. Interestingly, when I use standard Python, I can install my private package without any issues. Here's the code that works: gcloud auth application-default login --project ${PROJECT_ID} python -m venv .venv source .venv/bin/activate pip install keyring keyrings.google-artifactregistry-auth pip install ${MY_PACKAGE} --extra-index-url https://${REGION}-python.pkg.dev/${PROJECT_ID}/${REPOSITORY_ID}/simple It seems like others have faced this issue before, as mentioned in this closed GitHub issue. Has anyone else encountered this problem or found a workaround? Any help would be appreciated! | Pointing to the solution of the issue here: The code that works is: gcloud auth application-default login --project ${PROJECT_ID} uv venv source .venv/bin/activate uv pip install keyring keyrings.google-artifactregistry-auth uv pip install ${MY_PACKAGE} --keyring-provider subprocess --extra-index-url https://oauth2accesstoken@${REGION}-python.pkg.dev/${PROJECT_ID}/${REPOSITORY_ID}/simple The addition of the oauth2accesstoken string in the URL is necessary for the keyring CLI to retrieve a password, as pointed out in uv's GitHub issue 1520. | 2 | 1 |
79,065,461 | 2024-10-8 | https://stackoverflow.com/questions/79065461/typing-polars-dataframe-with-pandera-and-mypy-validation | I am considering pandera to implement strong typing of my project uses polars dataframes. I am puzzled on how I can type my functions correctly. As an example let's have: import polars as pl import pandera.polars as pa from pandera.typing.polars import LazyFrame as PALazyFrame class MyModel(pa.DataFrameModel): a: int class Config: strict = True def foo( f: pl.LazyFrame ) -> PALazyFrame[MyModel]: # Our input is unclean, probably coming from pl.scan_parquet on some files # The validation is dummy here return MyModel.validate(f.select('a')) If I'm calling mypy it will return the following error error: Incompatible return value type (got "DataFrameBase[MyModel]", expected "LazyFrame[MyModel]") Sure, I can modify my signature to specify the return Type DataFrameBase[MyModel], but I'll lose the precision that I'm returning a LazyFrame. Further more LazyFrame is defined as implementing DataFrameBase in pandera code. How can I fix my code so that the return type LazyFrame[MyModel] works? | It's quite often an issue when underlying libraries maybe don't express types as well described as they could - fortunately there are a few ways around it: 1. The Cast Way As discussed in the comments, using typing.cast is always an option. If an external library does not produce a specific enough type this is often what I opt for - it's a lot better than using type:ignore, and allows you to "bridge the gap" in an otherwise well-typed codebase. E.g. import polars as pl import pandera.polars as pa from pandera.typing.polars import LazyFrame as PALazyFrame import typing class MyModel(pa.DataFrameModel): a: int class Config: strict = True def foo( f: pl.LazyFrame ) -> PALazyFrame[MyModel]: # Our input is unclean, probably coming from pl.scan_parquet on some files # The validation is dummy here return typing.cast(PALazyFrame[MyModel],MyModel.validate(f.select('a'))) As mentioned though, there are times when the cast type has to be manually adjusted - and also this would have to be done in potentially multiple places for validate. 2. The Method Way Just supposing we need to use this model in lots of places in the code, you may wish to push the "cast" a little further from an end user. The cast doesn't really go away, but it allows us to put it in a place that could be highly reused, and reduce the number of casts in the codebase (always a good aim!). Note that the underlying code from the original library does use a cast for this method, so we're effectively just recasting to something slightly different. 
In the below example, there is a new method specifically for validating lazy frames - it operates in all the same ways as regular validate, except that it takes a LazyFrame and outputs a PALazyFrame: import polars as pl import pandera.polars as pa from pandera.typing.polars import LazyFrame as PALazyFrame from typing import Optional, Self, cast class MyDataFrameModel(pa.DataFrameModel): @classmethod def validate_lazy( cls, check_obj: pl.LazyFrame, head: Optional[int]=None, tail: Optional[int]=None, sample: Optional[int]=None, random_state: Optional[int]=None, lazy: bool=False, inplace: bool=False ) -> PALazyFrame[Self]: return cast(PALazyFrame[Self], cls.validate( check_obj, head, tail, sample, random_state, lazy, inplace )) class MyModel(MyDataFrameModel): a: int class Config: strict = True def foo( f: pl.LazyFrame ) -> PALazyFrame[MyModel]: # Our input is unclean, probably coming from pl.scan_parquet on some files # The validation is dummy here return MyModel.validate_lazy(f.select('a')) I originally considered simply overwriting the original validate method with one that another that was more generic and allowed for this use case but I found: a) It's difficult to express the right output type b) Overwriting methods in an incompatible manner from the inherited model is banned in modern Python anyway. 3. The PR Way Failing that, pretty much your only option is to request a change to the underlying library. Its possible there's a way to express this method in a more generic fashion that would allow for your use case, however keep in mind that there are some typing structures that simply cannot be expressed without the existence of "Higher Kinded Types", which currently don't exist in Python. I would suspect this may be one such use case. Hope this helps! | 3 | 3 |
79,058,740 | 2024-10-6 | https://stackoverflow.com/questions/79058740/how-to-process-data-internally-so-that-it-becomes-equivalent-to-what-it-would-b | I have this string: "birthday_balloons.\u202egpj" If I execute print("birthday_balloons.\u202egpj") it outputs birthday_balloons.jpg Note how the last three characters are reversed. I want to process the string "birthday_balloons.\u202egpj" in such a way that I get the string "birthday_balloons.jpg", with the order of the characters just like they were displayed. I'm looking for a way to internally process a piece of data so that it becomes equivalent to what it would appear as when outputting it to the terminal, without doing anything like literally capturing the output from the terminal. | U+202E is RIGHT-TO-LEFT OVERRIDE (RLO); it marks the start of a bidirectional override forcing the following text to be rendered right-to-left regardless of the direction of the characters. It is closed by U+202C POP DIRECTIONAL FORMATTING (PDF). Its presence in a filename would be indicative of malicious intent. In a terminal that supports bidirectional formatting, the string 'birthday_balloons.\u202egpj' would visually appear to be 'birthday_balloons.jpg', although most terminals do not have full bidi support. The override is more problematic within a web service or web page. The final five characters of the string are 002E 202E 0067 0070 006A, i.e. . RLO g p j. The simplest approach is to split the filename into components, test for an override, then clean components of the filename containing an override using a list comprehension: import re # Test for presence of an RLO character def override_exists(text): return re.search(r'\u202e', text) # Remove RLO and PDF characters and reverse string def repair_string(text): return re.sub(r'[\u202c\u202e]', '', text)[::-1] # Split file name and use list comprehension to test and repair string. def clean_file_name(file_name): components = file_name.split('.') cleaned = [repair_string(comp) if override_exists(comp) else comp for comp in components] return ".".join(cleaned) s = 'birthday_balloons.\u202egpj' print(clean_file_name(s)) # birthday_balloons.jpg However, the repair mechanism is masking the problem and possibly creating a security vulnerability. A better approach would be for the repair functionality to just be def repair_string(text): return re.sub(r'[\u202c\u202e]', '', text) so: print(clean_file_name(s)) birthday_balloons.gpj This will remove the RLO, and display the filename in a way that will show the file extension is not .jpg and is suspect. Alternatively, the override detection could raise or log an exception. Update: Given the comments below, I'll add to my answer. Python stores bidi text in logical order. For the string 'birthday_balloons.\u202egpj' the order of codepoints is '0062 0069 0072 0074 0068 0064 0061 0079 005F 0062 0061 006C 006C 006F 006F 006E 0073 002E 202E 0067 0070 006A', so the final three characters are gpj, in that order. The corresponding bytes are passed to the console, which renders the text correctly or incorrectly. What you get from the print statement has little to do with Python's internals and everything to do with the console/terminal and how it implements bidi and font rendering. If you want to get a visual representation of the string, i.e. reorder the string so it is in the order it appears rather than the order it is stored, you need to convert from logical to visual ordering.
Using pyfribidi: Convert from logical to visual order, forcing base string direction to LTR. Strip out bidi formatting characters. s = 'birthday_balloons.\u202egpj' import pyfribidi import regex regex.sub(r'[\p{Cf}]', '', pyfribidi.log2vis(s, base_direction=pyfribidi.LTR)) # 'birthday_balloons.jpg' There is no internal mechanism for doing this in Python; Python's Unicode support is minimal and relies on third party packages for a more complete solution. If the base direction is RTL instead of LTR, the visually ordered string is 'jpg.birthday_balloons'. Using PyICU: Initiate a BidiTransform instance, transform the string (setting direction and order for source and target), then cast the UnicodeString object to a Python string and remove bidi formatting override controls. from icu import BidiTransform, UBiDiDirection, UBiDiMirroring, UBiDiOrder import regex transformer = BidiTransform() input_text = 'birthday_balloons.\u202egpj' result = transformer.transform( input_text, UBiDiDirection.LTR, UBiDiOrder.LOGICAL, UBiDiDirection.LTR, UBiDiOrder.VISUAL, UBiDiMirroring.OFF) regex.sub(r'[\p{Cf}]', '', str(result)) # 'birthday_balloons.jpg' Using python-bidi 0.6.0: python-bidi V. 0.6.0 is a complete rewrite of the module; up until V. 0.6.0, the module was a pure Python implementation of the UBA. V. 0.6.0 implemented a Python wrapper around the unicode-bidi Rust crate. The module provides both the existing V5 API and the V6 Rust based API. For the scenario in the question they produce subtly different results. The key difference for the question is the presence or absence of the override formatting characters in the visually ordered string. input_text = 'birthday_balloons.\u202egpj' # V5 API - Pure Python implementation from bidi.algorithm import get_display as get_display5 get_display5(input_text) # 'birthday_balloons.jpg' # V6 API - Wrapper for unicode-bidi Rust crate. from bidi import get_display as get_display6 import regex get_display6(input_text) # 'birthday_balloons.\u202ejpg' regex.sub(r'[\p{Cf}]', '', get_display6(input_text)) # 'birthday_balloons.jpg' | 3 | 1 |
79,063,494 | 2024-10-7 | https://stackoverflow.com/questions/79063494/how-to-accelerate-getting-points-within-distance-using-two-dataframes | I have two DataFrames (df and locations_df), and both have longitude and latitude values. I'm trying to find the df's points within 2 km of each row of locations_df. I tried to vectorize the function, but the speed is still slow when locations_df is a big DataFrame (nrows>1000). Any idea how to accelerate? import pandas as pd import numpy as np def select_points_for_multiple_locations_vectorized(df, locations_df, radius_km): R = 6371 # Earth's radius in kilometers # Convert degrees to radians df_lat_rad = np.radians(df['latitude'].values)[:, np.newaxis] df_lon_rad = np.radians(df['longitude'].values)[:, np.newaxis] loc_lat_rad = np.radians(locations_df['lat'].values) loc_lon_rad = np.radians(locations_df['lon'].values) # Haversine formula (vectorized) dlat = df_lat_rad - loc_lat_rad dlon = df_lon_rad - loc_lon_rad a = np.sin(dlat/2)**2 + np.cos(df_lat_rad) * np.cos(loc_lat_rad) * np.sin(dlon/2)**2 c = 2 * np.arctan2(np.sqrt(a), np.sqrt(1-a)) distances = R * c # Create a mask for points within the radius mask = distances <= radius_km # Get indices of True values in the mask indices = np.where(mask) result = pd.concat([df.iloc[indices[0]].reset_index(drop=True), locations_df.iloc[indices[1]].reset_index(drop=True)], axis=1) return result def random_lat_lon(n=1, lat_min=-10., lat_max=10., lon_min=-5., lon_max=5.): """ this code produces an array with pairs lat, lon """ lat = np.random.uniform(lat_min, lat_max, n) lon = np.random.uniform(lon_min, lon_max, n) return np.array(tuple(zip(lat, lon))) df = pd.DataFrame(random_lat_lon(n=10000000), columns=['latitude', 'longitude']) locations_df = pd.DataFrame(random_lat_lon(n=20), columns=['lat', 'lon']) result = select_points_for_multiple_locations_vectorized(df, locations_df, radius_km=2) | You need to use a spatial index to make this fast... You can accomplish that like this: convert your locations_df to a GeoDataFrame with polygons the size of your search distance by buffering them with this distance. As you don't seem to be working in a projected crs, check out this post how to do this: buffer circle WGS84. determine which points intersect the buffered locations with a spatial join: geopandas.sjoin: result = df.sjoin(locations_buffered_df). This will use a spatial index under the hood so this will be fast. Performance comparison I added some sample code with a quick performance test, and using a spatial index indeed scales a lot better than brute-forcing all combinations. The performance is a lot better (not linear anymore with the number of combinations) and memory usage is a fraction as well. for 1 mio points with 300 locations: 2.46 s with spatial index versus 36 s for the current implementation, so ~ a factor 15 faster. This is the maximum number of combinations I can run without running out of memory for the current implementation. for 10 mio points with 1000 locations: 17.7 s for the implementation using a spatial index. I cannot run this on my desktop, but as this needs a factor 10.000/300 = 33 more distance calculations than the first test, I guess this will take the current implementation ~1200 s (20 minutes). Hence, using a spatial index is here probably ~ a factor 70 faster. If you want really accurate results Note that the buffers created in step 1 are polygons approximating circles, so the distance check is often good enough for most uses, but it is not perfect. 
If you want/need the distance check to be "perfect", or if you need the distance from the closest location in your result anyway, you can use a two-step filter approach. Use a slightly larger buffer distance for the primary, spatial index filtering so you are sure to retain all points that might be within distance. Then calculate the "exact" distance for that result (a small subset of the original input) to do a second filtering, e.g. with the vectorized haversine function you already have. The results of the spatial join will also contain a reference (the dataframe index) to the "locations" they are ~within distance of, so if needed/wanted you can do an extra optimization to only calculate the distance to the relevant location(s). Sample code with performance comparison from time import perf_counter import geopandas as gpd import numpy as np import pandas as pd from pyproj import CRS, Transformer from shapely.geometry import Point from shapely.ops import transform def select_points_for_multiple_locations_vectorized(df, locations_df, radius_km): R = 6371 # Earth's radius in kilometers # Convert degrees to radians df_lat_rad = np.radians(df['latitude'].values)[:, np.newaxis] df_lon_rad = np.radians(df['longitude'].values)[:, np.newaxis] loc_lat_rad = np.radians(locations_df['lat'].values) loc_lon_rad = np.radians(locations_df['lon'].values) # Haversine formula (vectorized) dlat = df_lat_rad - loc_lat_rad dlon = df_lon_rad - loc_lon_rad a = np.sin(dlat/2)**2 + np.cos(df_lat_rad) * np.cos(loc_lat_rad) * np.sin(dlon/2)**2 c = 2 * np.arctan2(np.sqrt(a), np.sqrt(1-a)) distances = R * c # Create a mask for points within the radius mask = distances <= radius_km # Get indices of True values in the mask indices = np.where(mask) result = pd.concat([df.iloc[indices[0]].reset_index(drop=True), locations_df.iloc[indices[1]].reset_index(drop=True)], axis=1) return result def random_lat_lon(n=1, lat_min=-10., lat_max=10., lon_min=-5., lon_max=5.): """ this code produces an array with pairs lat, lon """ lat = np.random.uniform(lat_min, lat_max, n) lon = np.random.uniform(lon_min, lon_max, n) return np.array(tuple(zip(lat, lon))) def geodesic_point_buffer(lat, lon, km): # Azimuthal equidistant projection aeqd_proj = CRS.from_proj4( f"+proj=aeqd +lat_0={lat} +lon_0={lon} +x_0=0 +y_0=0") tfmr = Transformer.from_proj(aeqd_proj, aeqd_proj.geodetic_crs) buf = Point(0, 0).buffer(km * 1000) # distance in metres return transform(tfmr.transform, buf) df = pd.DataFrame(random_lat_lon(n=1_000_000), columns=['latitude', 'longitude']) locations_df = pd.DataFrame(random_lat_lon(n=300), columns=['lat', 'lon']) # Current implementation start = perf_counter() result = select_points_for_multiple_locations_vectorized(df, locations_df, radius_km=2) print(f"{len(result)=}") print(f"Took {perf_counter() - start}") # Implementation using a spatial index start = perf_counter() gdf = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df.longitude, df.latitude)) locations_buffer_gdf = gpd.GeoDataFrame( locations_df, geometry=locations_df.apply(lambda row : geodesic_point_buffer(row.lat, row.lon, 2), axis=1), ) result = gdf.sjoin(locations_buffer_gdf) print(f"{len(result)=}") print(f"Took {perf_counter() - start}") Output: len(result)=1598 Took 36.21813579997979 len(result)=1607 Took 2.4568299000384286 | 2 | 2 |
79,067,886 | 2024-10-8 | https://stackoverflow.com/questions/79067886/inner-kws-having-no-effect-on-seaborn-violin-plot | I generated a bunch of violin plots, here is an example of one and the code that generates it: plt.figure(figsize=(8, 4)) ax = sns.violinplot( x=data, # `data` is a few thousand float values between 0 and 1 orient='h', color=get_color(ff), # `get_color` returns a color based on the dataset, #FFBE0B in this case cut=0 ) I want to make the black box in the middle quite a bit bigger. According to the documentation from Seaborn at https://seaborn.pydata.org/generated/seaborn.violinplot.html, I should be able to do this with the inner_kws parameter. I added this argument to the above code: plt.figure(figsize=(8, 4)) ax = sns.violinplot( x=data, # `data` is a few thousand float values between 0 and 1 orient='h', color=get_color(ff), # `get_color` returns a color based on the dataset, #FFBE0B in this case inner_kws=dict(box_width=150, whis_width=20), cut=0 ) Above, the box and whisker width are 150 and 20 respectively. I've also tried 15 and 2, and 1500 and 200. No matter what values I enter here, the figure does not change at all. What am I doing wrong? | The inner_kws argument was introduced in version 0.13.0; if you have an older version of seaborn installed it has no effect. I had seaborn v0.12.2 (installed via conda) and your example printed with normal boxplot dimensions until I upgraded seaborn to v0.13.2, E.g. #!/usr/bin/env python import random import seaborn as sns import matplotlib.pyplot as plt data = [] for i in range(0, 1000): x = round(random.uniform(0, 1), 4) data.append(x) ax = sns.violinplot( x=data, # `data` is a few thousand float values between 0 and 1 orient='h', color='#FFBE0B', inner_kws=dict(box_width=100, whis_width=20), cut=0 ) plt.savefig("example.png") Does that solve your problem? Or is there something else causing an issue? | 1 | 2 |