column           type           min     max
question_id      int64          59.5M   79.4M
creation_date    stringlengths  8       10
link             stringlengths  60      163
question         stringlengths  53      28.9k
accepted_answer  stringlengths  26      29.3k
question_vote    int64          1       410
answer_vote      int64          -9      482
79,024,431
2024-9-25
https://stackoverflow.com/questions/79024431/subclass-of-pathlib-path-doesnt-support-operator
I'm attempting to create a subclass of pathlib.Path that will do some manipulation to the passed string path value before passing it along to the base class. class MyPath(Path): def __init__(self, str_path): str_path = str_path.upper() # just representative, not what I'm actually doing super().__init__(str_path) However, when I try to use this: foo = MyPath("/path/to/my/file.txt") bar = foo / "bar" I get the following error: TypeError: unsupported operand type(s) for /: 'MyPath' and 'str' I am using Python 3.12 which I understand to have better support for subclassing Path
MyPath("foo") / "bar' goes through several steps: It invokes MyPath("foo").__truediv__("bar"), which Invokes MyPath("foo").with_segments("bar"), which Invokes MyPath(MyPath("foo"), "bar"), which Raises a TypeError because you overrode MyPath.__init__ to only accept a single additional positional argument, which Causes MyPath.__truediv__ to catch a TypeError and subsequently return NotImplemented Your __init__ method must be careful to accept multiple path components, not just a single string, and it must not assume that each argument is an actual string, but rather objects that implement the os.PathLike interface. (In particular, don't assume that a path component has an upper method. Call os.fspath on the component first to retrieve a str representation that does have an upper method.) Something like import os class MyPath(Path): def __init__(self, *paths): super().__init__(*(os.fspath(x).upper() for x in paths)) and thus % py312/bin/python -i tmp.py >>> bar MyPath('/PATH/TO/MY/FILE.TXT/BAR') Alternately, as others have alluded to, you can override with_segments to, for example, combine the segments yourself and pass the single result to MyPath, but I see no reason to restrict MyPath.__init__'s signature as you currently do.
1
6
79,024,212
2024-9-25
https://stackoverflow.com/questions/79024212/scale-pygame-button-size-based-on-font-size
I want to take a font size (14) and using pygame_widgets.button.Button() scale it to match the font size up to a maximum almost every question I've seen on here has been the other way around at the moment I'm unfamiliar with the maths that would be used but I would greatly appreciate the help import pygame import pygame_widgets from pygame_widgets.button import Button pygame.init() font_size = 14 screen = pygame.display.set_mode((640, 480)) font = pygame.font.SysFont('Arial', font_size) Wide = 60 # Math High = 20 # Math button = Button( screen, # surface to display 10, # Button's top left x coordinate 10, # Button's top left y coordinate Wide, # Button's Width High, # Button's Height text="Button", fontSize=font_size, inactiveColour=(255, 255, 255), hoverColour=(245, 245, 245), pressedColour=(230, 230, 230), radius=2, # curve on the button corners onClick=lambda: print("Click") ) while True: events = pygame.event.get() for event in events: if event.type == pygame.QUIT: pygame.quit() sys.exit() screen.fill((200, 200, 200)) pygame_widgets.update(events) pygame.display.update()
For scaling a button's size dynamically based on its font size you can use a simple proportional scaling method. I've implemented it in code done below: import pygame import pygame_widgets from pygame_widgets.button import Button import sys pygame.init() screen = pygame.display.set_mode((640, 480)) base_font_size = 14 current_font_size = 14 max_font_size = 40 # Max font size for scaling # Button size at base font size (14) base_button_width = 60 base_button_height = 20 def scale_button_size(font_size, max_size): """ Scales the button dimensions proportionally to font size """ scale_factor = font_size / base_font_size width = min(base_button_width * scale_factor, max_size) height = min(base_button_height * scale_factor, max_size) return int(width), int(height) # Get scaled button dimensions Wide, High = scale_button_size(current_font_size, max_font_size) # Create button with scaled dimensions button = Button( screen, 10, # Button's top-left x coordinate 10, # Button's top-left y coordinate Wide, # Button's width High, # Button's height text="Button", fontSize=current_font_size, inactiveColour=(255, 255, 255), hoverColour=(245, 245, 245), pressedColour=(230, 230, 230), radius=2, onClick=lambda: print("Click") ) while True: events = pygame.event.get() for event in events: if event.type == pygame.QUIT: pygame.quit() sys.exit() screen.fill((200, 200, 200)) pygame_widgets.update(events) pygame.display.update()
2
1
79,024,010
2024-9-25
https://stackoverflow.com/questions/79024010/pandas-return-corresponding-column-based-on-date-being-between-two-values
I have a Pandas dataframe that is setup like so: Code StartDate EndDate A 2024-07-01 2024-08-03 B 2024-08-06 2024-08-10 C 2024-08-11 2024-08-31 I have a part of my code that iterates through each day (starting from 2024-07-01) and I am trying to return the corresponding Code given a date (with a fallback if the date does not fall within any StartDate/EndDate range). My original idea was to do something like: DAYS = DAY_DF['Date'].tolist() # Just a list of each day for DAY in DAYS: code = False for i,r in df.iterrows(): if r['StartDate'] <= DAY <= r['EndDate']: code = r['Code'] break if not code: # `Code` is still False code = 'Fallback_Code' But this seems very inefficient to iterate over each row in the dataframe especially because I have a lot of records in my dataframe. Here are some example inputs and the resulting code output: 2024-07-03 -> 'A' 2024-08-04 -> 'Fallback_Code' 2024-08-10 -> 'B' 2024-08-11 -> 'C'
A possible solution first converts the StartDate and EndDate columns to datetime format (to allow comparison of dates). It then checks whether a specific date (e.g., 2024-07-03) falls within any of the date ranges defined by StartDate and EndDate. If it does, it retrieves the first corresponding Code from those rows; if not, it returns the fallback code. df['StartDate'] = pd.to_datetime(df['StartDate']) df['EndDate'] = pd.to_datetime(df['EndDate']) date = '2024-07-03' # input example m = df['StartDate'].le(date) & df['EndDate'].ge(date) df.loc[m, 'Code'].iloc[0] if m.any() else 'Fallback code' To get the codes for a list of dates, we might use the following: dates = ['2024-07-03', '2024-08-04', '2024-08-10', '2024-08-11'] m = lambda x: df['StartDate'].le(x) & df['EndDate'].ge(x) {date: df.loc[m(date), 'Code'].iloc[0] if m(date).any() else 'Fallback code' for date in dates} Output: {'2024-07-03': 'A', '2024-08-04': 'Fallback code', '2024-08-10': 'B', '2024-08-11': 'C'}
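If the lookup has to run over many days, a vectorized alternative (sketched here against the sample data from the question, and assuming the ranges do not overlap) is to build a pandas IntervalIndex once and look all dates up in a single call:

import pandas as pd

df = pd.DataFrame({'Code': ['A', 'B', 'C'],
                   'StartDate': ['2024-07-01', '2024-08-06', '2024-08-11'],
                   'EndDate': ['2024-08-03', '2024-08-10', '2024-08-31']})
df[['StartDate', 'EndDate']] = df[['StartDate', 'EndDate']].apply(pd.to_datetime)

# One lookup structure for all ranges; get_indexer returns -1 for dates in no range
intervals = pd.IntervalIndex.from_arrays(df['StartDate'], df['EndDate'], closed='both')
dates = pd.to_datetime(['2024-07-03', '2024-08-04', '2024-08-10', '2024-08-11'])
idx = intervals.get_indexer(dates)
codes = pd.Series(df['Code'].to_numpy()[idx], index=dates).where(idx != -1, 'Fallback_Code')
print(codes)
# 2024-07-03 -> 'A', 2024-08-04 -> 'Fallback_Code', 2024-08-10 -> 'B', 2024-08-11 -> 'C'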
1
3
79,023,481
2024-9-25
https://stackoverflow.com/questions/79023481/parsing-xml-document-with-namespaces-in-python
I am trying to parse xml with namespace and attributes. I'm using XML library in Python and since I'm new with this, cannot find solution even I checked over this forum, there are similar questions but not same structure of XML document as I have. This is my XML: <?xml version='1.0' encoding='UTF-8'?> <Invoice xmlns="urn:oasis:names:specification:ubl:schema:xsd:Invoice-2" xmlns:cec="urn:oasis:names:specification:ubl:schema:xsd:CommonExtensionComponents-2" xmlns:cac="urn:oasis:names:specification:ubl:schema:xsd:CommonAggregateComponents-2" xmlns:cbc="urn:oasis:names:specification:ubl:schema:xsd:CommonBasicComponents-2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:sbt="http://mfin.gov.rs/srbdt/srbdtext" xmlns:urn="oasis:names:specification:ubl:schema:xsd:Invoice-2"> <cbc:ID>IF149-0111/24</cbc:ID> <cac:InvoiceLine> <cbc:ID>1</cbc:ID> <cbc:InvoicedQuantity unitCode="H87">3.00</cbc:InvoicedQuantity> <cbc:LineExtensionAmount currencyID="RSD">26574.00</cbc:LineExtensionAmount> <cac:TaxTotal> <cbc:TaxAmount currencyID="RSD">5314.80</cbc:TaxAmount> <cac:TaxSubtotal> <cbc:TaxAmount currencyID="RSD">5314.800</cbc:TaxAmount> <cbc:Percent>20.0</cbc:Percent> <cac:TaxCategory> <cbc:ID>S</cbc:ID> <cbc:Name>20%</cbc:Name> <cbc:Percent>20.0</cbc:Percent> </cac:TaxCategory> </cac:TaxSubtotal> </cac:TaxTotal> <cac:Item> <cbc:Description>[P11190420] Toner Cartridge Brother DCP5500/MFC L 5700/6800 TN3410/3480 Katun Select</cbc:Description> <cbc:Name>[P11190420] Toner Cartridge Brother DCP5500/MFC L 5700/6800 TN3410/3480 Katun Select</cbc:Name> <cac:ClassifiedTaxCategory> <cbc:ID>S</cbc:ID> <cbc:Name>20%</cbc:Name> <cbc:Percent>20.0</cbc:Percent> </cac:ClassifiedTaxCategory> </cac:Item> </cac:InvoiceLine> <cac:InvoiceLine> <cbc:ID>2</cbc:ID> <cbc:InvoicedQuantity unitCode="H87">1.00</cbc:InvoicedQuantity> <cbc:LineExtensionAmount currencyID="RSD">600.00</cbc:LineExtensionAmount> <cac:TaxTotal> <cbc:TaxAmount currencyID="RSD">120.00</cbc:TaxAmount> <cac:TaxSubtotal> <cbc:TaxAmount currencyID="RSD">120.000</cbc:TaxAmount> <cbc:Percent>20.0</cbc:Percent> <cac:TaxCategory> <cbc:ID>S</cbc:ID> <cbc:Name>20%</cbc:Name> <cbc:Percent>20.0</cbc:Percent> <cbc:TaxExemptionReason></cbc:TaxExemptionReason> <cac:TaxScheme> <cbc:ID schemeID="UN/ECE 5153" schemeAgencyID="6">VAT</cbc:ID> </cac:TaxScheme> </cac:TaxCategory> </cac:TaxSubtotal> </cac:TaxTotal> <cac:Item> <cbc:Description>[U11124116] Usluga transporta</cbc:Description> <cbc:Name>[U11124116] Usluga transporta</cbc:Name> <cac:SellersItemIdentification> <cbc:ID>U11124116</cbc:ID> </cac:SellersItemIdentification> <cac:ClassifiedTaxCategory> <cbc:ID>S</cbc:ID> <cbc:Name>20%</cbc:Name> <cbc:Percent>20.0</cbc:Percent> <cbc:TaxExemptionReason></cbc:TaxExemptionReason> <cac:TaxScheme> <cbc:ID schemeID="UN/ECE 5153" schemeAgencyID="6">VAT</cbc:ID> </cac:TaxScheme> </cac:ClassifiedTaxCategory> </cac:Item> <cac:Price> <cbc:PriceAmount currencyID="RSD">600.00</cbc:PriceAmount> <cbc:BaseQuantity unitCode="H87">1.00</cbc:BaseQuantity> </cac:Price> </cac:InvoiceLine> </Invoice> I tried with: import xml.etree.ElementTree as ET tree = ET.parse("test.xml") root = tree.getroot() for x in root.findall('.//'): print(x.tag, " ", x.get('InvoiceLine')) and I get some result {urn:oasis:names:specification:ubl:schema:xsd:CommonBasicComponents-2}CustomizationID None {urn:oasis:names:specification:ubl:schema:xsd:CommonBasicComponents-2}ID None {urn:oasis:names:specification:ubl:schema:xsd:CommonBasicComponents-2}IssueDate None 
{urn:oasis:names:specification:ubl:schema:xsd:CommonBasicComponents-2}DueDate None But I need to extract following values in "InvoiceLine" section: InvoicedQuantity LineExtensionAmount TaxAmount Percent ID Description Name cbc:ID ...etc
this worked namespaces = { 'cbc': 'urn:oasis:names:specification:ubl:schema:xsd:CommonBasicComponents-2', 'cac': 'urn:oasis:names:specification:ubl:schema:xsd:CommonAggregateComponents-2', } # Extract the parameters you need for invoice_line in root.findall('.//cac:InvoiceLine', namespaces): invoiced_quantity = invoice_line.find('cbc:InvoicedQuantity', namespaces) line_extension_amount = invoice_line.find('cbc:LineExtensionAmount', namespaces) tax_total = invoice_line.find('cac:TaxTotal', namespaces) tax_amount = tax_total.find('cbc:TaxAmount', namespaces) tax_subtotal = tax_total.find('cac:TaxSubtotal', namespaces) percent = tax_subtotal.find('cbc:Percent', namespaces) Edit: I save your xml and ran this import xml.etree.ElementTree as ET # Define the file path file_path = 'input.xml' # Parse the XML file namespace = { 'cbc': 'urn:oasis:names:specification:ubl:schema:xsd:CommonBasicComponents-2', 'cac': 'urn:oasis:names:specification:ubl:schema:xsd:CommonAggregateComponents-2' } # Load and parse the XML file tree = ET.parse(file_path) root = tree.getroot() # Extracting invoice ID invoice_id = root.find('cbc:ID', namespace).text # Extracting invoice lines invoice_lines = [] for line in root.findall('cac:InvoiceLine', namespace): line_id = line.find('cbc:ID', namespace).text quantity = line.find('cbc:InvoicedQuantity', namespace).text line_amount = line.find('cbc:LineExtensionAmount', namespace).text item_description = line.find('cac:Item/cbc:Description', namespace).text item_name = line.find('cac:Item/cbc:Name', namespace).text invoice_lines.append({ 'line_id': line_id, 'quantity': quantity, 'line_amount': line_amount, 'item_description': item_description, 'item_name': item_name }) # Output the extracted data print(f"Invoice ID: {invoice_id}") print("Invoice Lines:") for line in invoice_lines: print(line) gave following result nvoice ID: IF149-0111/24 Invoice Lines: {'line_id': '1', 'quantity': '3.00', 'line_amount': '26574.00', 'item_description': '[P11190420] Toner Cartridge Brother DCP5500/MFC L 5700/6800 TN3410/3480 Katun Select', 'item_name': '[P11190420] Toner Cartridge Brother DCP5500/MFC L 5700/6800 TN3410/3480 Katun Select'} {'line_id': '2', 'quantity': '1.00', 'line_amount': '600.00', 'item_description': '[U11124116] Usluga transporta', 'item_name': '[U11124116] Usluga transporta'}
2
2
79,023,460
2024-9-25
https://stackoverflow.com/questions/79023460/handling-circular-imports-in-pydantic-models-with-fastapi
I'm developing a FastAPI application organized with the following module structure. ... β”‚ β”œβ”€β”€ modules β”‚ β”‚ β”œβ”€β”€ box β”‚ β”‚ β”‚ β”œβ”€β”€ routes.py β”‚ β”‚ β”‚ β”œβ”€β”€ services.py β”‚ β”‚ β”‚ β”œβ”€β”€ models.py # the sqlalchemy classes β”‚ β”‚ β”‚ β”œβ”€β”€ schemas.py # the pydantic schemas β”‚ β”‚ β”œβ”€β”€ toy β”‚ β”‚ β”‚ β”œβ”€β”€ routes.py β”‚ β”‚ β”‚ β”œβ”€β”€ services.py β”‚ β”‚ β”‚ β”œβ”€β”€ models.py β”‚ β”‚ β”‚ β”œβ”€β”€ schemas.py Each module contains SQLAlchemy models, Pydantic models (also called schemas), FastAPI routes, and services that handle the business logic. In this example, I am using two modules that represent boxes and toys. Each toy is stored in one box, and each box contains multiple toys, following a classic 1 x N relationship. With SQLAlchemy everything goes well, defining relationships is straightforward by using TYPE_CHECKING to handle circular dependencies: # my_app.modules.box.models.py from sqlalchemy.orm import Mapped, mapped_column, relationship if TYPE_CHECKING: from my_app.modules.toy.models import Toy class Box(Base): __tablename__ = "box" id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True) toys: Mapped[list["Toy"]] = relationship(back_populates="box") # my_app.modules.toy.models.py from sqlalchemy.orm import Mapped, mapped_column, relationship if TYPE_CHECKING: from my_app.modules.box.models import Box class Toy(Base): __tablename__ = "toy" id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True) box: Mapped["Box"] = relationship(back_populates="toys") This setup works perfectly without raising any circular import errors. However, I encounter issues when defining the same relationships between Pydantic schemas. If I import directly the modules on my schemas.py, # my_app.modules.box.schemas.py from my_app.modules.toy.schemas import ToyBase class BoxBase(BaseModel): id: int class BoxResponse(BoxBase): toys: list[ToyBase] # my_app.modules.toy.schemas.py from my_app.modules.box.schemas import BoxBase class ToyBase(BaseModel): id: int class ToyResponse(ToyBase): box: BoxBase I recieve the circular import error: ImportError: cannot import name 'ToyBase' from partially initialized module 'my_app.modules.toy.schemas' (most likely due to a circular import)... I also try the SQLAlchemy approach of TYPE_CHECKING and string declaration: # my_app.modules.box.schemas.py if TYPE_CHECKING: from my_app.modules.toy.schemas import ToyBase class BoxBase(BaseModel): id: int class BoxResponse(BoxBase): toys: list["ToyBase"] # my_app.modules.toy.schemas.py if TYPE_CHECKING: from my_app.modules.box.schemas import BoxBase class ToyBase(BaseModel): id: int class ToyResponse(ToyBase): box: "BoxBase" But apparently, pydantic doesn't support this: raise PydanticUndefinedAnnotation.from_name_error(e) from e pydantic.errors.PydanticUndefinedAnnotation: name 'ToyBase' is not defined (Some answers) suggest that the issue comes from a poor module organization. (Others) suggest, too complex and hard to understand solutions. Maybe I'm wrong but I consider the relationship between Box and Toy something trivial and fundamental that should be manageable in any moderately complex project. For example, a straightforward use case would be to request a toy along with its containing box and vice versa, a box with all its toys. Aren't they legitimate requests? So, my question How can I define interrelated Pydantic schemas (BoxResponse and ToyResponse) that reference each other without encountering circular import errors? 
I'm looking for a clear and maintainable solution that preserves the independence of the box and toy modules, similar to how relationships are handled in SQLAlchemy models. Any suggestions, or at least an explanation of why this is so difficult to achieve?
I had this same issue and spent hours trying to figure it out; in the end I ended up just not type-annotating the specific circular imports, and I've lived happily ever after (so far). Maybe you could benefit from doing the same ;) That being said, there are multiple ways of fixing circular imports. What you've tried so far is: (1) normal typing, which doesn't work when a child imports a parent; (2) string literals such as toys: list["ToyResponse"], which still cause circular import errors because you are still importing the class to resolve the type; (3) conditional imports using TYPE_CHECKING. This last method seems promising and I believe you've almost got it, but you were missing one small detail: the TYPE_CHECKING constant must be checked at every place where the circularly imported types are used, see below. As per the example you provided, you conditionally import your classes but you don't conditionally guard the annotations on the class attributes, which results in an undefined-name error when the type is accessed. As highlighted in the mypy docs: The typing module defines a TYPE_CHECKING constant that is False at runtime but treated as True while type checking. Since code inside if TYPE_CHECKING: is not executed at runtime, it provides a convenient way to tell mypy something without the code being evaluated at runtime. This is most useful for resolving import cycles. # my_app.modules.box.schemas.py from pydantic import BaseModel from my_app.modules.toy.schemas import ToyResponse class BoxResponse(BaseModel): id: int toys: list["ToyResponse"] # Type check not required here since this is the parent class # my_app.modules.toy.schemas.py from typing import TYPE_CHECKING from pydantic import BaseModel if TYPE_CHECKING: from my_app.modules.box.schemas import BoxResponse class ToyResponse(BaseModel): id: int if TYPE_CHECKING: box: "BoxResponse" # type checkers see this field; at runtime Pydantic never does Personally the above seems hackish. If you are on Python 3.7 or later you could also use from __future__ import annotations, which treats all type hints as string literals during the initial import and should prevent the circular import error.
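As a follow-up to the answer above: if the circular field should actually exist at runtime (so it appears in FastAPI responses), one option is to keep the guarded imports and string annotations and then resolve the forward references once both modules are loaded, using Pydantic v2's model_rebuild(). This is only a minimal sketch: the schemas_link module name is hypothetical, the toy schema is assumed to mirror the box one, and the exact namespace-resolution rules are worth checking against your Pydantic version.

# my_app/modules/box/schemas.py -- guarded import, but keep the annotation at runtime
from typing import TYPE_CHECKING
from pydantic import BaseModel

if TYPE_CHECKING:
    from my_app.modules.toy.schemas import ToyBase

class BoxBase(BaseModel):
    id: int

class BoxResponse(BoxBase):
    toys: list["ToyBase"]  # deferred forward reference; resolved by model_rebuild() below

# my_app/modules/schemas_link.py (hypothetical) -- import once before the routes are built
import my_app.modules.box.schemas as box_schemas
import my_app.modules.toy.schemas as toy_schemas

# Make each forward-referenced name visible in the module where the other model
# is defined, then let Pydantic finish building the deferred models.
box_schemas.ToyBase = toy_schemas.ToyBase
toy_schemas.BoxBase = box_schemas.BoxBase
box_schemas.BoxResponse.model_rebuild()
toy_schemas.ToyResponse.model_rebuild()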
6
2
79,023,187
2024-9-25
https://stackoverflow.com/questions/79023187/enumerating-all-possible-lists-of-any-length-of-non-negative-integers
I would like to generate/enumerate all possible lists of non-negative integers such that the algorithm will generate lists like the following at some point [1] [24542,0] [245,904609,848,24128,350,999] In other words, for all possible non-negative integers, generate all possible lists which contain that many non-negative integers. I have figured that the trick for a list with two numbers is to enumerate their values diagonally like this first value\second value 0 1 2 3 0 0 (this will be generated first) 2 (this third etc.) 5 9 1 1 (this second) 4 8 2 3 7 3 6 def genpair(): x = 0 y = 0 yield x,y maxx = 0 while True: maxx += 1 x = maxx y = 0 while x >= 0: yield x,y x -= 1 y += 1 gen = genpair() for i in range(10): print(next(gen)) But does the same trick (or another) also make this work for lists of arbitrary length?
One of infinitely many ways to do this: Imagine a number line with cells 1, 2, 3, and up to infinity. Now think of a binary number representation, with bits indicating if there is a "break" at the cell border. So, 1 -> [1] 10 -> [2] 11 -> [1,1] 100 -> [3] 101 -> [2, 1] 110 -> [1, 2] Note how number of bits is the same as the sum of the list, and number of positive bits indicates the number of list elements. Code would look somewhat like: def list_gen(n): res = [] counter = 0 while n: counter += 1 n, flag = divmod(n, 2) if flag: res.append(counter) counter = 0 return res
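To see the mapping in action, here is a small driver around the decoder above (my own sketch, not part of the original answer). The code consumes bits least-significant-first, so the lists come out mirrored relative to the table, which still enumerates every list exactly once; and since the question asks for non-negative integers, subtracting 1 from every element turns each list of positive integers into a list of non-negative ones of the same length.

def list_gen(n):  # decoder from the answer above
    res = []
    counter = 0
    while n:
        counter += 1
        n, flag = divmod(n, 2)
        if flag:
            res.append(counter)
            counter = 0
    return res

for n in range(1, 8):
    positives = list_gen(n)
    non_negatives = [x - 1 for x in positives]  # shift down to allow zeros
    print(n, positives, non_negatives)
# 1 [1] [0]
# 2 [2] [1]
# 3 [1, 1] [0, 0]
# 4 [3] [2]
# 5 [1, 2] [0, 1]
# 6 [2, 1] [1, 0]
# 7 [1, 1, 1] [0, 0, 0]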
3
4
79,023,519
2024-9-25
https://stackoverflow.com/questions/79023519/how-to-check-the-variable-is-of-certain-type
I have a python JSON data and need to iterate through the key pair values and check if the value is of certain type and perform an operation. here is the example: total = 0 grades = { 'Math': 90, 'Science': None, 'English': 85, 'History': 'A', 'Art': 88 } for grade in grades.values(): total += grade i need to check if the value is a number then add to the total.
You can check the variable type using isinstance() like this for grade in grades.values(): if isinstance(grade, (int, float)): # Check if the grade is a number total += grade This will add the values only if the variable is of type int or float
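One caveat to the check above: bool is a subclass of int in Python, so isinstance(True, (int, float)) is True and boolean values would silently be counted as 0 or 1. If that matters for your data, a stricter sketch:

total = 0
grades = {'Math': 90, 'Science': None, 'English': 85, 'History': 'A', 'Art': 88}

for grade in grades.values():
    # numbers only, explicitly excluding booleans
    if isinstance(grade, (int, float)) and not isinstance(grade, bool):
        total += grade

print(total)  # 263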
1
2
79,023,227
2024-9-25
https://stackoverflow.com/questions/79023227/how-to-split-pandas-series-into-two-based-on-whether-or-not-the-index-contains-a
I have pandas series that I want to split into two: one with all the entries of the original series where index contains a certain word and the other with all the remaining entries. Getting a series of entries which do contain a certain word in their index is easy: foo_series = original_series.filter(like = "foo") But how do I get the rest?
You could drop those indices from the original Series: foo_series = original_series.filter(like = "foo") non_foo_series = original_series.drop(foo_series.index) Or use boolean indexing: m = original_series.index.str.contains('foo') foo_series = original_series[m] non_foo_series = original_series[~m] Example: # input original_series = pd.Series([1, 2, 3, 4], index=['foo1', 'bar1', 'foo2', 'bar2']) # outputs # foo_series foo1 1 foo2 3 dtype: int64 # non_foo_series bar1 2 bar2 4 dtype: int64
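A small caveat on the boolean-indexing variant: str.contains treats the pattern as a regular expression by default, so a word containing regex metacharacters could match more than intended. Passing regex=False keeps it a literal substring test, which matches what filter(like=...) does; a short sketch using the example data above:

import pandas as pd

original_series = pd.Series([1, 2, 3, 4], index=['foo1', 'bar1', 'foo2', 'bar2'])

m = original_series.index.str.contains('foo', regex=False)  # literal match, like filter(like='foo')
foo_series, non_foo_series = original_series[m], original_series[~m]
print(list(foo_series.index), list(non_foo_series.index))  # ['foo1', 'foo2'] ['bar1', 'bar2']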
3
3
79,022,782
2024-9-25
https://stackoverflow.com/questions/79022782/return-all-rows-that-have-at-least-one-null-in-one-of-the-columns-using-polars
I need all the rows that have null in one of the predefined columns. I basically need this but i have one more requirement that I cant seem to figure out. Not every column needs to be checked. I have a function that returns the names of the columns that need to be checked in a list. Assume this is my dataframe: data = pl.from_repr(""" β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ c ┆ d β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ str ┆ bool β”‚ β•žβ•β•β•β•β•β•β•β•ͺ═══════β•ͺ═════β•ͺ═══════║ β”‚ abc ┆ null ┆ u ┆ true β”‚ β”‚ def ┆ abc ┆ v ┆ true β”‚ β”‚ ghi ┆ def ┆ null┆ true β”‚ β”‚ jkl ┆ uvw ┆ x ┆ true β”‚ β”‚ mno ┆ xyz ┆ y ┆ null β”‚ β”‚ qrs ┆ null ┆ z ┆ null β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ """) Doing data.filter(polars.any_horizontal(polars.all().is_null())) gives me all rows where any of the columns contain null. Sometimes it's fine for column c to contain a null so let's not check it. what I want is this: β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ c ┆ d β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ str ┆ bool β”‚ β•žβ•β•β•β•β•β•β•β•ͺ═══════β•ͺ═════β•ͺ═══════║ β”‚ abc ┆ null ┆ u ┆ true β”‚ β”‚ mno ┆ xyz ┆ y ┆ null β”‚ β”‚ qrs ┆ null ┆ z ┆ null β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ Row 3 is not shown even though there is a null value in column c. columns = ["a", "b", "d"] data.filter(polars.any_horizontal(polars.all(*columns).is_null())) This gives me polars.exceptions.SchemaError: invalid series dtype: expected 'Boolean', got 'str' I thought maybe the columns aren't aligned or somethig because data has more columns than what the filter uses, so i did this. columns = ["a", "b", "d"] # notice `.select(columns)` here data.select(columns).filter(polars.any_horizontal(polars.all(*columns).is_null())) But is still get the same error. How do I get the full rows of data that contain a null in one of ["a", "b", "d"] columns
If you want to exclude some columns you can use .exclude(): import polars as pl data.filter(pl.any_horizontal(pl.exclude("c").is_null())) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ c ┆ d β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ str ┆ bool β”‚ β•žβ•β•β•β•β•β•ͺ══════β•ͺ═════β•ͺ══════║ β”‚ abc ┆ null ┆ u ┆ true β”‚ β”‚ mno ┆ xyz ┆ y ┆ null β”‚ β”‚ qrs ┆ null ┆ z ┆ null β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ Or you can just use column names with .col(): import polars as pl cols = ["a","b","d"] data.filter(pl.any_horizontal(pl.col(cols).is_null())) shape: (3, 4) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ c ┆ d β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ str ┆ bool β”‚ β•žβ•β•β•β•β•β•ͺ══════β•ͺ═════β•ͺ══════║ β”‚ abc ┆ null ┆ u ┆ true β”‚ β”‚ mno ┆ xyz ┆ y ┆ null β”‚ β”‚ qrs ┆ null ┆ z ┆ null β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ If you want to be really flexible, you can use selectors, for example polars.selectors.exclude(): import polars.selectors as cs data.filter(pl.any_horizontal(cs.exclude("c").is_null())) shape: (3, 4) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ c ┆ d β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ str ┆ bool β”‚ β•žβ•β•β•β•β•β•ͺ══════β•ͺ═════β•ͺ══════║ β”‚ abc ┆ null ┆ u ┆ true β”‚ β”‚ mno ┆ xyz ┆ y ┆ null β”‚ β”‚ qrs ┆ null ┆ z ┆ null β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜
2
2
79,022,245
2024-9-25
https://stackoverflow.com/questions/79022245/opencv-doesnt-read-images-from-some-directories
I'm trying to read a 16-bit grayscale PNG with OpenCV. This image is stored on a network share. When I try to load the image with cv.imread() nothing is returned: import cv2 as cv print(img_paths[0]) print(type(img_paths[0])) print(img_paths[0].exists()) img = cv.imread(img_paths[0].as_posix(), cv.IMREAD_ANYDEPTH) print(type(img)) >>> M:\path\to\Meß-ID 1\images\img1.png >>> <class 'pathlib.WindowsPath'> >>> True >>> <class 'NoneType'> As you can see the file certainly exists. It even is loadable with PIL: from PIL import Image img = Image.open(img_paths[0]) print(img) >>> <PIL.PngImagePlugin.PngImageFile image mode=I;16 size=2560x300 at 0x27F77E95940> And only when I copy the image to the directory of the .ipynb I'm working in, openCV is able to load the image: import shutil shutil.copy(img_paths[0], "./test_img.png") img = cv.imread("./test_img.png", cv.IMREAD_ANYDEPTH) print(type(img)) >>> <class 'numpy.ndarray'> Why is OpenCV refusing to load the image?
Yes, OpenCV on Windows has issues with encodings. Usually, anything non-ASCII in the path string can cause you trouble. Shortest possible solution: im = cv.imdecode(np.fromfile(the_path, dtype=np.uint8), cv.IMREAD_UNCHANGED)
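For completeness, a self-contained sketch of that workaround, assuming img_paths[0] is the pathlib.WindowsPath from the question: np.fromfile reads the raw bytes (it has no trouble with the non-ASCII characters in the path), and cv.imdecode then parses the PNG while preserving the 16-bit depth.

import numpy as np
import cv2 as cv

buf = np.fromfile(img_paths[0], dtype=np.uint8)   # raw PNG bytes; NumPy handles the path fine
img = cv.imdecode(buf, cv.IMREAD_UNCHANGED)       # cv.IMREAD_ANYDEPTH also works for 16-bit grayscale
print(img.dtype, img.shape)                       # expect uint16 for a 16-bit PNG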
1
4
79,021,052
2024-9-25
https://stackoverflow.com/questions/79021052/how-can-i-create-parameterized-matrices-and-generate-the-final-matrix-on-demand
I am in a situation where I need to work with parameterized matrices. For e.g., say I start with two matrices A and B, A = [1 2] B = [a b] [3 4] [5 6] Here matrix B is parameterized with the variables a and b. I might at some point need to combine the matrices, say using matrix multiplication, to get AB = C: C = AB = [a+10 b+12 ] [3a+20 3b+24] I would now like to store matrix C with those variables in there and have the ability to provide values for the variables and evaluate the elements. I.e. matrix C would be a matrix parameterized by the variables a and b. Instead of generating C on the fly any time I need it by providing the parameters to B and doing all the matrix multiplications, I would like to store the general structure of the matrix C by doing the matrix multiplications once, with C inheriting any of the variables of its constituents. I am hoping this will save run time since the structure of matrix C is cached, i.e. I don't need to do the matrix operations every time I change a parameter. In my case, there maybe many such matrices, each with their own variable elements and I might need to combine them arbitrarily (matrix products, Kronecker products etc.). I was wondering if there is an established way to achieve this in Python. It would also be helpful if the solution was performant as the there will be many matrix operations in the code I am trying to run. Unfortunately, I have to stick with Python for now, so solutions in other languages would be less helpful. I have tried achieving this result using the sympy module, and I think I have a very crude setup going that seems to work, but I wanted to know if there was a more canonical way of doing this in Python. Also, I am not sure if sympy would be the most performant, and I am not sure if such a system can be obtained using numpy. A simplified version of my sympy code is given below for reference: import sympy as s from sympy import Matrix from sympy.physics.quantum import TensorProduct as tp A = Matrix([[1,2], [3,4]]) a, b = s.symbols('a b') B = Matrix([[a,b], [5,6]]) # To obtain C, parameterized by a and b C = A*B display(C) # Finally, to evaluate C with parameters a=3 and b=0.5 C_eval = C.evalf(subs={a:3, b:0.5}) display(C_eval) # Or, to generate and evaluate a different combination of A and B D = tp(A,B) display(D) D_eval = D.evalf(subs={a:3, b:0.5}) display(D_eval)
I can see several solutions here. 1. SymPy (your current solution) SymPy is great for this! It allows you to cache expressions like matrix products, but it is primarily a symbolic library, so it is not as optimised for performance as numeric libraries. 2. NumPy + numexpr NumPy is extremely fast and you could combine it with numexpr (note that numexpr evaluates element-wise expressions, so an actual matrix product still needs NumPy's @ operator): import numpy as np import numexpr as ne A = np.array([[1, 2], [3, 4]]) B_expr = 'a * B1 + b * B2' B1 = np.array([[1, 0], [0, 5]]) B2 = np.array([[0, 1], [0, 6]]) AB_expr = 'A1 * (a * B1 + b * B2)' A1 = A a_val = 3 b_val = 0.5 ne.evaluate(AB_expr, local_dict={'a': a_val, 'b': b_val, 'A1': A1, 'B1': B1, 'B2': B2}) 3. SymEngine SymEngine is a C++-based symbolic library and it is much faster than SymPy. You can use it much like SymPy through its Python bindings: import symengine as se a, b = se.symbols('a b') A = se.Matrix([[1, 2], [3, 4]]) B = se.Matrix([[a, b], [5, 6]]) C = A * B C_eval = C.subs({a: 3, b: 0.5}) So if you need to couple symbolic logic with caching, you could use SymPy or SymEngine. If you want more performance, you could use NumPy + numexpr. Also, for very complex operations you could read about Theano (now succeeded by Aesara/PyTensor).
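If you stay with the SymPy route from the question, one more option worth mentioning (a sketch, not a benchmark): sympy.lambdify compiles the cached symbolic matrix into a NumPy-backed function once, so repeated evaluations become plain floating-point calls with no symbolic substitution.

import sympy as s
from sympy import Matrix

a, b = s.symbols('a b')
A = Matrix([[1, 2], [3, 4]])
B = Matrix([[a, b], [5, 6]])
C = A * B  # symbolic structure computed once

C_func = s.lambdify((a, b), C, modules="numpy")  # compile to a fast numeric function
print(C_func(3, 0.5))
# [[13.  12.5]
#  [29.  25.5]]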
2
1
79,021,772
2024-9-25
https://stackoverflow.com/questions/79021772/poor-precision-when-plotting-small-wedge-compared-to-axis-size
I am trying to recreate the charts on this website: https://bumps.live/torpids/2022. I am using matplotlib and am running into an issue when drawing the logos, which I have recreated with the MWE below. I am drawing two semicircles next to each other, and the result is as expected when they are around the same size as the axis, but when they are much smaller there is a loss of precision and the semicircles no longer take up the space of half a circle. The radius of both semicircles is 0.1 in the following two figures, but the first figure has axis limits from -0.15 to 0.15 and the second figure has limits from -10 to 10 (I have zoomed in on the right figure). When using plt.show and zooming in, this issue does not occur. I am guessing that matplotlib has defined the wedges to a suitable degree of accuracy assuming that no one is zooming in a lot, although as I am zooming in, this is not sufficient. I asked ChatGPT and it suggested adding theresolution=100 kwarg to my wedges, but this appears to be deprecated or something as that gives an error. I am using python 3.12.3 and matplotlib 3.9.1.post1. I will need to produce around 350 logos, and I would be willing to add them as SVGs if the performance is good enough as a backup plan, but ideally I would like to understand how to fix this issue. import matplotlib.pyplot as plt from matplotlib.patches import Wedge fig, ax = plt.subplots(figsize=(5, 5)) radius = 0.1 left_semicircle = Wedge((0, 0), radius, 0, 180, color="r") right_semicircle = Wedge((0, 0), radius, 180, 360, color="b") ax.add_patch(left_semicircle) ax.add_patch(right_semicircle) axis_limit = 0.15 axis_limit = 10 ax.set_xlim((-axis_limit, axis_limit)) ax.set_ylim((-axis_limit, axis_limit)) ax.set_aspect('equal') plt.savefig(f"TwoSemicircles_{axis_limit}.pdf")
When you set the color, you are setting both facecolor (which is the color of the inside of the shape) and the edgecolor (which is the color of the outline). Matplotlib then draws the outline with a default line width of 1 point. That linewidth is preserved for the interactive zoom in plt.show (so remains too small to notice), but in your pdf the linewidth is a fixed proportion of the page size. Instead you should just set the facecolor left_semicircle = Wedge((0, 0), radius, 0, 180, facecolor="r") right_semicircle = Wedge((0, 0), radius, 180, 360, facecolor="b") then Matplotlib does not draw the edge.
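If you would rather keep the single color= argument, suppressing the stroke width achieves the same effect, since it is the outline (drawn at a fixed width in points) that overwhelms the tiny wedges in the saved PDF. A sketch of that variant:

from matplotlib.patches import Wedge

radius = 0.1
left_semicircle = Wedge((0, 0), radius, 0, 180, color="r", linewidth=0)   # no outline drawn
right_semicircle = Wedge((0, 0), radius, 180, 360, color="b", linewidth=0)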
1
4
79,009,542
2024-9-21
https://stackoverflow.com/questions/79009542/python-3-13-with-free-thread-is-slow
I was trying this new free-thread version of the interpreter, but find out that it actually takes longer than the GIL enabled version. I did observe that the usage on the CPU increase a lot for the free-thread interpreter, is there something I misunderstand about this new interpreter? Version downloaded: python-3.13.0rc2-amd64 Code: from concurrent.futures import ThreadPoolExecutor from random import randint import time def create_table(size): a, b = size table = [] for i in range(0, a): row = [] for j in range(0, b): row.append(randint(0, 100)) table.append(row) return table if __name__ == "__main__": start = time.perf_counter() with ThreadPoolExecutor(4) as pool: result = pool.map(create_table, [(1000, 10000) for _ in range(10)]) end = time.perf_counter() print(end - start, *[len(each) for each in result]) python3.13t takes 56sec python3.13 takes 26sec python3.12 takes 25sec
The primary culprit appears to be the randint module, as it is a static import and appears to share a mutex between threads. Another problem is that you're only able to process 4 tables at a time. Since you want to create 10 tables in total, you'll be running batches of 4-4-2. Here is the code with the randint problem addressed by replacing it with a SystemRandom instance per thread: from concurrent.futures import ThreadPoolExecutor from random import SystemRandom import time def create_table(size): a, b = size table = [] random = SystemRandom() for i in range(0, a): row = [] for j in range(0, b): row.append(random.randint(0, 100)) table.append(row) return table if __name__ == "__main__": start = time.perf_counter() with ThreadPoolExecutor(4) as pool: result = pool.map(create_table, [(1000, 10000) for _ in range(10)]) end = time.perf_counter() print(end - start, *[len(each) for each in result]) And here is some code that achieves the same thing, but is more flexible with the thread creation and avoids unnecessary inter-thread communication: import threading from random import SystemRandom import time def create_table(obj, result: list[list[int]]): a, b = obj print(f"Starting thread {threading.current_thread().name}") random = SystemRandom() result[:] = [[random.randint(0, 100) for j in range(b)] for i in range(a)] print(f"Finished thread {threading.current_thread().name}") if __name__ == "__main__": start = time.perf_counter() obj = (1000, 10000) results: list[list[list[int]]] = [] threads: list[threading.Thread] = [] for _ in range(4): result: list[list[int]] = [] thread = threading.Thread(target=create_table, args=(obj, result)) thread.start() threads.append(thread) results.append(result) for thread in threads: thread.join() print([len(r) for r in results]) end = time.perf_counter() print(end - start)
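A related tweak, assuming the goal is simply independent per-thread state rather than cryptographic randomness: a plain random.Random() instance created inside each task also avoids the shared module-level generator, without the os.urandom syscall cost that SystemRandom pays on every draw.

from concurrent.futures import ThreadPoolExecutor
from random import Random
import time

def create_table(size):
    a, b = size
    rng = Random()  # private PRNG per task: no state shared between threads
    return [[rng.randint(0, 100) for _ in range(b)] for _ in range(a)]

if __name__ == "__main__":
    start = time.perf_counter()
    with ThreadPoolExecutor(4) as pool:
        result = list(pool.map(create_table, [(1000, 10000) for _ in range(10)]))
    print(time.perf_counter() - start, *[len(each) for each in result])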
2
2
79,006,642
2024-9-20
https://stackoverflow.com/questions/79006642/multiply-elements-of-list-column-in-polars-dataframe-with-elements-of-regular-py
I have a pl.DataFrame with a column comprising lists like this: import polars as pl df = pl.DataFrame( { "symbol": ["A", "A", "B", "B"], "roc": [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]], } ) shape: (4, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ roc β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ list[f64] β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ════════════║ β”‚ A ┆ [0.1, 0.2] β”‚ β”‚ A ┆ [0.3, 0.4] β”‚ β”‚ B ┆ [0.5, 0.6] β”‚ β”‚ B ┆ [0.7, 0.8] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Further, I have a regular python list weights = [0.3, 0.7] What's an efficient way to multiply pl.col("roc") with weights in a way where the first and second element of the column will be multiplied with the first and second element of weights, respectively? The expected output is like this: shape: (4, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ roc β”‚ roc_wgt β”‚ β”‚ --- ┆ --- β”‚ --- β”‚ β”‚ str ┆ list[f64] β”‚ list[f64] β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ════════════║══════════════║ β”‚ A ┆ [0.1, 0.2] β”‚ [0.03, 0.14] β”‚ = [0.1 * 0.3, 0.2 * 0.7] β”‚ A ┆ [0.3, 0.4] β”‚ [0.09, 0.28] β”‚ = [0.3 * 0.3, 0.4 * 0.7] β”‚ B ┆ [0.5, 0.6] β”‚ [0.15, 0.42] β”‚ = [0.5 * 0.3, 0.6 * 0.7] β”‚ B ┆ [0.7, 0.8] β”‚ [0.21, 0.56] β”‚ = [0.7 * 0.3, 0.8 * 0.7] β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
Update: Broadcasting of literals/scalars for the List type was added in 1.10.0 df.with_columns(roc_wgt = pl.col.roc * weights) shape: (4, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ roc ┆ roc_wgt β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ list[f64] ┆ list[f64] β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ════════════β•ͺ══════════════║ β”‚ A ┆ [0.1, 0.2] ┆ [0.03, 0.14] β”‚ β”‚ A ┆ [0.3, 0.4] ┆ [0.09, 0.28] β”‚ β”‚ B ┆ [0.5, 0.6] ┆ [0.15, 0.42] β”‚ β”‚ B ┆ [0.7, 0.8] ┆ [0.21, 0.56] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ As of Polars 1.8.0 list arithmetic has been merged. Follow on work will add support for broadcasting of literals (and scalars). https://github.com/pola-rs/polars/issues/8006 It can be added as a column for now. (df.with_columns(wgt = weights) .with_columns(roc_wgt = pl.col.roc * pl.col.wgt) ) shape: (4, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ roc ┆ wgt ┆ roc_wgt β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ list[f64] ┆ list[f64] ┆ list[f64] β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ════════════β•ͺ════════════β•ͺ══════════════║ β”‚ A ┆ [0.1, 0.2] ┆ [0.3, 0.7] ┆ [0.03, 0.14] β”‚ β”‚ A ┆ [0.3, 0.4] ┆ [0.3, 0.7] ┆ [0.09, 0.28] β”‚ β”‚ B ┆ [0.5, 0.6] ┆ [0.3, 0.7] ┆ [0.15, 0.42] β”‚ β”‚ B ┆ [0.7, 0.8] ┆ [0.3, 0.7] ┆ [0.21, 0.56] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Broadcasting of literals works for the Array datatype as of 1.8.0 dtype = pl.Array(float, 2) df.with_columns(roc_wgt = pl.col.roc.cast(dtype) * pl.lit(weights, dtype)) shape: (4, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ roc ┆ roc_wgt β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ list[f64] ┆ array[f64, 2] β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ════════════β•ͺ═══════════════║ β”‚ A ┆ [0.1, 0.2] ┆ [0.03, 0.14] β”‚ β”‚ A ┆ [0.3, 0.4] ┆ [0.09, 0.28] β”‚ β”‚ B ┆ [0.5, 0.6] ┆ [0.15, 0.42] β”‚ β”‚ B ┆ [0.7, 0.8] ┆ [0.21, 0.56] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
3
4
79,003,772
2024-9-19
https://stackoverflow.com/questions/79003772/why-does-decoding-a-large-base64-string-appear-to-be-faster-in-single-threaded-p
I have a number of large base64 strings to decode, ranging from a few hundred of MB up to ~5 GB each. The obvious solution is a single call to base64.b64decode ("reference implementation"). I'm trying to speed up the process by using multiprocessing, but, surprisingly, it is much slower than the reference implementation. On my machine I get: reference_implementation decoding time = 7.37 implmementation1 Verify result Ok decoding time = 7.59 threaded_impl Verify result Ok decoding time = 13.24 mutiproc_impl Verify result Ok decoding time = 11.82 What I am doing wrong? (Warning: memory hungry code!) import base64 from time import perf_counter from binascii import a2b_base64 import concurrent.futures as fut from time import sleep from gc import collect from multiprocessing import cpu_count def reference_implementation(encoded): """This is the implementation that gives the desired result""" return base64.b64decode(encoded) def implmementation1(encoded): """Try to call the directly the underlying library""" return a2b_base64(encoded) def threaded_impl(encoded, N): """Try multi threading calling the underlying library""" # split the string into pieces d = len(encoded) // N # number of splits lbatch = (d // 4) * 4 # lenght of first N-1 batches, the last is len(source) - lbatch*N batches = [] for i in range(N-1): start = i * lbatch end = (i + 1) * lbatch # print(i, start, end) batches.append(encoded[start:end]) batches.append(encoded[end:]) # Decode ret = bytes() with fut.ThreadPoolExecutor(max_workers=N) as executor: # Submit tasks for execution and put pieces together for result in executor.map(a2b_base64, batches): ret = ret + result return ret def mutiproc_impl(encoded, N): """Try multi processing calling the underlying library""" # split the string into pieces d = len(encoded) // N # number of splits lbatch = (d // 4) * 4 # lenght of first N-1 batches, the last is len(source) - lbatch*N batches = [] for i in range(N-1): start = i * lbatch end = (i + 1) * lbatch # print(i, start, end) batches.append(encoded[start:end]) batches.append(encoded[end:]) # Decode ret = bytes() with fut.ProcessPoolExecutor(max_workers=N) as executor: # Submit tasks for execution and put pieces together for result in executor.map(a2b_base64, batches): ret = ret + result return ret if __name__ == "__main__": CPU_NUM = cpu_count() # Prepare a 4.6 GB byte string (with less than 32 GB ram you may experience swapping on virtual memory) repeat = 60000000 large_b64_string = b'VGhpcyBzdHJpbmcgaXMgZm9ybWF0dGVkIHRvIGJlIGVuY29kZWQgd2l0aG91dCBwYWRkaW5nIGJ5dGVz' * repeat # Compare implementations print("\nreference_implementation") t_start = perf_counter() dec1 = reference_implementation(large_b64_string) t_end = perf_counter() print('decoding time =', (t_end - t_start)) sleep(1) print("\nimplmementation1") t_start = perf_counter() dec2 = implmementation1(large_b64_string) t_end = perf_counter() print("Verify result", "Ok" if dec2==dec1 else "FAIL") print('decoding time =', (t_end - t_start)) del dec2; collect() # force freeing memory to avoid swapping on virtual mem sleep(1) print("\nthreaded_impl") t_start = perf_counter() dec3 = threaded_impl(large_b64_string, CPU_NUM) t_end = perf_counter() print("Verify result", "Ok" if dec3==dec1 else "FAIL") print('decoding time =', (t_end - t_start)) del dec3; collect() sleep(1) print("\nmutiproc_impl") t_start = perf_counter() dec4 = mutiproc_impl(large_b64_string, CPU_NUM) t_end = perf_counter() print("Verify result", "Ok" if dec4==dec1 else "FAIL") print('decoding time =', (t_end - 
t_start)) del dec4; collect()
TL;DR: Python parallelism sucks due to the global interpreter lock and inter-process communication. Data copies also introduce overheads making your parallel implementations even slower, especially since the operation tends to be memory-bound. A native CPython module can be written to overpass the CPython's limits and strongly speed up the computation. First things first, multi-threading in CPython is limited by the global interpreter lock (GIL) which prevents such a computation to be faster than a sequential one (like in nearly all cases except generally I/Os). This point has been pointed out by Barmar in comments. Moreover, multi-processing is limited by the inter-process communication (IPC) between workers which is generally slow. This is especially true here since the computation is rather memory intensive and IPC is done using relatively slow pickling internally. Not to mention this IPC operation is mostly done sequentially impacting even further the scalability of the operation if it is not completely memory-bound on the target platform. On top of that, operations like encoded[start:end] creates a new bytes which is a (partial) copy of encoded. This increase even further the memory bandwidth pressure which should be already an issue (it is clearly the case on my laptop). The same thing is true for ret = ret + result which create a new growing copy for every process resulting in a quadratic execution. With so many copies in a rather memory-bound operation, this is not surprising for the operation to be slower than the sequential part. The thing is you can hardly do better in Python! Without any convoluted tricks, there is no other way to parallelize the operation in pure-Python. I mean all module have to create either CPython threads (GIL bound) or CPython processes (IPC bound). The GIL cannot be released as long as you work on any CPython objects in multiple threads. The only solution is to use native threads operating on bytes' internal buffer (which does not require the GIL to be locked. This can be done either using native languages (e.g. C/C++/Rust), Numba or Cython. However, there is another big issue impacting performance to consider: bytes' copies. AFAIK, Numba and Cython prevent you to avoid that. The best you can do is to extract the input memory buffer, write the output in parallel in a native array (not limited by the GIL), and finally then creates a bytes object. The thing is creating this last object take >60% of the time on my machine and there is no way to make it faster because bytes objects are immutable. A native Python module written in native language can overpass this limit. Technically, this is also possible in Cython by directly calling C API functions and managing object yourself but this is pretty low-level, and in the end, this looks like more a C code than Python one (with additional Cython annotations). Indeed, the CPython API provides a PyBytes_FromStringAndSize function so to creates a bytes object and it allows developers to write in the bytes' buffer only after creating it without associated buffer (i.e. the first parameter must be NULL). This is the only way to avoid an expensive copy. That being said, note that a new bytes object needs to be created and filling its internal buffer results in a lot of pages faults slowing things a bit. AFAIK, there is no way to avoid that in CPython besides not creating huge buffers. 
In fact, the computation would be faster if you could process the whole string chunk by chunk (if possible) so to benefit from CPU cache and memory recycling of the standard allocator. Indeed, this can strongly reduce the DRAM pressure and avoid page-faults. The bad news is that if you do the chunk by chunk computation in CPython, then I think you cannot benefit from multiple threads anymore. Indeed, chunks will be too small for multiple threads to really worth it, especially on large-scale servers (where multiple threads are also required to saturate the RAM and the L3 cache). CPython's parallelism simply sucks (especially due to the GIL). I also found out that base64.b64decode is surprisingly not so efficient. I wrote a faster (but less safe) implementation. There are ways to write a fast and safe implementation (typically thanks to SIMD), but this is complicated and not the purpose of this post. Besides, using multiple threads is enough to make the computation memory-bound on most machines (including mine) so it does not worth it to optimize further the resulting (rather naive) sequential implementation. Note I used OpenMP so to parallelize the C loop with only 1 line (for large inputs). Here is the base64.c main file of the fast parallel CPython native module (assuming the input is correctly formatted): #define PY_SSIZE_T_CLEAN // Required for large bytes objects on 64-bit machines #include <Python.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <assert.h> #include <omp.h> int base64_table[256]; // Generate a conversion table for sake of performance static inline void init_table() { static const unsigned char base64_chars[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"; for (int i = 0; i < 64; ++i) base64_table[i] = -1; for (int i = 0; i < 64; ++i) base64_table[base64_chars[i]] = i; base64_table['='] = 0; } static inline int decode_char(unsigned char c) { return base64_table[c]; } // Assume the input is correctly formatted static PyObject* decode(PyObject* self, PyObject* args) { PyObject* input_obj; // Extract the input parameter and check its type if(!PyArg_ParseTuple(args, "O!", &PyBytes_Type, &input_obj)) return NULL; char* input = PyBytes_AS_STRING(input_obj); Py_ssize_t input_length = PyBytes_GET_SIZE(input_obj); assert(input_length % 4 == 0); int padding = 0; padding += input_length >= 1 && input[input_length - 1] == '='; padding += input_length >= 2 && input[input_length - 2] == '='; // Assume there is enough memory Py_ssize_t output_length = (input_length / 4) * 3 - padding; PyObject* output_obj = PyBytes_FromStringAndSize(NULL, output_length); assert(output_obj != NULL); char* output = PyBytes_AS_STRING(output_obj); assert(output != NULL); #pragma omp parallel for schedule(guided) if(input_length >= 8*1024*1024) for(Py_ssize_t k = 0; k < input_length / 4; ++k) { const Py_ssize_t i = k * 4; const Py_ssize_t j = k * 3; const int a = decode_char(input[i]); const int b = decode_char(input[i + 1]); const int c = decode_char(input[i + 2]); const int d = decode_char(input[i + 3]); assert(a > 0 && b > 0 && c > 0 && d > 0); const int merged = (a << 18) + (b << 12) + (c << 6) + d; if(j < output_length) output[j] = (merged >> 16) & 0xFF; if(j < output_length) output[j + 1] = (merged >> 8) & 0xFF; if(j < output_length) output[j + 2] = merged & 0xFF; } return output_obj; } static PyMethodDef MyMethods[] = { {"decode", decode, METH_VARARGS, "Parallel base64 decoding function."}, {NULL, NULL, 0, NULL} }; static struct PyModuleDef parallel_base64 
= { PyModuleDef_HEAD_INIT, "parallel_base64", NULL, -1, MyMethods }; PyMODINIT_FUNC PyInit_parallel_base64(void) { init_table(); return PyModule_Create(&parallel_base64); } and here is the setup.py file to build it: from setuptools import setup, Extension module = Extension( 'parallel_base64', sources=['base64.c'], extra_compile_args=['-fopenmp'], extra_link_args=['-fopenmp'] ) setup( name='parallel_base64', version='1.0', description='A parallel base64 module written in C', ext_modules=[module], ) You can test it and call it with python setup.py build_ext --inplace. Then you can import it with import parallel_base64 and just call parallel_base64.decode(large_b64_string). Performance results Using repeat = 30000000, on my Linux laptop with an AMD Ryzen 7 5700U (configured in performance mode) and Python 3.12.3, I get the following results: decoding time = 3.6366550829989137 implmementation1 Verify result Ok decoding time = 3.5178445390010893 threaded_impl Verify result Ok decoding time = 9.623698087001685 mutiproc_impl Verify result Ok decoding time = 13.102449985999556 c_module_impl Verify result Ok decoding time = 0.29033970499949646 We can see that the native parallel implementation is 12.5 times faster using 8 cores. This is because it not only use multiple cores but also benefit from a more efficient computation. The DRAM throughput reaches 23 GiB/s which is pretty good. It should be far enough for data read from a high-end SSD. Note that if you want to read data from a SSD efficiently, then reading it (all at once) from Python to bytes is inefficient (because of copies and page faults). Memory mapping is generally faster, especially on a high-end SSD. This can be done with mmap on Linux. Note that Numpy provides such a feature (though munmap is missing) and Numpy arrays have the benefit to be mutable so they can be reused many times which might help to avoid page-fault performance issues and enable further optimizations. In the end, Python is maybe simply not the right too to get good performance for such kind of computation though native modules can help a lot to speed up some specific parts (otherwise, it is not really a Python code anymore).
5
5
79,019,204
2024-9-24
https://stackoverflow.com/questions/79019204/too-many-positional-arguments-on-one-machine-but-does-not-know-the-error-on-t
I am trying to set up a GitHub Actions workflow (definition below) checking for pylint requirements. I fixed this all on my local machine. Then I noticed getting a too-many-positional-arguments on the workflow. But my local machine doesn't know that specific error. Now I tried to fix this by using pylint: disable=too-many-positional-arguments. Resulting in the following pylint error. usc_sign_in_bot/db_helpers.py:345:0: W0012: Unknown option value for 'disable', expected a valid pylint message and got 'too-many-positional-arguments' (unknown-option-value) Also adding the R0917 error message to the .pylintrc (also defined below) disable menu, does not help as it just gets a local error about not knowing the message to disable. Both github & local have pylint==3.3.1 installed. I think it's some kind of version mismatch but i am not sure how that would happen as the pylint versions are both the same and both use the same .pylintrc. Does anyone know where this mismatch comes from? .github/workflows/pylint: name: pylint on: [push] jobs: build: runs-on: ubuntu-latest strategy: matrix: python-version: ["3.10"] steps: - uses: actions/checkout@v4 - name: Set up Python ${{ matrix.python-version }} uses: actions/setup-python@v3 with: python-version: ${{ matrix.python-version }} - name: Install dependencies run: | python -m pip install --upgrade pip python -m pip install -r requirements.txt pip install pytest pylint==3.3.1 - name: Analysing the code with pylint run: | pylint usc_sign_in_bot/ pylint tests/ .pylintrc: [MAIN] # Specify a configuration file. load-plugins=pylint.extensions.docstyle # Use multiple processes to speed up Pylint. Specifying 0 will auto-detect the # number of processors available to use. jobs=0 # Specify a score threshold to be exceeded before program exits with error. fail-under=10.0 [MESSAGES CONTROL] # Disable the message, report, category or checker with the given id(s). disable=C0199,E0401,E0611 # Disable the score feature, we want it right score=no [FORMAT] # Maximum number of characters on a single line. max-line-length=100 # Allow the body of an if to be on the same line as the test if there is no # else. single-line-if-stmt=no [DESIGN] # Maximum number of arguments for function / method. max-args=10 [DOCSTRING] # Require all classes and methods to have a docstring. docstring-min-length=10 [CONVENTION] # Ensure docstrings are present for all modules, classes, methods, and functions. good-names=i,j,k,ex,Run,_ [REPORTS] # Tweak the output format. You can have a full report with `yes`. reports=no [TYPECHECK] generated-members=numpy.*,torch.* [EXCEPTIONS] # This option represents a list of qualified names for which no member or method should be checked. ignored-classes=NotImplementedError
I also experienced a similar issue; try setting max-positional-arguments=10 in your .pylintrc. Also, have a look at https://pylint.readthedocs.io/en/latest/user_guide/messages/refactor/too-many-positional-arguments.html Problematic code: # +1: [too-many-positional-arguments] def calculate_drag_force(velocity, area, density, drag_coefficient): return 0.5 * drag_coefficient * density * area * velocity**2 drag_force = calculate_drag_force(30, 2.5, 1.225, 0.47) Correct code: def calculate_drag_force(*, velocity, area, density, drag_coefficient): return 0.5 * drag_coefficient * density * area * velocity**2 drag_force = calculate_drag_force( velocity=30, area=2.5, density=1.225, drag_coefficient=0.47 )
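For reference, a sketch of where that setting could live in the .pylintrc from the question, next to the existing max-args entry. The option (like the R0917 message itself) only exists in the pylint 3.3 series and later, which is also the usual explanation for one environment reporting unknown-option-value while another accepts it.

[DESIGN]
# Maximum number of arguments for function / method.
max-args=10
# Maximum number of positional arguments (R0917 / too-many-positional-arguments).
max-positional-arguments=10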
2
3
79,020,533
2024-9-24
https://stackoverflow.com/questions/79020533/using-multiple-client-certificates-with-python-and-selenium
I’m working on a web-scrape project using Python and Selenium with a Chrome driver, which requires client certificates to access pages. I have 2 scenarios it must handle: Different certificates allow access to different URLs (e.g. Certificate A accesses URLs 1, 2 and 3, and Certificate B accesses URLs 4, 5 and 6) Multiple certificates can access the same URL (e.g. Certificate A and B both can access URLs 7, 8 and 9 – those URLs return different company-specific data with each different cert) I’m on Windows/Windows Server, and have used the Registry entry AutoSelectCertificateForUrls, which auto-selects a certificate, based on URL (or wildcard). But for scenario #2 above, it does no good. So ideally, I’d like to pass the URL and Cert name to the Python script, then have Chrome use that Cert when accessing the specified URL, but I’m not seeing a way to do that. So far, I have: from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait, Select chrome_options = webdriver.ChromeOptions() chrome_options.add_argument('--allow-insecure-localhost') chrome_options.add_argument('--ignore-ssl-errors=yes') chrome_options.add_argument('--ignore-certificate-errors') driver = webdriver.Chrome() driver.get(url) : : # scrape code here Does anyone have good step-by-step instructions to handle this?
import sqlite3 import win32crypt from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options DATABASE_PATH = 'path/to/database.db' # Database with URLs and cert thumbprints CHROMEDRIVER_PATH = 'path/to/chromedriver' def fetch_thumbprint_for_url(url): conn = sqlite3.connect(DATABASE_PATH) cursor = conn.cursor() cursor.execute("SELECT thumbprint FROM certs WHERE url = ?", (url,)) result = cursor.fetchone() conn.close() return result[0] if result else None def get_cert_from_store(thumbprint): store = win32crypt.CERT_SYSTEM_STORE_CURRENT_USER store_handle = win32crypt.CertOpenStore(win32crypt.CERT_STORE_PROV_SYSTEM, 0, None, store, "MY") cert_context = win32crypt.CertFindCertificateInStore( store_handle, win32crypt.X509_ASN_ENCODING, 0, win32crypt.CERT_FIND_HASH, thumbprint, None ) if cert_context: return cert_context[0].get("CERT_CONTEXT") raise Exception("Certificate not found.") def setup_driver(): chrome_options = Options() chrome_options.add_argument('--allow-insecure-localhost') chrome_options.add_argument('--ignore-ssl-errors=yes') chrome_options.add_argument('--ignore-certificate-errors') service = Service(CHROMEDRIVER_PATH) return webdriver.Chrome(service=service, options=chrome_options) def access_url_with_cert(url): thumbprint = fetch_thumbprint_for_url(url) if not thumbprint: raise Exception("No thumbprint found for this URL.") cert = get_cert_from_store(thumbprint) if not cert: raise Exception("Certificate retrieval failed.") driver = setup_driver() driver.get(url) return driver if __name__ == "__main__": test_url = "https://example.com" # Update with the actual URL driver = access_url_with_cert(test_url) driver.quit()
3
1
78,998,888
2024-9-18
https://stackoverflow.com/questions/78998888/matplotlib-issue-with-mosaic-and-colorbars
I am facing a strange behaviour with my code. I don't understand why the subplot at the top left has a different space between the imshow and the colorbar compared to the subplot at the top right. And also I don't understand why the colorbar at the bottom is not aligned with the one at the top right. Can you explain this ? import matplotlib.pyplot as plt import numpy as np matrix = np.random.rand(100, 100) mosaic = "AB;CC" fig = plt.figure(layout="constrained") ax_dict = fig.subplot_mosaic(mosaic) img = ax_dict['A'].imshow(matrix, aspect="auto") fig.colorbar(img, ax=ax_dict['A']) img = ax_dict['B'].imshow(matrix, aspect="auto") fig.colorbar(img, ax=ax_dict['B']) img = ax_dict['C'].imshow(matrix, aspect="auto") fig.colorbar(img, ax=ax_dict['C']) plt.show()
I posted this issue on the Matplotlib GitHub page and got an answer from jklymak. This is not the most elegant way to do it, but it works. However, if you want to place the colorbar at the bottom, you have to adapt the code, updating the values in inset_axes and also the orientation of the colorbar.
img = ax_dict['A'].imshow(matrix, aspect='auto')
fig.colorbar(img, cax=ax_dict['A'].inset_axes([1.02, 0, 0.05, 1]))

img = ax_dict['B'].imshow(matrix, aspect='auto')
fig.colorbar(img, cax=ax_dict['B'].inset_axes([1.02, 0, 0.05, 1]))

img = ax_dict['C'].imshow(matrix, aspect='auto')
fig.colorbar(img, cax=ax_dict['C'].inset_axes([1.02, 0, 0.05, 1]))
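For the bottom placement mentioned above, a minimal sketch (the rectangle values are illustrative, not from the original answer): inset_axes takes [x0, y0, width, height] in axes coordinates, so a thin strip just below the axes plus a horizontal orientation does the job.
img = ax_dict['C'].imshow(matrix, aspect='auto')
# a strip spanning the full axes width, placed just below the axes
cax = ax_dict['C'].inset_axes([0, -0.25, 1, 0.05])
fig.colorbar(img, cax=cax, orientation='horizontal')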
2
1
79,017,946
2024-9-24
https://stackoverflow.com/questions/79017946/breaking-the-json-decode
Update: PanicException was fixed by pull/18249 in Polars 1.6.0 Broadcast length was fixed by pull/19148 in Polars 1.10.0 shape: (1, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ meta_data β”‚ β”‚ --- β”‚ β”‚ struct[0] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ {} β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ I want to make the string of empty dictionary to be a struct, but json_decode fails as there are some rows with strings of empty dictionary in a df import polars as pl df = pl.DataFrame({ "meta_data": ["{}"] }) df.with_columns(pl.col('meta_data').str.json_decode()) giving me, PanicException: called `Result::unwrap()` on an `Err` value: ComputeError(ErrString("a StructArray must contain at least one field")) edit - the column has more string containing key value pairs also, df = pl.DataFrame({'a': ['{}', '{"b": "c"}']}) df = df.with_columns(pl.col("a").str.json_decode()) the above one is working fine, but when I keep only '{}', then it breaks
At the moment Polars doesn't support structs without fields - see related issue. So it cannot work with only empty json objects. However, if you know which fields can be in your data you can supply appropriate dtype parameter to pl.Expr.str.json_decode(): df = pl.DataFrame({"a": ["{}"]}) df.with_columns( pl.col('a').str.json_decode(dtype=pl.Struct({"b": pl.Utf8})) ) shape: (1, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ a β”‚ β”‚ --- β”‚ β”‚ struct[1] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ {null} β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Which would also work for rows with non-empty data: df = pl.DataFrame({'a': ['{}', '{"b": "c"}']}) print(df.with_columns( pl.col('a').str.json_decode(dtype=pl.Struct({"b": pl.Utf8})) )) shape: (2, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ a β”‚ β”‚ --- β”‚ β”‚ struct[1] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ {null} β”‚ β”‚ {"c"} β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ If you don't know the data in advance then you cannot provide proper dtype parameter. In this case polars will try to determine schema based on data, using infer_schema_length parameter. By default it would check first 100 values to define schema. If there's no meaningful values within the rows defined by infer_schema_length then your query might fail, and if the schema is not fully represented within these rows then the result might be incomplete: df = pl.DataFrame({'a': ['{}', '{"b": "c"}', '{"e": "d"}']}) # we only check first 2 rows to determine schema # so field e is not in the result df.with_columns( pl.col('a').str.json_decode(infer_schema_length=2) ) shape: (3, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ a β”‚ β”‚ --- β”‚ β”‚ struct[1] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ {null} β”‚ β”‚ {"c"} β”‚ β”‚ {null} β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ So you can just set infer_schema_length to None pushing polars to scan full data to determine the schema: df = pl.DataFrame({'a': ['{}', '{"b": "c"}', '{"e": "d"}']}) df.with_columns( pl.col('a').str.json_decode(infer_schema_length=None) ) shape: (3, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ a β”‚ β”‚ --- β”‚ β”‚ struct[2] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ {null,null} β”‚ β”‚ {"c",null} β”‚ β”‚ {null,"d"} β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
3
2
79,019,656
2024-9-24
https://stackoverflow.com/questions/79019656/hashed-cross-product-transformation-in-pytorch
I want to implement a hashed cross product transformation like the one Keras uses: >>> layer = keras.layers.HashedCrossing(num_bins=5, output_mode='one_hot') >>> feat1 = np.array([1, 5, 2, 1, 4]) >>> feat2 = np.array([2, 9, 42, 37, 8]) >>> layer((feat1, feat2)) <tf.Tensor: shape=(5, 5), dtype=float32, numpy= array([[0., 0., 1., 0., 0.], [1., 0., 0., 0., 0.], [0., 0., 0., 0., 1.], [1., 0., 0., 0., 0.], [0., 0., 1., 0., 0.]], dtype=float32)> >>> layer2 = keras.layers.HashedCrossing(num_bins=5, output_mode='int') >>> layer2((feat1, feat2)) <tf.Tensor: shape=(5,), dtype=int64, numpy=array([2, 0, 4, 0, 2])> This layer performs crosses of categorical features using the "hashing trick". Conceptually, the transformation can be thought of as: hash(concatenate(features)) % num_bins. I'm struggling to understand the concatenate(features) part. Do I have to do the hash of each "pair" of features? In the meantime, I tried with this: >>> cross_product_idx = (feat1*feat2.max()+1 + feat2) % num_bins >>> cross_product = nn.functional.one_hot(cross_product_idx, num_bins) It works, but not using a hash function can cause problems with distributions
I could trace it to this part of the code where they simply use "X" as a string separator on one set of crossed values from various features.

I'm struggling to understand the concatenate(features) part. Do I have to do the hash of each "pair" of features?

If you are crossing two features, for each pair of values from each feature, you would need to "combine" them somehow (which is what they term "concatenation"). The concatenation I see from the code is just string concatenation using the separator "X".
So if you have feature A: "A1", "A2" and feature B: "B1", "B2", "B3", you would need to do
hash("A1_X_B1") % num_bins
hash("A1_X_B2") % num_bins
hash("A1_X_B3") % num_bins
hash("A2_X_B1") % num_bins
hash("A2_X_B2") % num_bins
hash("A2_X_B3") % num_bins
and then one-hot encode these numbers if you want.
Tensoring the operations
I'm going to assume your features are categorical but numeric IDs, because if they were strings you would need to additionally map them out to integers.
import torch

PRIME_NUM = 2_147_483_647

def feature_cross(feature_a: torch.Tensor, feature_b: torch.Tensor, num_bins: int) -> torch.Tensor:
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    feature_a = feature_a.to(device)
    feature_b = feature_b.to(device)

    # Add an additional dimension and repeat the feature to match the other feature's size
    a_expanded = feature_a.unsqueeze(1).expand(-1, feature_b.size(0))
    # Add an additional dimension and repeat the feature to match the other feature's size
    b_expanded = feature_b.unsqueeze(0).expand(feature_a.size(0), -1)

    combined = (a_expanded.long() * PRIME_NUM + b_expanded.long())

    hashed = combined % num_bins

    return hashed

feature_a = torch.tensor([1001, 1002, 1003, 1004], dtype=torch.long)
feature_b = torch.tensor([2001, 2002, 2003, 2004, 2005], dtype=torch.long)

num_bins = 1000
result = feature_cross(feature_a, feature_b, num_bins)

print(result)
To take an example, if A = [1,2,3] and B = [4,5], we are expanding them to
# a_expanded
1 1
2 2
3 3
# b_expanded
4 5
4 5
4 5
and combining them through addition (with prime number multiplication) to achieve a cross.
You're right that using tuples can also be an option for combining the values since tuples can be hashed, but I don't know of a tensorised way of creating tuples.
3
4
79,019,484
2024-9-24
https://stackoverflow.com/questions/79019484/error-table-already-exits-when-using-to-sql-if-exists-append-with-p
Using Panadas 2.2.3, sqlite3.version 2.6.0 and python 3.12.5, I get an error "table ... already exits" when using to_sql with if_exists='append'. I just try to append some data from a Pandas df to a SQLite DB table. Using if_exists='replace' produces the same result. In order to make sure that the db connection is active and the columns match, I used some simple print statements in a first try block and the failing to.sql in a second try block. Also a "select statement" from the same table is used in the first block. The first block is executed without an exception and the second block throws the message 'table "groupedData" already exists': (See print('ERROR Try 2')) Source code: try: print(db_conn) print(table_grouped) data = [x.keys() for x in db_conn.cursor().execute(f'select * from {table_grouped};').fetchall()] print(data) except Error as e: print('ERROR Try 1') print(e) try: print(df_grouped.head(5)) df_grouped.to_sql(table_grouped, db_conn, if_exists='append', index=False) #if_exists : {β€˜fail’, β€˜replace’, β€˜append’} db_conn.commit() except Error as e: print('ERROR Try 2') print(e) Output: <sqlite3.Connection object at 0x000001C0E7C0EB60> groupedData [['CustomerID', 'TotalSalesValue', 'SalesDate']] CustomerID TotalSalesValue SalesDate 0 12345 400.0 2020-02-01 1 12345 1050.0 2020-02-04 2 12345 10.0 2020-02-10 3 12345 200.0 2021-02-01 4 12345 50.0 2021-02-04 ERROR Try 2 table "groupedData" already exists
We can see what is happening by logging the SQL statements made by Pandas. This minimal example: import sqlite3 from sqlite3 import Error import pandas as pd table_name = 'tbl' df = pd.DataFrame([(1,)], columns=['a']) with sqlite3.connect(':memory:') as conn: # Log all (successful) SQL statements. conn.set_trace_callback(print) # Create table with differently cased name. CREATE = """CREATE TABLE Tbl (a int)""" conn.execute(CREATE) print('*** Updating table ***') try: df.to_sql(table_name, conn, if_exists='append', index=False) conn.commit() except Error as e: print(e) Produces this output: CREATE TABLE Tbl (a int) *** Updating table *** SELECT name FROM sqlite_master WHERE type IN ('table', 'view') AND name='tbl'; table "tbl" already exists So we can see that when Pandas checks for the table's existence, it uses the exact name that is passed to to_sql(), so the existing table is not found. However when Pandas attempts to create the table, SQLite will raise an error* if a table already exists with the same case-insensitive name, as we can see in the SQLite CLI: sqlite> CREATE TABLE T (a int); sqlite> CREATE TABLE t (a int); Parse error: table t already exists CREATE TABLE t (a int); ^--- error here Arguably Pandas could check in a case-insensitive way, as described here, but equally it could be argued that it is the programmer's responsibility to use consistent names†. * Pandas raises a ValueError if it detects that a table already exists, however the code is trapping an SQLite Error, so the exception isn't being raised by Pandas. † In fact, it seems that this issue has been raised before, and the Pandas developers elected to not make any changes.
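If consistent casing cannot be guaranteed, a small sketch of a case-insensitive pre-check (the helper name is illustrative, not part of the answer): look up the name actually stored in sqlite_master and pass that exact spelling to to_sql.
def existing_table_name(conn, name):
    """Return the stored table name if it exists (case-insensitively), else None."""
    row = conn.execute(
        "SELECT name FROM sqlite_master "
        "WHERE type = 'table' AND name = ? COLLATE NOCASE",
        (name,),
    ).fetchone()
    return row[0] if row else None

target = existing_table_name(db_conn, table_grouped) or table_grouped
df_grouped.to_sql(target, db_conn, if_exists='append', index=False)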
2
1
79,019,523
2024-9-24
https://stackoverflow.com/questions/79019523/is-typing-assert-never-removed-by-command-line-option-o-similar-to-assert-st
In Python, assert statement produces no code if command line optimization options -O or -OO are passed. Does it happen for typing.assert_never()? Is it safe to declare runtime assertions that will not be optimized out? Consider the case from typing import assert_never def func(item: int | str): match item: case int(): ... case str(): ... case _: assert_never(item) Is it guaranteed that the default branch will work even in optimized mode?
No. assert_never() is only a normal function consisting of a raise statement, at least in CPython. It is not removed at runtime, as with other typing features. It should be emphasized that assert_never() has nothing special: It just takes in an argument of type Never and never returns anything (i.e., it raises): def assert_never(arg: Never) -> Never: raise AssertionError("Expected code to be unreachable") Its inclusion in the stdlib was simply due to the expectation that it would be commonly used. To quote the typing guides: Because the assert_never() helper function is frequently useful, it is provided by the standard library as typing.assert_never starting in Python 3.11, and is also present in typing_extensions starting at version 4.1. β€” Β§ assert_never() and Exhaustiveness Checking | Python typing guides The same section also says: However, it is also possible to define a similar function in your own code, for example if you want to customize the runtime error message.
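A small sketch contrasting the two behaviours (requires Python 3.11+ for typing.assert_never, or typing_extensions >= 4.1 on older versions):
from typing import assert_never

def describe(item: int | str) -> str:
    assert isinstance(item, (int, str))  # stripped when running with -O / -OO
    match item:
        case int():
            return "int"
        case str():
            return "str"
        case _:
            assert_never(item)  # an ordinary call; still raises under -O / -OO

# Under `python -O` the plain assert is gone, so a bad value falls through the
# match and assert_never raises AssertionError at runtime.
describe(3.14)  # type checkers flag this call; it raises either way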
1
2
79,004,567
2024-9-19
https://stackoverflow.com/questions/79004567/selenium-headless-broke-after-chrome-update
After updating google chrome this weekend, headless mode using Selenium python API is bringing up a blank window when running in windows. The identical code I had running on a Debian VM does not work any longer. Here is a code snippet: chrome_options = Options() chrome_options.add_argument("--headless=new") #previously used --headless chrome_options.add_argument('--disable-gpu') chrome_options.add_argument('--no-sandbox') chrome_options.add_argument("--disable-dev-shm-usage") chrome_options.add_argument("--disable-automation") chrome_options.add_argument("--disable-extensions") chrome_options.add_experimental_option("excludeSwitches", ["enable-automation"]) driver = webdriver.Chrome(options=chrome_options) To isolate the problem, I removed all fqdn dns blocks I had enforced for privacy including: ad.doubleclick.net, analytics.yahoo.com, google-analytics.com, googleadservices.com, plausible.io, stats.g.doubleclick.net
It's a new bug in Chrome / Chromedriver 129 when using the new headless mode on Windows: https://github.com/SeleniumHQ/selenium/issues/14514#issuecomment-2357777800. https://issues.chromium.org/issues/359921643#comment2 In the meantime, use --window-position=-2400,-2400 to hide the window. chrome_options.add_argument("--window-position=-2400,-2400") Or use Chrome's older headless mode (while it still exists), until the fix is released: chrome_options.add_argument("--headless=old")
3
6
79,020,229
2024-9-24
https://stackoverflow.com/questions/79020229/creating-a-default-value-recursively-given-a-type-types-genericalias
Given a type t (originally comes from function annotations), I need to create a default value of that type. Normally, t() will do just that and work for many types, including basic types such as bool or int. However, tuple[bool, int]() returns an empty tuple, which is not a correct default value. It can get slightly trickier with more complex types such as tuple[bool, list[int]]. I saw that tuple[bool, int].__args__ returns (<class 'bool'>, <class 'int'>), which might be useful for writing a recursive function that implements this, but I'm still having trouble with this. Is there an existing function to do this and return the default value? If not, how would I write this code to work with all standard types? I'm using Python 3.11.
Since tuple is the only instantiable standard type where the default value should not be just an instantiation of the type with no argument, you can simply special-case it in a function that recursively creates a default value for a given type: from types import GenericAlias def get_default_value(typ): if isinstance(typ, GenericAlias) and issubclass(typ.__origin__, tuple): return typ(map(get_default_value, typ.__args__)) return typ() so that: print(get_default_value(tuple[bool, list[int]])) outputs: (False, []) Demo: https://ideone.com/wlb6TL
2
2
79,020,484
2024-9-24
https://stackoverflow.com/questions/79020484/topic-modelling-many-documents-with-low-memory-overhead
I've been working on a topic modelling project using BERTopic 0.16.3, and the preliminary results were promising. However, as the project progressed and the requirements became apparent, I ran into a specific issue with scalability. Specifically: For development/testing, it needs to train reasonably quickly on a moderate number of documents (tens of thousands to low hundred thousands) Our dev machines are Macs, so this probably has to be done on CPU For production, it needs to train on a large number of documents (several million) without blowing up memory usage For a baseline, with the default settings on my machine, BERTopic has a peak memory usage of roughly 35 kB per document, which easily becomes hundreds of GBs or even TBs for the amount of data that will be provided in production Ideally, this would have peak memory usage sublinear in the number of documents. That last requirement necessitates batching the documents, since loading them all into memory at once requires linear memory. So, I've been looking into clustering algorithms that work with online topic modelling. BERTopic's documentation suggests scikit-learn's MiniBatchKMeans, but the results I'm getting from that aren't very good. Some models I've looked at include: Birch via scikit-learn: uses even more memory than BERTopic's default HDBSCAN even when batched. Also runs much slower. IncrementalDBSCAN via incdbscan: Seemed promising at first, but the runtime and eventually memory ballooned. For ~120k documents in batches of 5000, it didn't use more than 4GB of RAM in the first 3Β½ hours, but didn't finish within ten hours, and used nearly 40GB of RAM at some point in the middle. AgglomerativeClustering via scikit-learn: gave very good results from initial testing (perhaps even better than HDBSCAN), but it doesn't implement the partial_fit method. I found this answer on a different question which suggests it's possible to train two of them using single linkage independently and then merge them, but it gives no indication as to how. The latter two also don't provide the predict method, limiting their utility. I am fairly new to the subject, so perhaps I'm approaching this completely wrong and the immediate problem I'm trying to solve has no solution. So to be clear, at the base level, the question I'm trying to answer is: How do I perform topic modelling (and get good results) on a large number of documents without using too much memory?
In general, advanced techniques like UMAP and HDBSCAN are helpful in producing high quality results on larger datasets but will take more memory. Unless it's absolutely required, you may want to consider relaxing this constraint for the sake of performance, real-world human time, and actual cost (hourly instance or otherwise). At this scale for a workflow you expect to go to production, rather than trying to work around this in software it may be easier to switch hardware. The GPU-accelerated UMAP and HDBSCAN in cuML can handle this much data very quickly -- quick enough that it's probably worth considering renting a GPU-enabled system if you don't have one locally. For the following example, I took a sample of one million Amazon reviews, encoded them into embeddings (384 dimensions), and used the GPU UMAP and HDBSCAN in the current cuML release (v24.08). I ran this on a system with an H100 GPU. from bertopic import BERTopic from sentence_transformers import SentenceTransformer import pandas as pd from cuml.manifold.umap import UMAP from cuml.cluster import HDBSCAN df = pd.read_json("Electronics.json.gz", lines=True, nrows=1000000) reviews = df.reviewText.tolist() # Create embeddings sentence_model = SentenceTransformer("all-MiniLM-L6-v2") embeddings = sentence_model.encode(reviews, batch_size=1024, show_progress_bar=True) reducer = UMAP(n_components=5) %time reduced_embeddings = reducer.fit_transform(embeddings) CPU times: user 1min 33s, sys: 7.2 s, total: 1min 40s Wall time: 7.31 s clusterer = HDBSCAN() %time clusterer.fit(reduced_embeddings) CPU times: user 21.5 s, sys: 125 ms, total: 21.6 s Wall time: 21.6 s There's an example of how to run these steps on GPUs in the BERTopic FAQs. I work on these projects at NVIDIA and am a community contributor to BERTopic, so if you run into any issues please let me know and file a Github issue.
3
1
79,020,378
2024-9-24
https://stackoverflow.com/questions/79020378/do-i-need-to-use-timezones-with-timedelta-and-datetime-now
If I only use datetime.now() with timedelta to calculate deltas, is it safe to ignore time zones? For example, is there a case where if a start time is before daylight savings, and an end time is after, that I will get the wrong result if I don't use a time zone aware call to datetime.now()?
No, it is not safe. Do calculations in UTC if you want the actual time between timezone-aware values. Two values in the same timezone without UTC will give "wall time". For example, DST ends Nov 3, 2024 at 2am: # "pip install tzdata" for up-to-date Windows IANA database. import datetime as dt import zoneinfo as zi zone = zi.ZoneInfo('America/Los_Angeles') start = dt.datetime(2024, 11, 3, 0, 0 , 0, tzinfo=zone) end = dt.datetime(2024, 11, 3, 3, 0 , 0, tzinfo=zone) print(end - start) print(end.astimezone(dt.UTC) - start.astimezone(dt.UTC)) Output: 3:00:00 4:00:00 Even though both start and end times are time zone-aware, midnight to 3am appears to be 3 hours, but is actually 4 hours due to gaining an hour when DST ends. Be careful with timedelta as well: print(start + dt.timedelta(hours=24)) print((start.astimezone(dt.UTC) + dt.timedelta(hours=24)).astimezone(zone)) Output: 2024-11-04 00:00:00-08:00 2024-11-03 23:00:00-08:00 Adding 24 hours to a time zone-aware values doesn't account for the time shift. Do the calculation in UTC and display as the original TZ. Note the second time accounts for gaining an hour and represents exactly 24 hours later. The first time doesn't. You may want the first calculation if you want the same "wall time" but the next day, such as a recurring daily meeting at 9am...you may not want adding a day to shift the meeting time on DST transitions.
2
5
79,020,232
2024-9-24
https://stackoverflow.com/questions/79020232/assign-multi-index-variable-values-based-on-the-number-of-elements-in-a-datafram
I have a large csv dataset the looks like the following: id,x,y,z 34295,695.117,74.0177,70.6486 20915,800.784,98.5225,19.3014 30369,870.428,98.742,23.9953 48151,547.681,53.055,174.176 34026,1231.02,73.7678,203.404 34797,782.725,73.9831,218.592 15598,983.502,82.9373,314.081 34076,614.738,86.3301,171.316 20328,889.016,98.9201,13.3068 ... If I consider each of these lines an element, I would like to have a data structure where I can easily divide space into x,y,z ranges (3-d blocks of space) and determine how many elements are within a given block. For instance if I divided into cubes of 100 x 100 x 100: counts[900][100][100] = 3 because id's 20915, 30369, and 20328 from the excerpt of the csv above are all within the range x = 800-900, y = 0-100, and z = 0-100. The brute force way to create something like this is to create a multi-level dictionary as follows: import numpy import pandas df = pandas.read_csv("test.csv") xs = numpy.linspace(0, 1300, 14, endpoint=True) ys = numpy.linspace(0, 1000, 11, endpoint=True) zs = numpy.linspace(0, 1000, 11, endpoint=True) c = {} for x_index, x in enumerate(xs[:-1]): c[xs[x_index + 1]] = {} for y_index, y in enumerate(ys[:-1]): c[xs[x_index + 1]][ys[y_index + 1]] = {} for z_index, z in enumerate(zs[:-1]): c[xs[x_index + 1]][ys[y_index + 1]][zs[z_index + 1]] = df[(df["x"] > xs[x_index]) & (df["x"] <= xs[x_index + 1]) & (df["y"] > ys[y_index]) & (df["y"] <= ys[y_index + 1]) & (df["z"] > zs[z_index]) & (df["z"] <= zs[z_index + 1])]["id"].count() if (c[xs[x_index + 1]][ys[y_index + 1]][zs[z_index + 1]] > 0): print("c[" + str(xs[x_index + 1]) + "][" + str(ys[y_index + 1]) + "][" + str(zs[z_index + 1]) + "] = " + str(c[xs[x_index + 1]][ys[y_index + 1]][zs[z_index + 1]])) This gives the expected output of: c[600.0][100.0][200.0] = 1 c[700.0][100.0][100.0] = 1 c[700.0][100.0][200.0] = 1 c[800.0][100.0][300.0] = 1 c[900.0][100.0][100.0] = 3 c[1000.0][100.0][400.0] = 1 c[1300.0][100.0][300.0] = 1 But since the actual production CSV file is very large, it is quite slow. Any suggestions for how to make it fast and a little less clunky?
You could cut and value_counts: tmp = df[['x', 'y', 'z']] bins = np.arange(0, np.ceil(np.max(tmp)/100)*100, 100) tmp.apply(lambda s: pd.cut(s, bins, labels=bins[1:])).value_counts().to_dict() Output: {(900.0, 100.0, 100.0): 3, (600.0, 100.0, 200.0): 1, (700.0, 100.0, 100.0): 1, (700.0, 100.0, 200.0): 1, (800.0, 100.0, 300.0): 1, (1000.0, 100.0, 400.0): 1} Or round up to the nearest 100 before value_counts: (np.ceil(df[['x', 'y', 'z']].div(100)) .mul(100).astype(int) .value_counts(sort=False) .to_dict() ) Output: {(600, 100, 200): 1, (700, 100, 100): 1, (700, 100, 200): 1, (800, 100, 300): 1, (900, 100, 100): 3, (1000, 100, 400): 1, (1300, 100, 300): 1}
1
1
79,019,358
2024-9-24
https://stackoverflow.com/questions/79019358/converting-pandas-dataframe-to-wiki-markup-table
I'm automating some data processing and creating jira tickets out of it. Pandas does have to_html or to_csv or even to_markdown. But jira supports only wiki markup for creating a table. e.g. <!-- wiki markup --> ||header1||header2||header3||\r\n|cell 11|cell 12|cell 13|\r\n|cell 21|cell 22|cell 23| will create header1 header2 header3 cell 11 cell 12 cell 13 cell 21 cell 22 cell 23 Is there anyway to convert pandas dataframe to wiki markup table to be used in Jira? I'm keeping df.iterrows as Last resort since iterating over dataframe is not a recommended solution as per answers in How can I iterate over rows in a Pandas DataFrame? Since my expected dataframe is small, iteration should be fine in my case. This question can be considered as more of a curiosity what can be done in case of larger dataframes.
Don't reinvent the wheel, tabulate supports a jira template: from tabulate import tabulate tabulate(df, headers='keys', tablefmt='jira', showindex=False) Output: '|| header1 || header2 || header3 ||\n| cell 11 | cell 12 | cell 13 |\n| cell 21 | cell 22 | cell 23 |' If you really want the \r\n line separator: tabulate(df, headers='keys', tablefmt='jira', showindex=False).replace('\n', '\r\n')
3
1
79,019,231
2024-9-24
https://stackoverflow.com/questions/79019231/how-to-reduce-the-dimension-of-csv-file
Suppose I have one CSV file with dimension mΓ—n means m rows and n columns. I want to reduce its dimension by replacing average value of corresponding sub matrix. Example 1: Given we have 6Γ—6 matrix (CSV file): col1,col2,col3,col4,col5,col6 a1,b1,c1,d1,e1, f1 a2,b2,c2,d2,e2, f2 a3,b3,c3,d3,e3, f3 a4,b4,c4,d4,e4, f4 a5,b5,c5,d5,e5, f5 a6,b6,c6,d6,e6, f6 If we want 2Γ—2 matrix, then resultant CSV file should be below: col1, col2 a', d' a", d" Where a'=(a1+a2+a3+b1+b2+b3+c1+c2+c3)/9 a"=(a4+a5+a6+b4+b5+b6+c4+c5+c6)/9 d'=(d1+d2+d3+e1+e2+e3+f1+f2+f3)/9 d"=(d4+d5+d6+e4+e5+e6+f4+f5+f6)/9 Example:2 Given we have 5Γ—6 matrix (CSV file): col1,col2,col3,col4,col5,col6 a1,b1,c1,d1,e1, f1 a2,b2,c2,d2,e2, f2 a3,b3,c3,d3,e3, f3 a4,b4,c4,d4,e4, f4 a5,b5,c5,d5,e5, f5 If we want 2Γ—2 matrix, then resultant CSV file should be below: col1, col2 a', d' a", d" Where a'=(a1+a2+a3+b1+b2+b3+c1+c2+c3)/9 a"=(a4+a5+b4+b5+c4+c5)/6 d'=(d1+d2+d3+e1+e2+e3+f1+f2+f3)/9 d"=(d4+d5+e4+e5+f4+f5)/6 Example 3: Given we have 6Γ—5 matrix (CSV file): col1,col2,col3,col4,col5,col6 a1,b1,c1,d1,e1 a2,b2,c2,d2,e2 a3,b3,c3,d3,e3 a4,b4,c4,d4,e4 a5,b5,c5,d5,e5 a6,b6,c6,d6,e6 If we want 2Γ—2 matrix, then resultant CSV file should be below: col1, col2 a', d' a", d" Where a'=(a1+a2+a3+b1+b2+b3+c1+c2+c3)/9 a"=(a4+a5+a6+b4+b5+b6+c4+c5+c6)/9 d'=(d1+d2+d3+e1+e2+e3)/6 d"=(d4+d5+d6+e4+e5+e6)/6 I want the python code which can reduce the dimension by putting the average of the sum of all sub matrix. For in example1, we have given 6Γ—6 matrix, we want 2Γ—2 matrix, so we consider (6Γ·2) Γ— (6Γ·2) = 3Γ—3 sub matrix and calculate average of 9 elements of 3Γ—3 matrix , which is the one element of resultant 2Γ—2 matrix and so on. And in example2, if given dimension isn't multiple of resultant dimension, we use ceiling function, we first start by consider ceiling(5Γ·2)Γ—(6Γ·2) = 3Γ—3 matrix, and at the end(corner) , we mayn't get 3Γ—3 matrix, we just calculate average of remaining elements, as we see in example2, example3.
Assuming this example: col1 col2 col3 col4 col5 col6 0 0 1 2 3 4 5 1 6 7 8 9 10 11 2 12 13 14 15 16 17 3 18 19 20 21 22 23 4 24 25 26 27 28 29 5 30 31 32 33 34 35 You could rename the indexes (with set_axis), stack, and groupby.mean: import math n, m = 2, 2 # desired shape out = (df .set_axis(np.arange(df.shape[0])//math.ceil(df.shape[0]/n), axis=0) .set_axis(np.arange(df.shape[1])//math.ceil(df.shape[1]/m), axis=1) .stack().groupby(level=[0, 1]).mean().unstack() .rename(columns=lambda x: f'col{x+1}') # optional ) Alternative using numpy and padding (numpy.pad) with NaNs before reshape and nanmean: import math n, m = 2, 2 # desired shape out = pd.DataFrame(np.nanmean(np.pad(df.astype(float), [(0, df.shape[0]%n), (0, df.shape[1]%m)], constant_values=np.nan) .reshape(n, math.ceil(df.shape[0]/n), m, -1), axis=(1, 3) ) ).rename(columns=lambda x: f'col{x+1}') Output: col1 col2 0 7.0 10.0 1 25.0 28.0 Output with a 5x6 input (last row missing): col1 col2 0 7.0 10.0 1 22.0 25.0
1
2
79,015,399
2024-9-23
https://stackoverflow.com/questions/79015399/qstate-assignproperty-not-working-in-pyside
I saw this example using QState, which seems to work with PyQt5. However on trying to use PySide for this, I get this error; Traceback (most recent call last): File ".../qstate-error.py", line 16, in <module> state1.assignProperty(widget, b'test', 1) TypeError: 'PySide2.QtCore.QState.assignProperty' called with wrong argument types: PySide2.QtCore.QState.assignProperty(QWidget, bytes, int) Supported signatures: PySide2.QtCore.QState.assignProperty(PySide2.QtCore.QObject, bytes, typing.Any) Unfortunately I have to use PySide for work, so switching to PyQt5 is not an option. I have tried this with PySide2 and PySide6 - both are throwing this error. But then I saw this question that explicitly mentions PySide - so presumably it works. So do QState require some special setup that I am not aware of? Or is there something broken in my build? I'm working on Rocky 8, python 3.9.16 For reference, the following example works in pyqt5, but does not work in pyside2 (same result in pyside6). try: from PySide2.QtWidgets import QApplication, QWidget from PySide2.QtCore import QState except ImportError: from PyQt5.QtWidgets import QApplication, QWidget from PyQt5.QtCore import QState if __name__ == '__main__': app = QApplication([]) widget = QWidget() state1 = QState() state1.assignProperty(widget, b'test', 1) widget.show() app.exec_()
It's a bug in the signature definition, reported in PYSIDE-2444. Based on the PyQt/PySide convention, a C++ char should be a bytes in Python, in fact the function works as expected in PyQt5/6, and also the latest PySide6 releases (due to the fix related to the above bug). In reality, many functions that accept char also accept standard Python strings and are internally converted, meaning that you can just use the following: state1.assignProperty(widget, 'test', 1) In case you need more strict typing and always use bytes, you can eventually use a QState subclass and "override" assignProperty() by eventually converting the type and calling the base implementation. It won't be a real override (as you could just create a function with any name and call the original assignProperty()), but it would be more consistent in code reading.
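A minimal sketch of that subclass workaround for the affected PySide2 releases (the class name is mine, not from PySide): convert bytes names to str, then delegate to the base implementation.
from PySide2.QtCore import QState

class BytesFriendlyState(QState):
    def assignProperty(self, obj, name, value):
        # accept both bytes and str property names (works around PYSIDE-2444)
        if isinstance(name, (bytes, bytearray)):
            name = name.decode()
        super().assignProperty(obj, name, value)

state1 = BytesFriendlyState()
state1.assignProperty(widget, b'test', 1)  # `widget` as in the question's example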
1
2
79,018,992
2024-9-24
https://stackoverflow.com/questions/79018992/score-number-of-true-instances-with-python-polars
I am working on a dataframe with the following structure: df = pl.DataFrame({ "datetime": [ "2024-09-24 00:00", "2024-09-24 01:020", "2024-09-24 02:00", "2024-09-24 03:00", ], "Bucket1": [2.5, 8, 0.7, 12], "Bucket2": [3.7, 10.1, 25.9, 9.9], "Bucket3": [40.0, 15.5, 10.7, 56], }) My goal is to output a table that counts the number of times a group of values appears across my dataset, something like this: shape: (4, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ datetime ┆ 0-10 β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ u32 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════║ β”‚ 2024-09-24 00:00 ┆ 2 β”‚ β”‚ 2024-09-24 01:020 ┆ 1 β”‚ β”‚ 2024-09-24 02:00 ┆ 1 β”‚ β”‚ 2024-09-24 03:00 ┆ 1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ I have tried a couple approaches, like using pl.when together with .is_between to do something like when (Bucket1.is_between(0, 10, closed="left") | Bucket1.is_between(0, 10, closed="left")) then (1) But the result just evaluates to 1 regardless of how many Buckets evaluate to True. and also using concat list columns = ["Bucket1", "Bucket2", "Bucket3"] df.with_columns( pl.concat_list( [pl.col(col).is_between(0,10,closed="left") for col in columns] ) .arr.sum() .alias("0-10") ) The first approach didn't work as it just returns a list of booleans. The second one errors out with Invalid input for "col", Expected iterable of type "str" or "DataType", got iterable of "Expr" How could I tackle this problem using Polar?
In the latest version 1.8.1 of Polars, your code runs as expected after replacing the arr namespace with the list namespace. Moreover, it can be simplified to avoid the list comprehension as follows. cols = ["Bucket1", "Bucket2", "Bucket3"] df.with_columns( pl.concat_list(pl.col(cols).is_between(0, 10, closed="left")).list.sum().alias("0-10") ) shape: (4, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ datetime ┆ Bucket1 ┆ Bucket2 ┆ Bucket3 ┆ 0-10 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ f64 ┆ f64 ┆ f64 ┆ u32 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════════β•ͺ═════════β•ͺ═════════β•ͺ══════║ β”‚ 2024-09-24 00:00 ┆ 2.5 ┆ 3.7 ┆ 40.0 ┆ 2 β”‚ β”‚ 2024-09-24 01:020 ┆ 8.0 ┆ 10.1 ┆ 15.5 ┆ 1 β”‚ β”‚ 2024-09-24 02:00 ┆ 0.7 ┆ 25.9 ┆ 10.7 ┆ 1 β”‚ β”‚ 2024-09-24 03:00 ┆ 12.0 ┆ 9.9 ┆ 56.0 ┆ 1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜
2
2
79,019,014
2024-9-24
https://stackoverflow.com/questions/79019014/column-is-not-accessible-using-groupby-and-applylambda
I'm encountering a KeyError when trying to use the .apply() method on a pandas DataFrame after performing a groupby. The goal is to calculate the weighted average baced on the Industry_adjusted_return column. The error indicates that the 'Industry_adjusted_return' column cannot be found. Below is a minimal example that reproduces the issue: ``` import pandas as pd # Creating a small DataFrame data = { 'ISIN': ['DE000A1DAHH0', 'DE000KSAG888'], 'Date': ['2017-03-01', '2017-03-01'], 'MP_quintile': [0, 0], 'Mcap_w': [8089460.00, 4154519.75], 'Industry_adjusted_return': [-0.00869, 0.043052] } df = pd.DataFrame(data) df['Date'] = pd.to_datetime(df['Date']) # Ensure 'Date' is datetime type I'm using Python 3.8 with pandas version 1.3.3. Any insights into why this error occurs and how to fix it would be greatly appreciated. code: for i,grouped in wa.groupby(['Date','MP_quintile']): print(i,grouped) weighted_average_returns = grouped.apply(lambda x: (x['Industry_adjusted_return'] * (x['Mcap_w'] / x['Mcap_w'].sum())).sum()) the Error { "name": "KeyError", "message": "'Industry_adjusted_return'", "stack": "--------------------------------------------------------------------------- KeyError Traceback (most recent call last) File c:\\Users\\mbkoo\\anaconda3\\envs\\myenv\\Lib\\site-packages\\pandas\\core\\indexes\\base.py:3802, in Index.get_loc(self, key, method, tolerance) 3801 try: -> 3802 return self._engine.get_loc(casted_key) 3803 except KeyError as err: File c:\\Users\\mbkoo\\anaconda3\\envs\\myenv\\Lib\\site-packages\\pandas\\_libs\\index.pyx:138, in pandas._libs.index.IndexEngine.get_loc() File c:\\Users\\mbkoo\\anaconda3\\envs\\myenv\\Lib\\site-packages\\pandas\\_libs\\index.pyx:146, in pandas._libs.index.IndexEngine.get_loc() File pandas\\_libs\\index_class_helper.pxi:49, in pandas._libs.index.Int64Engine._check_type() KeyError: 'Industry_adjusted_return' The above exception was the direct cause of the following exception: KeyError Traceback (most recent call last) Cell In[10], line 8 3 print(i,grouped) 4 #weighted_average_returns = grouped.apply( lambda x: ((x['Mcap_w'] / x['Mcap_w'].sum()))).sum() 5 # grouped['weights_EW'] = 1 / len(grouped) 6 # grouped['return_EW'] = grouped['Industry_adjusted_return'] * grouped['weights_EW'] ----> 8 weighted_average_returns = grouped.apply(lambda x: (x['Industry_adjusted_return'] * (x['Mcap_w'] / x['Mcap_w'].sum())).sum()) # 9 # equally_weighted_returns=grouped['return_EW'].sum() 10 # # _df=cpd.from_dataframe(_df,allow_copy=True) 11 break File c:\\Users\\pandas\\core\\frame.py:9568, in DataFrame.apply(self, func, axis, raw, result_type, args, **kwargs) 9557 from pandas.core.apply import frame_apply 9559 op = frame_apply( 9560 self, 9561 func=func, (...) 
9566 kwargs=kwargs, 9567 ) -> 9568 return op.apply().__finalize__(self, method=\"apply\") File c:\\Users\\pandas\\core\\apply.py:764, in FrameApply.apply(self) 761 elif self.raw: 762 return self.apply_raw() --> 764 return self.apply_standard() File c:\\Users\\pandas\\core\\apply.py:891, in FrameApply.apply_standard(self) 890 def apply_standard(self): --> 891 results, res_index = self.apply_series_generator() 893 # wrap results 894 return self.wrap_results(results, res_index) File c:\\Users\\pandas\\core\\apply.py:907, in FrameApply.apply_series_generator(self) 904 with option_context(\"mode.chained_assignment\", None): 905 for i, v in enumerate(series_gen): 906 # ignore SettingWithCopy here in case the user mutates --> 907 results[i] = self.f(v) 908 if isinstance(results[i], ABCSeries): 909 # If we have a view on v, we need to make a copy because 910 # series_generator will swap out the underlying data 911 results[i] = results[i].copy(deep=False) Cell In[10], line 8, in <lambda>(x) 3 print(i,grouped) 4 #weighted_average_returns = grouped.apply( lambda x: ((x['Mcap_w'] / x['Mcap_w'].sum()))).sum() 5 # grouped['weights_EW'] = 1 / len(grouped) 6 # grouped['return_EW'] = grouped['Industry_adjusted_return'] * grouped['weights_EW'] ----> 8 weighted_average_returns = grouped.apply(lambda x: (x['Industry_adjusted_return'] * (x['Mcap_w'] / x['Mcap_w'].sum())).sum()) # 9 # equally_weighted_returns=grouped['return_EW'].sum() 10 # # _df=cpd.from_dataframe(_df,allow_copy=True) 11 break File c:\\Users\\pandas\\core\\series.py:981, in Series.__getitem__(self, key) 978 return self._values[key] 980 elif key_is_scalar: --> 981 return self._get_value(key) 983 if is_hashable(key): 984 # Otherwise index.get_value will raise InvalidIndexError 985 try: 986 # For labels that don't resolve as scalars like tuples and frozensets File c:\\Users\\pandas\\core\\series.py:1089, in Series._get_value(self, label, takeable) 1086 return self._values[label] 1088 # Similar to Index.get_value, but we do not fall back to positional -> 1089 loc = self.index.get_loc(label) 1090 return self.index._get_values_for_loc(self, loc, label) File c:\\Users\\pandas\\core\\indexes\\base.py:3804, in Index.get_loc(self, key, method, tolerance) 3802 return self._engine.get_loc(casted_key) 3803 except KeyError as err: -> 3804 raise KeyError(key) from err 3805 except TypeError: 3806 # If we have a listlike key, _check_indexing_error will raise 3807 # InvalidIndexError. Otherwise we fall through and re-raise 3808 # the TypeError. 3809 self._check_indexing_error(key) KeyError: 'Industry_adjusted_return'" }
The KeyError comes from how DataFrame.apply works: with the default axis=0 it calls your lambda once per column, passing each column in as a Series, so x['Industry_adjusted_return'] tries to look that label up in the Series' integer row index and fails. You should access the columns directly from grouped when calculating the weighted average. No need to use .apply() in this case since you're applying a vectorized operation:
import pandas as pd

data = {
    'ISIN': ['DE000A1DAHH0', 'DE000KSAG888'],
    'Date': ['2017-03-01', '2017-03-01'],
    'MP_quintile': [0, 0],
    'Mcap_w': [8089460.00, 4154519.75],
    'Industry_adjusted_return': [-0.00869, 0.043052]
}

df = pd.DataFrame(data)
df['Date'] = pd.to_datetime(df['Date'])

for i, grouped in df.groupby(['Date', 'MP_quintile']):
    print(f"Group: {i}\n{grouped}")

    weighted_average_returns = (grouped['Industry_adjusted_return'] * (grouped['Mcap_w'] / grouped['Mcap_w'].sum())).sum()

    print(f"Weighted Average Returns: {weighted_average_returns}\n")
which returns
Group: (Timestamp('2017-03-01 00:00:00'), 0)
           ISIN       Date  MP_quintile      Mcap_w  Industry_adjusted_return
0  DE000A1DAHH0 2017-03-01            0  8089460.00                 -0.008690
1  DE000KSAG888 2017-03-01            0  4154519.75                  0.043052
Weighted Average Returns: 0.008866641328527188
1
2
79,018,528
2024-9-24
https://stackoverflow.com/questions/79018528/exec-inside-a-function-and-generator
I need to write a custom exec function in python (for several purposes but this is not the problem here, so this custom exec which is called myExec will do exactly as exec for now). I went into this problem : def myExec(code): exec(code) code = """ a = 1 print(a) u = [a for x in range(3)] print(u) """ myExec(code) Running this program gives 1 Traceback (most recent call last): File "___.py", line 12, in <module> myExec(code) File "___.py", line 2, in myExec exec(code, globals(), locals()) File "<string>", line 4, in <module> File "<string>", line 4, in <listcomp> NameError: name 'a' is not defined So print(a) went without any problems. But the error occurs with the line u = [a for x in range(3)]. When the generator object is converted into a list, the name a seems undefined. Note that if the line were u = [a, a, a], then, no error is raised. Nor if we use exec instead of myExec. Any reason why and how to solve this ?
We can explain this behavior if we take a look at the decompiled code. from dis import dis def myExec(code): dis(code) a = 1 is compiled to STORE_NAME, so it stores a as a local variable here print(a) uses LOAD_NAME to load the local a. It is a local variable, so LOAD_NAME finds it. the list comprehension is compiled into a function that uses LOAD_GLOBAL for a That's where the error is coming from. a was created as a local variable and is accessed as a global one in the list comprehension. This results in a name error. This also explains why the exec works in the global scope (either by calling exec in the global scope or by passing globals()). Because then STORE_NAME stores a in the current scope (which is global) and LOAD_GLOBAL can find a. If you switch to Python3.12 which implements PEP 709 – Inlined comprehensions you will see that no extra function is created for the list comprehension and a is looked up with LOAD_NAME and can be found. So to fix your issue: either upgrade to Python3.12 or pass the globals()
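If upgrading is not an option, a minimal sketch of the second fix: give exec a single explicit namespace dict, so the name stored by STORE_NAME is found again by the comprehension's LOAD_GLOBAL (the dict name ns is only illustrative).
def myExec(code):
    ns = {}          # serves as both globals and locals for the executed code
    exec(code, ns)   # `a = 1` lands in ns; the listcomp resolves `a` from ns
    return ns

code = """
a = 1
print(a)
u = [a for x in range(3)]
print(u)
"""
myExec(code)  # prints 1 and [1, 1, 1]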
2
1
79,010,439
2024-9-21
https://stackoverflow.com/questions/79010439/async-server-and-client-scripts-stopped-working-after-upgrading-to-python3-12
So I have two scripts that use asyncio's servers' for communication, the script's work by the server opening an asyncio server and listening for connections, the client script connecting to that server, the server script stopping listening for new connections and assigning the reader and the writer to global variables so data sending and receiving would be possible. Server.py: import asyncio import sys class Server: def __init__(self): self.reader, self.writer = None, None self.connected = False async def listen(self, ip: str, port: int) -> None: """ Listens for incoming connections and handles the first connection. After accepting the first connection, it stops the server from accepting further connections. :param ip: IP address to listen on. :param port: Port number to listen on. """ async def handle_connection(reader, writer): print("Client connected!") # Assign the reader and writer to instance variables for later use self.reader, self.writer = reader, writer self.connected = True print("Shutting down server from accepting new connections") server.close() await server.wait_closed() print(f"Listening on {ip}:{port}") server = await asyncio.start_server(handle_connection, ip, port) try: async with server: await server.serve_forever() except KeyboardInterrupt: sys.exit(1) except asyncio.CancelledError: print("Connection canceled") except Exception as e: print(f"Unexpected error while trying to listen, Error: {e}") sys.exit(1) if __name__ == '__main__': server = Server() asyncio.run(server.listen('192.168.0.35', 9090)) Client.py: import asyncio class Client: def __init__(self): self.reader, self.writer = None, None self.connected = False async def connect(self, ip: str, port: int) -> None: """ Connects to a server at the specified IP address and port. :param ip: IP address of the server. :param port: Port number of the server. """ while not self.connected: try: self.reader, self.writer = await asyncio.wait_for( asyncio.open_connection(ip, port), 5 ) print(f"Connecting to {ip}:{port}") self.connected = True break except Exception as e: print( f"Failed to connect to {ip}:{port} retrying in 10 seconds." ) print(e) await asyncio.sleep(10) continue if __name__ == '__main__': Client = Client() asyncio.run(Client.connect('192.168.0.35', 9090)) In python 3.11 the execution process was as follows; the Client script is connecting to the listening server script, the server script is calling the handle_connection function and the function is raising asyncio.CancelledError which exits the listening method and keeps the reader and writer alive. However in python 3.12; the Client script is connecting to the listening server script, the server script is calling the handle_connection and is being stuck at await server.wait_closed(). I did some debugging and discovered that the await server.wait_closed() line is not returning unless the writer is closed using writer.close(), which we do not want because as I said the script will be using the reader and writer for communication. My intended action was for the server script to listen to a single connection, when a connection is established for it to stop listening for any further connection attempts but still maintain a connection between it and the original connected client. EDIT: I upgraded from python3.11.9 to python3.12.6
To stop serving server.close does the job. The wait_closed has a broader meaning. Let me quote directly from the asyncio code, it explains everything, also why it was working on 3.11: async def wait_closed(self): """Wait until server is closed and all connections are dropped. - If the server is not closed, wait. - If it is closed, but there are still active connections, wait. Anyone waiting here will be unblocked once both conditions (server is closed and all connections have been dropped) have become true, in either order. Historical note: In 3.11 and before, this was broken, returning immediately if the server was already closed, even if there were still active connections. An attempted fix in 3.12.0 was still broken, returning immediately if the server was still open and there were no active connections. Hopefully in 3.12.1 we have it right. """
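Applied to the question's code, a minimal sketch of the handler (based on the listen() method from the question): since the goal is to keep the accepted connection open, only close the listener and do not await wait_closed() there; on 3.12+ that call now also waits for the kept-alive connection to drop, which is exactly the observed hang.
async def handle_connection(reader, writer):
    print("Client connected!")
    self.reader, self.writer = reader, writer
    self.connected = True
    print("Shutting down server from accepting new connections")
    server.close()  # cancels serve_forever(); no new connections are accepted
    # Do NOT `await server.wait_closed()` here: the client connection we keep
    # open on purpose would make it wait forever on Python 3.12+.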
2
3
79,016,972
2024-9-24
https://stackoverflow.com/questions/79016972/killable-socket-in-python
My goal is to emit an interface to listen on a socket forever ... until someone up the decision chain decides it's enough. This is my implementation, it does not work. Mixing threads, sockets, object lifetime, default params and a language I do not speak too well is confusing. I tested individually different aspects of this code and everything was as expected except the line containing the comment BUG where I attempt to force the main thread to block until the server hears the child screaming or a timeout passes but instead recv() simply doesn't see the change in alive. #!/usr/bin/env python3 import socket import threading import time MAX_MSG_BYTES=1024 TEST_PORT=42668 def recv( s: socket.socket, alive: bool=True ) -> bytes: ''' Accepts packets on a socket until terminated. ''' s.settimeout(1) # 1 second while alive: print("'alive' is still", alive) try: data = s.recv(MAX_MSG_BYTES) assert data # Empty packets were a problem. yield data except TimeoutError: pass # expected error, any other is propagated up def test_nonblocking_recv() -> None: # Create 3 sockets - sever administrative, server content and client content. # Bind the latter and forget about the former. server_s = socket.create_server(('', TEST_PORT)) server_s.listen() client_s = socket.create_connection(('localhost', TEST_PORT)) content_s = next(iter(server_s.accept())) # Accept 1 connection. # client_s.sendall('If this is commented out, the server hangs.'.encode('utf8')) alive = True def read_one_message(): data = recv(content_s, alive) print(next(iter(data))) # BUG this causes outside alive to not be seen content_th = threading.Thread(target=read_one_message) content_th.start() time.sleep(3) alive = False print("But main thread 'alive' is", alive) content_th.join() assert threading.active_count() == 1 if __name__ == '__main__': test_nonblocking_recv()
I'm scared of globals. What I am attempting to do is pass a reference to "something somewhere that can be evaluated to bool". Global variables can be problematic - but sometimes they are the correct thing to use. "bool"s are scalar values in Python - when you pass alive as a parameter to your function, it will have its own reference of it (pointing to the True value), and it will never change no matter what you do on the main thread: when you assign to the local alive there, it puts a new reference, to False in the local name - the name in the other thread remains pointing to True. (we usually don't use the terms "pointing to" in Python, I am using they because I think that would be familiar to you). Just change alive to be a global variable there and it will work. If you want to constrain the variable scope, you could group are your functions in a class, and have alive be an instance attribute. In this way, other instances of the same class could, for example, listen to other ports. Anyway, it won't help saying you are "scared" of the correct, simplest thing to do there. In Python, only the functions which write to module level (i.e. global) variables have to declare them - they are read automatically as globals if they are not set in a function: #!/usr/bin/env python3 import socket import threading import time MAX_MSG_BYTES=1024 TEST_PORT=42668 alive: bool # declaration not needed, but helps with readability def recv( s: socket.socket) -> bytes: ''' Accepts packets on a socket until terminated. ''' s.settimeout(1) # 1 second while alive: print("'alive' is still", alive) try: data = s.recv(MAX_MSG_BYTES) assert data # Empty packets were a problem. yield data except TimeoutError: pass # expected error, any other is propagated up def test_nonblocking_recv() -> None: global alive # whenever a value is assigned to "alive" here, it goes into the #top level var. # Create 3 sockets - sever administrative, server content and client content. # Bind the latter and forget about the former. server_s = socket.create_server(('', TEST_PORT)) server_s.listen() client_s = socket.create_connection(('localhost', TEST_PORT)) content_s = next(iter(server_s.accept())) # Accept 1 connection. # client_s.sendall('If this is commented out, the server hangs.'.encode('utf8')) alive = True def read_one_message(): data = recv(content_s) print(next(iter(data))) content_th = threading.Thread(target=read_one_message) content_th.start() time.sleep(3) alive = False print("But main thread 'alive' is", alive) content_th.join() assert threading.active_count() == 1 if __name__ == '__main__': test_nonblocking_recv() But yes, you can use a mutable object instead of a scalar or a global variable. Without declaring a new class, a trick to do that is to use a container object, like a list, or dict: both your controller function and the worker will have a reference to the same object. You could have a 1-element list, for example, containing [True], and chaging that element to False would be visible in the worker: ... def recv( s: socket.socket, alive: list[bool]) -> bytes: # A mutable object must never be used as default value in a function declaration - so we don~t set it. ''' Accepts packets on a socket until terminated. ''' s.settimeout(1) # 1 second while alive[0]: print("'alive' is still", alive) ... def test_nonblocking_recv() -> None: ... alive = [True] # a new list, with a single element def read_one_message(): data = recv(content_s, alive) # we pass the list itself as argument print(next(iter(data))) ... 
alive[0] = False  # we change the first element of the list. Doing `alive = [False]` would simply
                      # create a new reference here, while the worker would keep its reference to the initial list.
    print("But main thread 'alive' is", alive)
    ...
And, if you don't want to use a container, you can create a special class whose bool evaluation can be controlled - it happens that the truthiness of any object in Python can be determined by the output of a specially named method, __bool__ (if that is not present, Python will check whether it is a container with a length - Falsy if len(obj) == 0, Truthy otherwise - or a number, which is Falsy if it equals 0; otherwise, the special value None is False, and everything else is True).
TL;DR: create a small class with an internal state which can be changed to modify its truthiness:
...
class Switch:
    def __init__(self, initial_state=True):
        self.state = initial_state
    def turn_off(self):
        self.state = False
    def __bool__(self):
        return self.state

def recv( s: socket.socket, alive: Switch) -> bytes:
    ...
    s.settimeout(1) # 1 second
    while alive:
        print("'alive' is still", alive)
        ...

def test_nonblocking_recv() -> None:
    ...
    alive = Switch()

    def read_one_message():
        data = recv(content_s, alive)
        print(next(iter(data)))
    ...
    alive.turn_off()
    print("But main thread 'alive' is", alive)
    ...
Also, you could group the test_nonblocking_recv and recv functions in a class and use self.alive, as I stated earlier - or simply move recv to be nested inside test_nonblocking_recv along with the read_one_message function: the two nested functions would see alive as a "nonlocal" variable, and everything would simply work (read_one_message already makes use of alive as a nonlocal variable in your code).
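For completeness, a minimal sketch using threading.Event, the standard library's ready-made version of the Switch object above (this alternative is not from the original answer; the 1024 literal stands in for MAX_MSG_BYTES from the question):
import threading

def recv(s, alive: threading.Event):
    s.settimeout(1)
    while alive.is_set():        # plays the same role as `while alive:` above
        try:
            data = s.recv(1024)  # MAX_MSG_BYTES in the question
            assert data
            yield data
        except TimeoutError:
            pass

# inside test_nonblocking_recv():
alive = threading.Event()
alive.set()                      # start in the "alive" state
# ... start the worker thread, passing `alive` to it ...
alive.clear()                    # the worker's next loop check sees it as cleared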
1
2
79,000,759
2024-9-19
https://stackoverflow.com/questions/79000759/is-it-possible-to-define-class-methods-as-shiny-python-modules
I'm trying to build a Shiny for Python app where portions of the form code can be broken out using Shiny Modules where I can define the specific ui and server logic as a class method, but ultimately inherit some base capability. I have the following shiny app: import pandas as pd from shiny import App, reactive, ui, module, render from abc import ABC, abstractmethod class ShinyFormTemplate(ABC): @abstractmethod @module.ui def ui_func(self): pass @abstractmethod @module.server def server_func(self, input, output, session, *args, **kwargs): pass class FruitForm(ShinyFormTemplate): @module.ui def ui_func(self): return ui.row( ui.input_text("fruits",'Select a Fruit',""), ui.output_text("output_fruit"), ui.input_text("qtys",'Quantity:',""), ui.output_text("output_qty"), ) @module.server def server_func(self, input, output, session, *args, **kwargs): @render.text def output_fruits(): return input.fruits() @render.text def output_qty(): return input.qtys() class VeggieForm(ShinyFormTemplate): @module.ui def ui_func(self): return ui.row( ui.input_radio_buttons("veggie","Select a Veggie:",{'Asparagus':'Asparagus','Spinach':'Spinach','Squash':'Squash','Lettuce':'Lettuce'}), ui.output_text("output_veggie"), ui.input_text("qtys",'Quantity:',""), ui.output_text("output_qty"), ) @module.server def server_func(self, input, output, session, *args, **kwargs): @render.text def output_veggie(): return input.veggie() @render.text def output_qty(): return input.qtys() fruits = FruitForm() veggies = VeggieForm() app_ui = ui.page_fluid( ui.page_navbar( ui.nav_panel("New Fruit", "New Fruit - Input Form", fruits.ui_func('fruit') ), ui.nav_panel("New Veggie", "New Veggie - Input Form", veggies.ui_func('veggie') ), title="Sample App", id="page", ), title="Basic App" ) def server(input, output, session): fruits.server_func('fruit') veggies.server_func('veggie') app = App(app_ui, server) But when trying to run this, I get an error: Exception has occurred: ValueError `id` must be a single string File "C:\Users\this_user\OneDrive\Documents\Programming Projects\Python\example_class_module\app.py", line 66, in <module> fruits.ui_func('fruit') ValueError: `id` must be a single string I do not understand this error because I thought I provided a string based id for the module namespace ('fruit' in this case). What I've Tried: I tried to define the server and ui objects outside the class and passing them in as arguments to the init() function instead of defining them as class methods. However, working this way, I don't think I can access attributes of the class instance from within the module function using self.attribute. Is it possible to define class methods as shiny module (server and ui) components?
I found a way to do all the things desired, but I'm not sure it's the best way. I welcome suggestions if there is a better way. Instead of explicitly defining a ui or server module AS a class method, it's possible to achieve similar functionality if the ui and server modules are defined INSIDE a class method and then called at the end of the method to activate the module code. Here is the basic idea: class MyForm: def __init__(self, namespace_id): # pass in other objects specific to this form, such as a dataframe to populate the form with initially self.__namespace_id = namespace_id def call_ui(self): @module.ui def ui_func(): # ui module components go here for the module return ui_func(self.__namespace_id) def call_server(self, input, output, session): @module.server def server_func(input, output, session): # server module logic goes here server_func(self.__namespace_id) form = MyForm('fruit') # the module in this class uses the 'fruit' namespace app_ui = ui.page_fluid( form.call_ui() ) def server(input, output, session): form.call_server(input, output, session) # input, output, and session must be explicitly passed into the call_server method app = App(app_ui, server) This ultimately makes it possible to build a more complex app modularly that could, for example, have a bunch of UI forms that capture their own specific input fields and write to their own specific database table. Below is a working example built from the question that demonstrates encapsulation of a shiny module within a class, inheritance with common module functionality defined at the parent class (ShinyFormTemplate), and polymorphic behavior where the call_server() and call_ui() methods handle the form content specific to their child class (FruitForm and VeggieForm). Also, note that the VeggieForm class was defined with additional complexity in its init() method to set it apart from FruitForm (see the veggie_only_data parameter that only VeggieForm requires). 
import pandas as pd from shiny import App, reactive, ui, module, render from abc import ABC, abstractmethod class ShinyFormTemplate(ABC): """This is the abstract base class that has some commonly defined and implemented ui and server logic methods, as well as abstract methods for ui and server logic methods that will be implemented by the children (FruitForm and VeggieForm)""" _namespace_id=None def __init__(self, namespace_id, *args, **kwargs): # will be inhereted by child classes self._namespace_id=namespace_id @abstractmethod def call_ui(self): pass @abstractmethod def call_server(self,input,output,session): pass def _common_button_ui(self): """This method gets inherited by both fruit and veggie classes, providing ui for counter button""" return ui.row( ui.column(6,ui.input_action_button('count_button',"Increment Counter")), ui.column(3,ui.output_text('show_counter')), ui.column(3), ) def _common_button_server(self,input,output,session): """This method gets inherited by both fruit and veggie classes, providing server functionality for counter button""" counter = reactive.value(0) @reactive.effect @reactive.event(input.count_button) def namespace_text(): counter.set(counter.get()+1) @render.text def show_counter(): return str(counter()) class FruitForm(ShinyFormTemplate): """This is the Fruit child class providing specific UI and Server functionality for Fruits.""" def call_ui(self): """This method defines the FruitForm specific ui module AND calls it at the end returning the result.""" @module.ui def ui_func(): return ui.row( ui.input_text("fruits",'Select a Fruit',""), ui.output_text("output_fruits"), ui.input_text("qtys",'Quantity:',""), ui.output_text("output_qty"), self._common_button_ui(), # protected method inherited from ShinyFormTemplate. 
# Insert additional fruit specific ui here that will operate in the 'fruit' namespace ) return ui.nav_panel("New Fruit", "New Fruit - Input Form", ui_func(self._namespace_id) # the call to establish the fruit ui has to be returned at the end of this class, so that it gets inserted into the app_ui object that is defined globally ) def call_server(self,input,output,session): """This method defines the ui module AND calls it at the end.""" @module.server def server_func(input, output, session): @render.text def output_fruits(): return input.fruits() self.__server_fruit_addl_stuff(input, output, session) # private method for FruitForm class only # Insert additional Fruit specific server logic here that will operate in the 'fruit' namespace self._common_button_server(input,output,session) # protected method inherited from ShinyFormTemplate server_func(self._namespace_id) def __server_fruit_addl_stuff(self, input, output, session): """Here is some additional server functionality that exists only in the FruitForm class and can be called by the call_server() method""" @render.text def output_qty(): return input.qtys() class VeggieForm(ShinyFormTemplate): """This is the Veggie child class providing specific UI and Server functionality for Veggies.""" def __init__(self, namespace_id, veggie_only_data, *args, **kwargs): # will be inhereted by child classes self._namespace_id=namespace_id self.__veggie_only_data=veggie_only_data def call_ui(self): """This method defines the VeggieForm specific ui module AND calls it at the end returning the result.""" @module.ui def ui_func(): return ui.row( ui.row(self.__veggie_only_data).add_style("font-weight: bold;"), ui.input_radio_buttons("veggie","Select a Veggie:",{'Asparagus':'Asparagus','Spinach':'Spinach','Squash':'Squash','Lettuce':'Lettuce'}), ui.output_text("output_veggie"), ui.input_text("qtys",'Quantity:',""), ui.output_text("output_qty"), # Insert additional Veggie specific ui here that will operate in the 'veggie' namespace self._common_button_ui(), ) return ui.nav_panel("New Veggie", "New Veggie - Input Form", ui_func(self._namespace_id) ) def call_server(self,input,output,session): @module.server def server_func(input, output, session): @render.text def output_veggie(): return input.veggie() @render.text def output_qty(): return input.qtys() # Insert additional Veggie specific server logic here that will operate in the 'veggie' namespace self._common_button_server(input, output, session) server_func(self._namespace_id) #Define fruit and veggie class object instances. This allows us to also pass in other objects like specific dataframes and database models that we can use to write populated form data to PostgreSQL database fruits = FruitForm(namespace_id='fruit') # all ui/server components will operate in the 'fruit' namespace veggies = VeggieForm(namespace_id='veggie',veggie_only_data='This class has a veggie specific data model') # all ui/server components will operate in the 'fruit' namespace food_forms = [fruits, veggies] # main ui object for the app app_ui = ui.page_fluid( ui.page_navbar( [form.call_ui() for form in food_forms], # iterate through any number of forms - creates and runs a ui module for each title="Sample App", id="page", ), title="Basic App" ) # main server function for the app def server(input, output, session): for form in food_forms: form.call_server(input, output, session) # iterate through any number of forms - creates and runs a server module for each app = App(app_ui, server)
3
1
79,015,728
2024-9-23
https://stackoverflow.com/questions/79015728/why-am-i-getting-runtimeerror-trying-to-backward-through-the-graph-a-second-ti
My code: import torch import random image_width, image_height = 128, 128 def apply_ellipse_mask(img, pos, axes): r = torch.arange(image_height)[:, None] c = torch.arange(image_width)[None, :] val_array = ((c - pos[0]) ** 2) / axes[0] ** 2 + ((r - pos[1]) ** 2) / axes[1] ** 2 mask = torch.where((0.9 < val_array) & (val_array < 1), torch.tensor(1.0), torch.tensor(0.0)) return img * (1.0 - mask) + mask random.seed(0xced) sphere_radius = image_height / 3 sphere_position = torch.tensor([image_width / 2, image_height / 2 ,0], requires_grad=True) ref_image = apply_ellipse_mask(torch.zeros(image_width, image_height, requires_grad=True), sphere_position, [sphere_radius, sphere_radius, sphere_radius]) ellipsoid_pos = torch.tensor([sphere_position[0], sphere_position[1], 0], requires_grad=True) ellipsoid_axes = torch.tensor([image_width / 3 + (random.random() - 0.5) * image_width / 5, image_height / 3 + (random.random() - 0.5) * image_height / 5, image_height / 2], requires_grad=True) optimizer = torch.optim.Adam([ellipsoid_axes], lr=0.1) criterion = torch.nn.MSELoss() for _ in range(100): optimizer.zero_grad() current_image = torch.zeros(image_width, image_height, requires_grad=True) current_image = apply_ellipse_mask(current_image, ellipsoid_pos, ellipsoid_axes) loss = criterion(current_image, ref_image) loss.backward() print(_, loss) optimizer.step() Error: RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward. Why would it be trying to backward through the same graph a second time? Am I directly accessing saved tensors after they were freed?
You have created a lot of leaf nodes (gradient-requiring variables), including: ref_image = apply_ellipse_mask(torch.zeros(image_width, image_height, requires_grad=True), sphere_position, [sphere_radius, sphere_radius, sphere_radius]) which creates a leaf node (with torch.zeros(image_width, image_height, requires_grad=True)) and applies some computations, so you get a computation graph. But then you reuse the result every iteration. You do not recompute it every iteration, so you are trying to go backward through the same graph several times. The only things that should have requires_grad = True are the parameters you optimize on. You're having a differentiability problem. You're trying to flow gradient to ellipsoid_axes through the computation of the mask, but the computation of the mask is not differentiable in axes (it returns 0-1 anyway). You should make the mask "soft" using some kind of sigmoid instead. On your apply_ellipse_mask function: this is more of a comment, as this code will still cause the same error. Avoid for-loops like this with PyTorch as they are slow. Instead you could write: def apply_ellipse_mask(img, pos, axes): r = torch.arange(image_height)[:, None] c = torch.arange(image_width)[None, :] val_array = ((c - pos[0])**2) / axes[0]**2 + ((r - pos[1])**2) / axes[1]**2 mask = torch.where((0.9 < val_array) & (val_array < 1), torch.tensor(1.0), torch.tensor(0.0)) return img * (1.0 - mask) + mask
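As a hedged illustration of the "soft" mask suggestion, here is a sketch that replaces the hard threshold with sigmoids; the steepness value of 50 is an arbitrary assumption, not something taken from the answer.

import torch

def apply_soft_ellipse_mask(img, pos, axes, steepness=50.0):
    r = torch.arange(img.shape[0])[:, None]
    c = torch.arange(img.shape[1])[None, :]
    val = ((c - pos[0]) ** 2) / axes[0] ** 2 + ((r - pos[1]) ** 2) / axes[1] ** 2
    # the product of two sigmoids approximates the 0.9 < val < 1 band while staying differentiable in axes
    mask = torch.sigmoid(steepness * (val - 0.9)) * torch.sigmoid(steepness * (1.0 - val))
    return img * (1.0 - mask) + mask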
4
2
79,015,652
2024-9-23
https://stackoverflow.com/questions/79015652/implementing-eulers-form-of-trigonometric-interpolation
at the moment I am struggling to correctly implement Euler's from of trigonometric interpolation. To guide you through the work I have already done and information I have gathered, I will pair code with the mathematical definitions. The code will be written in python and will make use of numpy functions. Please note, that I won't use the Fast Fourier Transformation. Firstly, all of the experiments will be performed on the following dataset (x,y): [('0.0', '0.0'), ('0.6283185307179586', '0.6427876096865393'), ('1.2566370614359172', '0.984807753012208'), ('1.8849555921538759', '0.8660254037844387'), ('2.5132741228718345', '0.3420201433256689'), ('3.141592653589793', '-0.34202014332566866'), ('3.7699111843077517', '-0.8660254037844384'), ('4.39822971502571', '-0.9848077530122081'), ('5.026548245743669', '-0.6427876096865396'), ('5.654866776461628', '-2.4492935982947064e-16')] we define the Discrete Fourier Transform So my implementation for this function is as follows import numpy as np #consider numpy as imported from here on def F_n(Y): n = len(Y) Y_hat = [] for k in range(len(Y)): transformed_k = 1/n * sum([y_l * np.exp(-2 * np.pi * 1j* k * l/n) for l, y_l in enumerate(Y) ]) Y_hat.append(transformed_k) return Y_hat So comparing the resulting coefficients with the ones using np.fft.fft it looks like the coefficients are almost correct. They only differ by being shifted by a digit, i.e. # F_n(y) ['(-1.33907057366955e-17+0j)', '(0.14283712054380923-0.439607454395199j)', '(-0.048591754799448425+0.06688081278992913j)', '(-0.039133572999081954+0.028432205056635337j)', '(-0.036913281031968816+0.01199385205986717j)', '(-0.036397023426620205-2.0058074207055733e-17j)', '(-0.03691328103196878-0.011993852059867215j)', '(-0.03913357299908168-0.028432205056635646j)', '(-0.04859175479944824-0.06688081278992904j)', '(0.1428371205438091+0.439607454395199j)'] # np.fft.fft(y) ['(-1.1102230246251565e-16+0j)', '(1.428371205438092-4.39607454395199j)', '(-0.4859175479944836+0.6688081278992911j)', '(-0.3913357299908192+0.2843220505663533j)', '(-0.36913281031968803+0.11993852059867194j)', '(-0.36397023426620184-1.1102230246251565e-16j)', '(-0.36913281031968803-0.11993852059867194j)', '(-0.3913357299908196-0.2843220505663534j)', '(-0.4859175479944836-0.6688081278992911j)', '(1.4283712054380922+4.39607454395199j)'] Now I want to implement the trigonometric interpolation. Here is the definition I use this implementation for the theorem def trig_interpolation(Y_hat, x_range, depth=1000): n = len(Y_hat) get_summand = lambda c_j,l,x: c_j*np.exp(2 * np.pi * 1j * l*x) y_intp = [] x_intp = list((i/depth)*x_range for i in range(depth)) if n%2==0: K = n//2 for x in x_intp: y_intp.append(sum([get_summand(c_j,l,x) for l,c_j in zip(range(-K+1,K+1), Y_hat)])) else: K = n//2+1 for x in x_intp: y_intp.append(sum([get_summand(c_j,l,x) for l,c_j in zip(range(-K,K+1), Y_hat)])) return x_intp, y_intp x_range = max(x)-min(x) x_intp, y_intp = trig_interpolation(np.fft.fft(y), x_range) where x_range denotes the range of the x values in the original set of data points, depth denotes the resolution of the interpolation and in line 4 the get_summand function represents the term $$\exp(2\pi j x)$$ to make the summing process a little easier to read. When running my code on the coefficients given by numpy's fft, I get plt.plot(x_intp,np.real(y_intp)) plt.plot(x,y, 'o') Though, the points are aligned with the interpolation using numpy's fft, I expect the curve to look differently, behaving closer to an actual sine curve. 
Using my implementation to calculate the Fourier coefficients gives me a curve which is, as expected, wrong. I am asking you to point out my mistakes, so that I can implement the trigonometric interpolation correctly according to the mathematical characterization I presented.
There are many separate problems with what you have done. The most important is that the discrete Fourier transform does NOT produce the order of frequencies that you require. The order is roughly (see https://numpy.org/doc/stable/reference/routines.fft.html ): freq[0] .... positive frequencies ... negative frequencies (mapped from very positive ones) So you have to shift your frequencies. There is a routine numpy.fft.fftshift but it doesn't quite shift in line with the numbering that you want, so I have used numpy.roll here instead to rotate the array. You need to divide your FFT by n to get the coefficients in the form that you require. Your x_range is wrong - it is based on an implied x_max, not the last element of the array. K is wrong in one case. You want x/x_range in your sum if x is not going between 0 and 1. import numpy as np import matplotlib.pyplot as plt n = 10 x = np.linspace( 0.0, 2 * np.pi, n, endpoint=False ) y = np.sin( x ) def trig_interpolation(Y_hat, x_range, depth=1000): n = len(Y_hat) get_summand = lambda c_j, l, x: c_j * np.exp(2 * np.pi * 1j * l * x / x_range ) x_intp = [(i / depth) * x_range for i in range(depth)] y_intp = [] K = n // 2 # Fix K for the odd/even cases if n%2==0: Y_hat = np.roll( Y_hat, K - 1 ) # Rotate for x in x_intp: y_intp.append(sum([get_summand(c_j,l,x) for l,c_j in zip(range(-K+1,K+1), Y_hat)])) else: Y_hat = np.roll( Y_hat, K ) # Rotate for x in x_intp: y_intp.append(sum([get_summand(c_j,l,x) for l,c_j in zip(range(-K,K+1), Y_hat)])) return x_intp, y_intp x_range = n * ( x[1] - x[0] ) # Fix the range: it ISN'T defined by last element of x x_intp, y_intp = trig_interpolation( np.fft.fft(y) / n, x_range) # Divide the FFT by n plt.plot(x_intp,np.real(y_intp)) plt.plot(x,y, 'o') plt.show()
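A small standalone check of the frequency-ordering point above, using only numpy; the printed values are for n = 10 as in the question.

import numpy as np

n = 10
print(np.fft.fftfreq(n, d=1/n))
# [ 0.  1.  2.  3.  4. -5. -4. -3. -2. -1.]  -> DC first, then positive, then negative frequencies
K = n // 2
print(np.roll(np.arange(n), K - 1))
# [6 7 8 9 0 1 2 3 4 5]  -> after the roll these FFT bins carry frequencies -4 ... 5, i.e. l = -K+1 ... K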
1
2
79,015,532
2024-9-23
https://stackoverflow.com/questions/79015532/data-transformation-on-pandas-dataframe-to-connect-related-rows-based-on-shared
I have a table of company data that links subsidiary to parent companies as shown in the left hand side table of the screenshot. I need to transform the data into the table on the right hand side of the screenshot. This requires tracing through the two columns of the table and making the link between individual rows. So far the only thing I have tried is to join the table against itself recursively.. but I am thinking that some kind of tree structure here would make more sense? I.e. creating a branch of all connected companies with the 'ultimate parent company' as the trunk? These concepts are new to me so appreciate any input Reproducible input: df = pd.DataFrame({'Subsidiary Company': ['Company B', 'Company C', 'Company D', 'Company 2', 'Company 3'], 'Parent Company': ['Company A', 'Company B', 'Company C', 'Company 1', 'Company 2']})
You can use networkx to form a directed graph, then loop over the paths with all_simple_paths: import numpy as np import networkx as nx # create the directed graph G = nx.from_pandas_edgelist(df, source='Subsidiary Company', target='Parent Company', create_using=nx.DiGraph) # find roots (final level companies) roots = {v for v, d in G.in_degree() if d == 0} # {'Company 3', 'Company D'} # find leaves (ultimate parents) leaves = {v for v, d in G.out_degree() if d == 0} # {'Company 1', 'Company A'} # function to rename the columns # 0 is the subsidiary company # inf is the ultimate parent # other numbers are intermediates def renamer(x): if x == 0: return 'Subsidiary Company' if np.isfinite(x): return f'Intermediate Parent Company {int(x)}' return 'Ultimate Parent Company' # find the connected nodes # the roots/leaves for each group # iterate over the paths between each root/node combination # create the sliding paths (with range) # convert to DataFrame out = ( pd.DataFrame( [ dict(enumerate(p[i-1:-1])) | {float('inf'): p[-1]} for c in nx.weakly_connected_components(G) for r in c & roots for l in c & leaves for p in nx.all_simple_paths(G, r, l) for i in range(len(p)-1, 0, -1) ] ) .sort_index(axis=1) .rename(columns=renamer) ) Output: Subsidiary Company Intermediate Parent Company 1 Intermediate Parent Company 2 Ultimate Parent Company 0 Company B NaN NaN Company A 1 Company C Company B NaN Company A 2 Company D Company C Company B Company A 3 Company 2 NaN NaN Company 1 4 Company 3 Company 2 NaN Company 1
1
1
79,015,047
2024-9-23
https://stackoverflow.com/questions/79015047/ollama-multimodal-gemma-not-seeing-image
This sample multimodal/main.py appears to show Ollama I am trying to do the same with an image loaded from my machine. I am using the gemma2:27b model. The model is working with chat so that is not the issue. my Code import os.path import PIL.Image from dotenv import load_dotenv from ollama import generate load_dotenv() CHAT_MODEL_NAME = os.getenv("MODEL_NAME_LATEST") image_path = os.path.join("data", "image_one.jpg") test_image = PIL.Image.open(image_path) # test 1: for response in generate(CHAT_MODEL_NAME, 'What do you see', images=[test_image], stream=True): print(response['response'], end='', flush=True) # response: ollama._types.RequestError: image must be bytes, path-like object, or file-like object # test 2: bytes for response in generate(CHAT_MODEL_NAME, 'What do you see', images=[test_image.tobytes()], stream=True): print(response['response'], end='', flush=True) # response: Please provide me with the image! # test 3: Path for response in generate(CHAT_MODEL_NAME, 'What do you see', images=[image_path], stream=True): print(response['response'], end='', flush=True) # response: Please provide me with the image! How do i properly load an image to Gemma Cross posted on issue forum 289
The Gemma2 models are not multimodal. They accept only text as input. If you want to process images, you need to use PaliGemma which is not supported by Ollama yet (you can follow this issue about it). You may find some PaliGemma examples at the Gemma cookbook github repo.
1
2
79,015,439
2024-9-23
https://stackoverflow.com/questions/79015439/how-do-i-fill-null-on-a-struct-column
I am trying to compare two dataframes via dfcompare = (df0 == df1) and nulls are never considered identical (unlike join there is no option to allow nulls to match). My approach with other fields is to fill them in with an "empty value" appropriate to their datatype. What should I use for structs? import polars as pl df = pl.DataFrame( { "int": [1, 2, None], "data" : [dict(a=1,b="b"),dict(a=11,b="bb"),None] } ) df.describe() print(df) df2 = df.with_columns(pl.col("int").fill_null(0)) df2.describe() print(df2) # these error out:... try: df3 = df2.with_columns(pl.col("data").fill_null(dict(a=0,b=""))) except (Exception,) as e: print("try#1", e) try: df3 = df2.with_columns(pl.col("data").fill_null(pl.struct(dict(a=0,b="")))) except (Exception,) as e: print("try#2", e) Output: shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ int ┆ data β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ struct[2] β”‚ β•žβ•β•β•β•β•β•β•ͺ═════════════║ β”‚ 1 ┆ {1,"b"} β”‚ β”‚ 2 ┆ {11,"bb"} β”‚ β”‚ null ┆ {null,null} β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ int ┆ data β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ struct[2] β”‚ β•žβ•β•β•β•β•β•ͺ═════════════║ β”‚ 1 ┆ {1,"b"} β”‚ β”‚ 2 ┆ {11,"bb"} β”‚ β”‚ 0 ┆ {null,null} β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ try#1 invalid literal value: "{'a': 0, 'b': ''}" try#2 a Error originated just after this operation: DF ["int", "data"]; PROJECT */2 COLUMNS; SELECTION: "None" My, satisfactory, workaround has been to unnest the columns instead. This works fine (even better as it allow subfield-by-subfield fills). Still, I remain curious about how to achieve a suitable "struct literal" that can be passed into these types of functions. One can also imagine wanting to add a hardcoded column as in df4 = df.with_columns(pl.lit("0").alias("zerocol"))
A struct literal to use in the context of pl.Expr.fill_null can be created with pl.struct as follows. df.with_columns( pl.col("data").fill_null( pl.struct(a=pl.lit(1), b=pl.lit("MISSING")) ) ) shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ int ┆ data β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ struct[2] β”‚ β•žβ•β•β•β•β•β•β•ͺ═══════════════║ β”‚ 1 ┆ {1,"b"} β”‚ β”‚ 2 ┆ {11,"bb"} β”‚ β”‚ null ┆ {1,"MISSING"} β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
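As a usage sketch for the comparison goal stated in the question, the same struct literal can be applied to both frames before the equality check; df0, df1 and the per-field defaults here are assumptions based on the question, not part of the accepted answer.

import polars as pl

empty = pl.struct(a=pl.lit(0), b=pl.lit(""))  # assumed "empty value" for each struct field
df0_cmp = df0.with_columns(pl.col("data").fill_null(empty))
df1_cmp = df1.with_columns(pl.col("data").fill_null(empty))
dfcompare = df0_cmp == df1_cmp  # nulls in "data" now compare as identical on both sides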
2
2
79,014,724
2024-9-23
https://stackoverflow.com/questions/79014724/chang-pandas-dataframe-from-long-to-wide
Have a dataframe in following format data = {'regions':["USA", "USA", "USA", "FRANCE", "FRANCE","FRANCE"], 'dates':['2024-08-03', '2024-08-10', '2024-08-17','2024-08-03', '2024-08-10', '2024-08-17'], 'values': [3, 4, 5, 7, 8,0], } df = pd.DataFrame(data) regions dates values 0 USA 2024-08-03 3 1 USA 2024-08-10 4 2 USA 2024-08-17 5 3 FRANCE 2024-08-03 7 4 FRANCE 2024-08-10 8 5 FRANCE 2024-08-17 0 Need to change this df from long to wide format. Use the most recent dates as current date, and the other two dates will be lagged dates. Expected output is like regions dates values_lag2 values_lag1 values USA 2024-08-17 3 4 5 FRANCE 2024-08-17 7 8 0 Currently I used a for loop to manually to change the format. Just wondering if there is a more elegant way to realize it. Thanks
If the same dates exist for each region, it is possible to convert the column to datetimes, pivot, rename the columns, and add a column with the maximal dates: df['dates'] = pd.to_datetime(df['dates']) out = df.pivot(index='regions', columns='dates', values='values') out.columns = [f'values_lag{i-1}' if i!=1 else 'values' for i in range(len(out.columns), 0, -1)] out = df.groupby('regions')['dates'].max().to_frame().join(out).reset_index() print (out) regions dates values_lag2 values_lag1 values 0 FRANCE 2024-08-17 7 8 0 1 USA 2024-08-17 3 4 5 Another idea, if different datetimes are possible and only the ordering is needed: add sorting, a counter by groupby.cumcount, and pivot with the helper column g: df['dates'] = pd.to_datetime(df['dates']) df = df.sort_values(['regions', 'dates']) df['g'] = df.groupby('regions').cumcount(ascending=False) out = (df.pivot(index='regions', columns='g', values='values') .sort_index(ascending=False, axis=1)) out.columns=[f'values_lag{i}' if i!=0 else 'values' for i in out.columns] out = df.groupby('regions')['dates'].max().to_frame().join(out).reset_index() print (out) regions dates values_lag2 values_lag1 values 0 FRANCE 2024-08-17 7 8 0 1 USA 2024-08-17 3 4 5
1
1
79,008,061
2024-9-20
https://stackoverflow.com/questions/79008061/proper-way-to-process-larger-than-memory-datasets-in-polars
I have begun to learn and implement Polars because of (1) the potential speed improvements and (2) for the promise of being able to process larger-than-memory datasets. However, I'm struggling to see how the second promise is actually delivered in specific scenarios that my use case requires. One specific example I'm struggling with is how to read a multi-GB JSONL file from S3, apply a few transformations, and send the modified records to STDOUT. Gaps in the lazy "sink" methods... As I just raised in GitHub, the sink_*() methods do not support writing to a buffer or file-like - only to a named file path. Otherwise, it seems the simple solution would be something like sink_ndjson(sys.stdout, ...) No clear way to "batch" a DataFrame or LazyFrame into smaller data frames. The next thing I tried was to get smaller batches or dataframes (for instance 100K rows at a time) which I could process in memory and write with write_ndjson(sys.stdout, ...) one at a time until I reach the end of the stream. The closest I could find is LazyFrame.slice(offset, batch_size).collect() - except in practice, this seems to hang/crash on the first invocation rather than reading just the first n records and then proceeding. Even when I set a low number of records in the LazyFrame's schema scan limit. Perhaps this is a bug - but even still, the slice() method does not seem specifically designed for getting incremental batches from the lazy frame. Any help is much appreciated!
As was mentioned in comments, the streaming engine is undergoing a significant revamp to address the shortcomings of the current implementation. The details of that revamp, as far as I'm aware, haven't been documented, so I can't say that this exact use case will be addressed in that revamp. It's not clear to me what the benefit is of saving data to stdout (in memory) if your overall data is too big for memory, so that enhancement request may be too niche for it to be picked up in general. It seems a more in-mission ask would be for a read_ndjson_batched function similar to the read_csv_batched function. In the interim, you can implement this in python using s3's fsspec handler to read the file in lines and form your own batches. import polars as pl import fsspec # or the s3 fsspec implementation lib s3fs = fsspec.filesystem("s3") # or the aforementioned s3 class batch_size = 100_000 with s3fs.open(path, 'rb') as ff: while True: batch = [] dbl_break=False for _ in range(batch_size): line = ff.readline() if not line: dbl_break=True break batch.append(line) if len(batch)==0: break df_batch = pl.read_ndjson(b"\n".join(batch)) do_batch_process(df_batch) if dbl_break: break
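For reference, the existing read_csv_batched pattern that the hypothetical read_ndjson_batched is being compared to looks roughly like this; the file name and batch sizes are placeholders.

import polars as pl

reader = pl.read_csv_batched("data.csv", batch_size=100_000)
batches = reader.next_batches(5)
while batches:
    for df_batch in batches:
        do_batch_process(df_batch)  # same per-batch hook as in the sketch above
    batches = reader.next_batches(5)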
4
3
79,013,954
2024-9-23
https://stackoverflow.com/questions/79013954/how-to-add-up-value-in-column-v-by-multiplying-previous-cell-with-a-fixed-facto
Starting with a "pd.DataFrame" df :~ n v 0 1 0.0 1 2 0.0 2 3 0.0 3 4 0.0 4 5 0.0 5 6 0.0 I'd like to add up value in column "v", whereby a cell in column "v" is produced by multiplying previous cell of "v" with a fixed factor, then add current cell value of column "n". (See sample calculation table below) ## sample calculation table :~ n v[i] n + v[i-1] * fixed factor 1 1.0 = 1 + 0.0 * 0.5 2 2.5 = 2 + 1.0 * 0.5 3 4.25 = 3 + 2.5 * 0.5 4 6.125 = 4 + 4.25 * 0.5 5 8.0625 = 5 + 6.125 * 0.5 6 10.03125 = 6 + 8.063 * 0.5 Managed to do it with row-by-row iteration (see for-loop below). However I think vectorised methods (like cumsum and shift) may be more efficient, but could not figure out how; because cumsum is complicated by multiplication, starting with an empty column v, and need to reference to previous cell of a column. Wonder how to do this with vectorised methods ? To reproduce my code : df = pd.DataFrame({'n':[1,2,3,4,5,6]}) df['v'] = 0.0 def fnv(nu, vu): return nu + vu * 0.5 for i in range(0, df.shape[0]): df.v.at[i] = fnv(df.n.at[i], df.v.at[i-1] if i>0 else 0.0) df (RESULTS) :~ n v 0 1 1.00 1 2 2.50 2 3 4.25 3 4 6.12 4 5 8.06 5 6 10.03
Since n is variable, you can't easily vectorize this (you could using a matrix operation, see below, but this would take O(n^2) space). A good tradeoff might be to use numba to speed the operation: from numba import jit @jit(nopython=True) def fnv(n, factor=0.5): out = [] prev = 0 for x in n: out.append(x + prev*factor) prev = out[-1] return out df['v'] = fnv(df['n'].to_numpy()) Output: n v 0 1 1.00000 1 2 2.50000 2 3 4.25000 3 4 6.12500 4 5 8.06250 5 6 10.03125 vectorized approach You could vectorize using a square matrix: x = np.arange(len(df)) tmp = x[:, None]-x df['v'] = np.nansum(np.where(tmp>=0, 0.5**tmp, np.nan) * df['n'].to_numpy(), axis=1) Intermediates: # tmp [[ 0 -1 -2 -3 -4 -5] [ 1 0 -1 -2 -3 -4] [ 2 1 0 -1 -2 -3] [ 3 2 1 0 -1 -2] [ 4 3 2 1 0 -1] [ 5 4 3 2 1 0]] # tmp >= 0 [[ True False False False False False] [ True True False False False False] [ True True True False False False] [ True True True True False False] [ True True True True True False] [ True True True True True True]] # np.where(tmp>=0, 0.5**tmp, np.nan) * df['n'].to_numpy() [[1. nan nan nan nan nan] [0.5 2. nan nan nan nan] [0.25 1. 3. nan nan nan] [0.125 0.5 1.5 4. nan nan] [0.0625 0.25 0.75 2. 5. nan] [0.03125 0.125 0.375 1. 2.5 6. ]]
3
2
79,014,004
2024-9-23
https://stackoverflow.com/questions/79014004/how-to-fill-blank-cells-created-by-join-but-keep-original-null-in-pandas
I have two dataframe, one is daily and one is quarterly idx = pd.date_range("2023-03-31", periods=100, freq="D") idx_q = idx.to_series().resample("QE").last() df1 = pd.DataFrame({"A": [1, "a", None], "B": [4, None, 6]}, index=idx_q) np.random.seed(42) df2 = pd.DataFrame({"C": np.random.randn(100), "D": np.random.randn(100)}, index=idx) # resample df2 to workdays df2 = df2.resample("B").asfreq() # mask values larger than 0.9 in df2 with NaN df2 = df2.mask(df2 > 0.9) df = df2.join(df1) I want to join them and ffill quarter data to daily. The problem is my data have None from source and should be kept in result. What's the right way to make this ffill?
IIUC, you can build a mask from df1 with isna and reindex: df = df1.join(df2) df = df.ffill().mask(df1.isna().reindex(columns=df.columns, fill_value=False)) Another approach could be to use a placeholder instead of NaNs in df1: df = df1.fillna('PLACEHOLDER').join(df2).ffill().replace('PLACEHOLDER', np.nan) Output: A B C D 2023-03-31 1 4.0 0.496714 -1.415371 2023-06-30 a NaN 0.496714 0.856399 2023-07-08 NaN 6.0 -0.234587 -1.142970
2
2
79,013,736
2024-9-23
https://stackoverflow.com/questions/79013736/fill-numpy-array-to-the-right-based-on-previous-column
I have the following states and transition matrix import numpy as np n_states = 3 states = np.arange(n_states) T = np.array([ [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0, 1] ]) I would like to simulate n_sims paths where each path consist of n_steps. Each path starts at 0. Therefore, I write n_sims = 100 n_steps = 10 paths = np.zeros((n_sims, n_steps), dtype=int) With the help of np.random.Generator.choice I would like to "fill to the right" the paths using the transition matrix. My attempt look as follows rng = np.random.default_rng(seed=123) for s in range(1, n_steps+1): paths[:,s] = rng.choice( a=n_states, size=n_sim, p=T[paths[:,s-1]] ) This result in the following error: ValueError: p must be 1-dimensional How can I overcome this? If possible, I would like to prevent for-loops and vectorize the code.
IIUC, your process being inherently iterative, you won't benefit much from numpy's vectorization. You might want to consider using pure python: def simulation(T, n_states=3, n_sims=100, n_steps=10, seed=123): rng = np.random.default_rng(seed) start = np.zeros(n_steps, dtype=np.int64) out = [start] for i in range(n_sims-1): a = np.array([rng.choice(n_states, p=T[x]) for x in start]) out.append(a) start = a return np.array(out) simulation(T, n_states=3, n_sims=100, n_steps=10) Example output: array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 1, 1, 0, 1, 1], [2, 0, 1, 0, 1, 2, 2, 0, 2, 2], [2, 0, 0, 1, 0, 2, 2, 1, 2, 2], [2, 1, 0, 2, 1, 2, 2, 2, 2, 2], [2, 2, 0, 2, 0, 2, 2, 2, 2, 2], [2, 2, 0, 2, 1, 2, 2, 2, 2, 2], [2, 2, 1, 2, 0, 2, 2, 2, 2, 2], [2, 2, 0, 2, 0, 2, 2, 2, 2, 2], [2, 2, 0, 2, 0, 2, 2, 2, 2, 2], [2, 2, 1, 2, 0, 2, 2, 2, 2, 2], [2, 2, 2, 2, 0, 2, 2, 2, 2, 2], [2, 2, 2, 2, 0, 2, 2, 2, 2, 2], [2, 2, 2, 2, 0, 2, 2, 2, 2, 2], [2, 2, 2, 2, 1, 2, 2, 2, 2, 2], [2, 2, 2, 2, 0, 2, 2, 2, 2, 2], [2, 2, 2, 2, 1, 2, 2, 2, 2, 2], [2, 2, 2, 2, 0, 2, 2, 2, 2, 2], ... [2, 2, 2, 2, 2, 2, 2, 2, 2, 2], [2, 2, 2, 2, 2, 2, 2, 2, 2, 2]])
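If the (n_sims, n_steps) layout from the question is wanted, with vectorization across the simulations and a loop only over the steps, inverse-CDF sampling is one way to do it; this is a hedged sketch reusing the question's names and is not part of the accepted answer.

import numpy as np

n_states, n_sims, n_steps = 3, 100, 10
T = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0]])
rng = np.random.default_rng(seed=123)

paths = np.zeros((n_sims, n_steps), dtype=int)
for s in range(1, n_steps):
    cdf = np.cumsum(T[paths[:, s - 1]], axis=1)  # one cumulative row of T per simulation
    u = rng.random(n_sims)[:, None]
    paths[:, s] = (u < cdf).argmax(axis=1)       # first state whose cumulative probability exceeds u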
2
1
79,013,239
2024-9-23
https://stackoverflow.com/questions/79013239/preventing-date-conversion-in-excel-with-xlwings
the problem is as the title states. I have a column AX filled with values. The name of the column is "Remarks" and it will contain remarks but some of those remarks are dates and some are full blown notes like "Person A owes Person B X amount." The problem I'm currently facing now is that in xlwings the columns that are just dates like "1/8/24" are converted to the date data type. I do not want this conversion to happen. I want it to remain as "1/8/24" literally and remain as the data type of "Text". The full workflow is as follows: Read data from excel (I have no write access) Create a new excel workbook Put processed data into new excel workbook So I tried to fix it in two places After I read the data I converted the AX columns' values all to string with str(cell.value) among other options, none of which worked. Before the new excel workbook is saved. Nothing in option 1 worked and I figured that it had something to with how Excel is handling dates. So, I'm now trying to prevent the conversion and just have "1/8/24" appear literally but nothing is working. I checked the documentation and I tried Range.options to prevent the conversion but it doesn't help much. As when I inspected the cell with "1/8/24" it showed up as a datetime.datetime object. Converting that with str just turns it back into a date anyways. So, I figured that I have to find a way to do the converting after it was written into the workbook. I messed around with data types in Excel and I found out that if I used this Clicked next on everything Then selected "Text" in the final screen the dates appeared. So, that leads me to try a new option which is to convert the data type of the entire column to just "Text". So I tried out stuff like this sheet.range("AX1").expand("down").api.NumberFormat = "@". But the workbook that was generated still doesn't show "1/8/24" literally. Instead it shows some number like 45299. Surprisingly when I converted that cell into "Long Date" it gets turned into a date "1st August 2024". This is where I stopped working as I ran out of ideas and have no idea how to continue. Any guidance is very much appreciated, thank you.
Dates, or date-looking strings, are always a pain. I think the simplest way to deal with them in your case is to insert a ' as the first character in every cell to force Excel to treat it like a string, something like .range(f'A{i}').value = f"'{value}" (not tested, assuming that the column of data in question is in column A); a rough sketch of that loop is shown below.
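A rough sketch of that idea, assuming the remarks were already read into a Python list and are written to column A of the output sheet; the workbook and sheet names are placeholders.

import xlwings as xw

wb = xw.Book("output.xlsx")   # placeholder workbook
sht = wb.sheets["Sheet1"]     # placeholder sheet

remarks = ["1/8/24", "Person A owes Person B X amount."]  # values read from the source file
for i, value in enumerate(remarks, start=1):
    # the leading apostrophe forces Excel to store the cell as text, so "1/8/24" stays literal
    sht.range(f"A{i}").value = f"'{value}"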
2
0
79,013,441
2024-9-23
https://stackoverflow.com/questions/79013441/pandas-period-for-a-custom-time-period
I want to create pandas.Period for a custom time period, for example for a duration starting_time = pd.Timestamp('2024-01-01 09:15:00') and ending_time = pd.Timestamp('2024-01-05 08:17:00'). One way to achieving this is by first getting the pandas.Timedelta and then create pandas.Period. import pandas as pd # Define start and end times starting_time = pd.Timestamp('2024-01-01 09:15:00') ending_time = pd.Timestamp('2024-01-05 08:17:00') # Calculate the duration (period) between the two timestamps period_duration = ending_time - starting_time period_duration_in_minutes = (period_duration.total_seconds()) //60 freq_str = f"{period_duration_in_minutes}min" period = pd.Period(starting_time, freq = freq_str) print(period.start_time) print(period.end_time) But I need a straightforward approach, something like this (I know this won’t work)- period = pd.Period(start_time = starting_time, end_time=ending_time)
You don't need to compute the duration in minutes, just pass the subtraction: pd.Period(starting_time, freq=ending_time-starting_time) Which is almost your ideal straightforward approach. Output: Period('2024-01-01 09:15', '5702min') Note that you could also use a function to have the desired parameters: def cust_period(start, end): return pd.Period(value=start, freq=end-start) cust_period(starting_time, ending_time) # Period('2024-01-01 09:15', '5702min')
3
2
79,012,126
2024-9-22
https://stackoverflow.com/questions/79012126/error-when-trying-add-filter-to-gmail-using-python-script
I am trying to create a simple Python app that would be able to delete and add filters to Gmail. Using different SCOPES I can easily list labels, filters, and so on but when trying to add a new filter I am getting an error. The code below is a simplification of my actual code (that is broken into set of functions) but basically, it does exactly the same as my full code. import os.path from google.auth.transport.requests import Request from google.oauth2.credentials import Credentials from google_auth_oauthlib.flow import InstalledAppFlow from googleapiclient.discovery import build def main(): scope = ['https://www.googleapis.com/auth/gmail.settings.basic'] credentials_file = 'credentials.json' token_file = f'token_test.json' credentials = None if os.path.exists(token_file): credentials = Credentials.from_authorized_user_file(token_file, scope) if not credentials or not credentials.valid: if credentials and credentials.expired and credentials.refresh_token: credentials.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file(credentials_file, scope) credentials = flow.run_local_server(port=0) with open(token_file, 'w') as token: token.write(credentials.to_json()) service = build('gmail', 'v1', credentials=credentials) filter_body = {'criteria': {'from': '[email protected]'}, 'action': {'removeLabelsIds': ['SPAM']}} result = ( service.users() .settings() .filters() .create(userId='me', body=filter_body) .execute()) return result if __name__ == '__main__': main() I am getting this error Traceback (most recent call last): File "C:\DATA\WORK\Assets\Development\Python\GmailManager\src\temp.py", line 36, in <module> main() File "C:\DATA\WORK\Assets\Development\Python\GmailManager\src\temp.py", line 31, in main .execute()) ^^^^^^^^^ File "C:\DATA\WORK\Assets\Development\Python\GmailManager\.venv\Lib\site-packages\googleapiclient\_helpers.py", line 130, in positional_wrapper return wrapped(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\DATA\WORK\Assets\Development\Python\GmailManager\.venv\Lib\site-packages\googleapiclient\http.py", line 938, in execute raise HttpError(resp, content, uri=self.uri) googleapiclient.errors.HttpError: <HttpError 400 when requesting https://gmail.googleapis.com/gmail/v1/users/me/settings/filters?alt=json returned "Filter doesn't have any actions". Details: "[{'message': "Filter doesn't have any actions", 'domain': 'global', 'reason': 'invalidArgument'}]"> The info under link https://gmail.googleapis.com/gmail/v1/users/me/settings/filters?alt=json is this { "error": { "code": 401, "message": "Request is missing required authentication credential. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.", "errors": [ { "message": "Login Required.", "domain": "global", "reason": "required", "location": "Authorization", "locationType": "header" } ], "status": "UNAUTHENTICATED", "details": [ { "@type": "type.googleapis.com/google.rpc.ErrorInfo", "reason": "CREDENTIALS_MISSING", "domain": "googleapis.com", "metadata": { "method": "caribou.api.proto.MailboxService.ListFilters", "service": "gmail.googleapis.com" } } ] } } which is quite strange because right now I do not know if the error stems from content of the filter or from authentication error.
Please modify your script as follows. From: filter_body = {'criteria': {'from': '[email protected]'}, 'action': {'removeLabelsIds': ['SPAM']}} To: filter_body = {'criteria': {'from': '[email protected]'}, 'action': {'removeLabelIds': ['SPAM']}} In this modification, Labels of removeLabelsIds is modified to Label like removeLabelIds. Reference: Method: users.settings.filters.create
2
1
79,012,503
2024-9-22
https://stackoverflow.com/questions/79012503/how-to-properly-track-gradients-with-mygrad-when-using-scipys-rectbivariatespli
I'm working on a project where I need to interpolate enthalpy values using scipy.interpolate.RectBivariateSpline and then perform automatic differentiation using mygrad. However, I'm encountering an issue where the gradient is not tracked at all across the interpolation. Here is a simplified version of my code: import numpy as np from scipy.interpolate import RectBivariateSpline import CoolProp.CoolProp as CP import mygrad as mg from mygrad import tensor # Define the refrigerant refrigerant = 'R134a' # Constant temperature (e.g., 20Β°C) T = 20 + 273.15 # Convert to Kelvin # Get saturation pressures P_sat = CP.PropsSI('P', 'T', T, 'Q', 0, refrigerant) # Define a pressure range around the saturation pressure P_min = P_sat * 0.5 P_max = P_sat * 1.5 P_values = np.linspace(P_min, P_max, 100) # Define a temperature range around the constant temperature T_min = T - 10 T_max = T + 10 T_values = np.linspace(T_min, T_max, 100) # Generate enthalpy data h_values = [] for P in P_values: h_row = [] for T in T_values: try: h = CP.PropsSI('H', 'P', P, 'T', T, refrigerant) h_row.append(h) except: h_row.append(np.nan) h_values.append(h_row) # Convert lists to arrays h_values = np.array(h_values) # Fit spline for enthalpy h_spline = RectBivariateSpline(P_values, T_values, h_values) # Function to interpolate enthalpy def h_interp(P, T): return tensor(h_spline.ev(P, T)) # Example function using the interpolated enthalpy with AD def example_function(P): h = h_interp(P, T) result = h**2 # Example calculation return result # Define a pressure value for testing P_test = tensor(P_sat, ) # Compute the example function and its gradient result = example_function(P_test) result.backward() # Print the result and the gradient print(f"Result: {result.item()}") print(f"Gradient: {P_test.grad}") Are these just issues of RectBivariateSpline or mygrad? Would this work with other algebraic differentiation libs? Should I use something else besides splines?
The problem here is that MyGrad doesn't know how to differentiate this operation. You can get around this by defining a custom operation with a backwards pass. The MyGrad docs explain this here. In order to implement the backward pass, you need to be able to evaluate a partial derivative of the spline. The SciPy docs explain this here. (See the dx and dy arguments.) Combining the two, you get this: import numpy as np import mygrad as mg from mygrad import execute_op from mygrad.operation_base import Operation from mygrad.typing import ArrayLike # All operations should inherit from Operation, or one of its subclasses class CustomInterpolate(Operation): """ Performs f(x, y) = RectBivariateSpline.ev(x, y) """ def __call__(self, x: mg.Tensor, y: mg.Tensor, spline) -> np.ndarray: # This method defines the "forward pass" of the operation. # It must bind the variable tensors to the op and compute # the output of the operation as a numpy array # All tensors must be bound as a tuple to the `variables` # instance variable. self.variables = (x, y) self.spline = spline # The forward pass should be performed using numpy arrays, # not the tensors themselves. x_arr = x.data y_arr = y.data return self.spline.ev(x_arr, y_arr) def backward_var(self, grad, index, **kwargs): """Given ``grad = dβ„’/df``, computes ``βˆ‚β„’/βˆ‚x`` and ``βˆ‚β„’/βˆ‚y`` ``β„’`` is assumed to be the terminal node from which ``β„’.backward()`` was called. Parameters ---------- grad : numpy.ndarray The back-propagated total derivative with respect to the present operation: dβ„’/df. This will have the same shape as f, the result of the forward pass. index : Literal[0, 1] The index-location of ``var`` in ``self.variables`` Returns ------- numpy.ndarray βˆ‚β„’/βˆ‚x_{i} Raises ------ SkipGradient""" x, y = self.variables x_arr = x.data y_arr = y.data # The operation need not incorporate specialized logic for # broadcasting. The appropriate sum-reductions will be performed # by MyGrad's autodiff system. if index == 0: # backprop through a return self.spline.ev(x.data, y.data, dx=1) elif index == 1: # backprop through b return self.spline.ev(x.data, y.data, dy=1) # Our function stitches together our operation class with the # operation arguments via `mygrad.prepare_op` def custom_interpolate(x: ArrayLike, y: ArrayLike, spline, constant=None) -> mg.Tensor: # `execute_op` will take care of: # - casting `x` and `y` to tensors if they are instead array-likes # - propagating 'constant' status to the resulting output based on the inputs # - handling in-place operations (specified via the `out` parameter) return execute_op(CustomInterpolate, x, y, op_args=(spline,), constant=constant) You can use this operation like so: def h_interp(P, T): return custom_interpolate(P, T, h_spline) And then you can differentiate across this interpolation operation. Output: Result: 176061599645.87317 Gradient: -0.02227965104792612
1
2
79,012,469
2024-9-22
https://stackoverflow.com/questions/79012469/how-do-i-change-a-variable-while-inheriting
I'm using OOP for the first time, I want to make the SubWindow class inherit all properties from the MainWindow class, but self.root would be tk.Toplevel() instead of tk.Tk(): import tkinter as tk class MainWindow: def __init__(self, size, title): self.root = tk.Tk() self.root.geometry(size) self.root.title(title) def packAll(self): for widget in self.root.children: self.root.children[widget].pack() class SubWindow(MainWindow): def __init__(self, size, title): super().__init__(size, title) If I put self.root = tk.Toplevel() after super().__init__(size, title), it still creates another instance of tkinter. Private variables haven't worked either. I haven't found a solution online. How do I solve this?
What you can do is to pass a required value for self.root to the base class constructor, with a default where the base class chooses tk.Tk(): import tkinter as tk class MainWindow: def __init__(self, size, title, root=None): if root == None: root = tk.Tk() self.root = root self.root.geometry(size) self.root.title(title) def packAll(self): for widget in self.root.children: self.root.children[widget].pack() class SubWindow(MainWindow): def __init__(self, size, title): super().__init__(size, title, tk.Toplevel()) This way only one root is instantiated, occurring before geometry() and title() are called. But, really, you shouldn't have to have if statements when you can just pass in the right values: import tkinter as tk class Window: def __init__(self, root, size, title): self.root = root self.root.geometry(size) self.root.title(title) def packAll(self): for widget in self.root.children: self.root.children[widget].pack() class MainWindow(Window): def __init__(self, size, title): super().__init__(tk.Tk(), size, title) class SubWindow(Window): def __init__(self, size, title): super().__init__(tk.Toplevel(), size, title)
3
3
79,009,647
2024-9-21
https://stackoverflow.com/questions/79009647/how-to-calculate-the-exponential-moving-average-ema-through-record-iterations
I have created a pandas dataframe as follows: import pandas as pd import numpy as np ds = { 'trend' : [1,1,1,1,2,2,3,3,3,3,3,3,4,4,4,4,4], 'price' : [23,43,56,21,43,55,54,32,9,12,11,12,23,3,2,1,1]} df = pd.DataFrame(data=ds) The dataframe looks as follows: display(df) trend price 0 1 23 1 1 43 2 1 56 3 1 21 4 2 43 5 2 55 6 3 54 7 3 32 8 3 9 9 3 12 10 3 11 11 3 12 12 4 23 13 4 3 14 4 2 15 4 1 16 4 1 I have saved the dataframe to a .csv file called df.csv: df.to_csv("df.csv", index = False) I need to create a new field called ema2 which: iterates through each and every record of the dataframe calculates the Exponential Moving Average (EMA) by considering the price observed at each iteration and the prices (EMA length is 2 in this example) observed in the previous trends. For example: I iterate at record 0 and the EMA is NaN (missing). I iterate at record 1 and the EMA is still NaN (missing) I Iterate at record 12 and the EMA is 24.20 (it considers price at record 3, price at record 5 and price at record 12 I Iterate at record 13 and the EMA is 13.53 (it considers price at record 3, price at record 5 and price at record 13 I Iterate at record 15 and the EMA is 12.46 (it considers price at record 3, price at record 5 and price at record 15 and so on ..... I have written the following code: time_window = 2 ema= [] for i in range(len(df)): ds = pd.read_csv("df.csv", nrows=i+1) d = ds.groupby(['trend'], as_index=False).agg( {'price':'last'}) d['ema2'] = d['price'].ewm(com=time_window - 1, min_periods=time_window).mean() ema.append(d['ema2'].iloc[-1]) df['ema2'] = ema Which produces the correct dataframe: print(df) trend price ema2 0 1 23 NaN 1 1 43 NaN 2 1 56 NaN 3 1 21 NaN 4 2 43 35.666667 5 2 55 43.666667 6 3 54 49.571429 7 3 32 37.000000 8 3 9 23.857143 9 3 12 25.571429 10 3 11 25.000000 11 3 12 25.571429 12 4 23 24.200000 13 4 3 13.533333 14 4 2 13.000000 15 4 1 12.466667 16 4 1 12.466667 The problem is that when the dataframe has millions of records: it takes a very long time to run. Does anyone know how to get the same results in a quick, efficient way, please?
I slightly changed the example from your earlier question about RSI: I added -1 in the first prev, in the loop that fills by slices, in price, and in setting the values via a slice of the data frame. You can also try numba or cython, but most likely the code will need to be rewritten (not all numpy functions are available from them; I don't know about pandas). trends = df["trend"].unique() arr = df['price'].values range_group = np.stack( [df[df["trend"] == trend].index.values.take([0, -1]) for trend in trends] ) price = np.full((len(df), trends.size), np.nan) prev = arr[range_group[:time_window-1, 1]] for i in range(time_window-1, len(trends)): stop = range_group[i, 1] + 1 price[range_group[i, 0]:stop, -1] = arr[range_group[i, 0]:stop] price[range_group[i, 0]:stop, -(prev.size+1):-1] = prev prev = price[range_group[i, 1], -(prev.size+1):] price = price[range_group[time_window-1, 0]:] val = (pd.DataFrame(price).T.ewm(com=time_window - 1, min_periods=time_window).mean().iloc[-1].values) df.loc[range_group[time_window-1, 0]:, 'ema'] = val
2
1
79,011,621
2024-9-22
https://stackoverflow.com/questions/79011621/sqlite-gives-a-value-which-i-cannot-recreate
Using django: here is some values and the query: max_played, avg_days_last_played = get_next_song_priority_values() # Calculate time since played using raw SQL time_since_played_expr = RawSQL("(julianday('now') - julianday(main_song.played_at))", []) query = Song.objects # Annotate priority songs_with_priority = query.annotate( time_since_played=ExpressionWrapper(time_since_played_expr, output_field=FloatField()), priority=( F('rating') - (F('count_played') / Value(max_played)) + (F('time_since_played') / Value(avg_days_last_played)) ), ).order_by('-priority') my logging: logger.info(f'Next Song: {next_song}') calculated_priority = ( next_song.rating - (next_song.count_played / max_played) + (next_song.time_since_played / avg_days_last_played) ) logger.info(f'Next Song: priority {next_song.priority:.2f} vs calc {calculated_priority:.2f}') logger.info(f'Next Song: rating {next_song.rating:.2f}') playd = next_song.count_played / max_played logger.info(f'Next Song: played {playd:.2f} ({next_song.count_played} / {max_played})') tspd = next_song.time_since_played / avg_days_last_played logger.info( f'Next Song: days {tspd:.2f} ({next_song.time_since_played} / {avg_days_last_played})' ) and I get: INFO Next Song: <Song-1489 End of the World Cold> INFO Next Song: priority 2.73 vs calc 2.56 INFO Next Song: rating 0.50 INFO Next Song: played 0.17 (1 / 6) INFO Next Song: days 2.23 (4.043296354357153 / 1.8125720233656466) So my calculated value is lower. All the values are there: the rating of 0.5 is solid, the play counts if 1 vs 6 is solid, the time since is used from the result next_song.time_since_played. I'm using the same values sqlite should be using, but my calc is different.
In SQLite, integer division produces an integer value, so 1 / 6 = 0: sqlite> select 1 / 6; 0 You can multiply one of the value by 1.0 to convert it to a float value and then this should work: sqlite> select 1 / 6.0; 0.166666666666667 priority=( F('rating') - (F('count_played') / Value(max_played * 1.0)) # <- this line + (F('time_since_played') / Value(avg_days_last_played)) ) or explicitly convert max_played into a float: (F('count_played') / Value(float(max_played)))
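An alternative sketch, swapping in Django's Cast function so the division itself is done in floating point; this is a suggested variation, not part of the accepted answer, and it assumes the same annotate() call and variables as in the question.

from django.db.models import F, FloatField, Value
from django.db.models.functions import Cast

priority = (
    F('rating')
    - (Cast('count_played', FloatField()) / Value(max_played))   # float division instead of integer division
    + (F('time_since_played') / Value(avg_days_last_played))
)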
1
2
79,010,903
2024-9-22
https://stackoverflow.com/questions/79010903/counting-the-number-of-positive-integers-that-are-lexicographically-smaller-than
Say I have a number num and I want to count the number of positive integers in the range [1, n] which are lexicographically smaller than num and n is some arbitrary large integer. A number x is lexicographically smaller than a number y if the converted string str(x) is lexicographically smaller than the converted string str(y). I want to do this efficiently since n could be large (eg. 10^9). My idea for this is using digit dynamic programming. Essentially what I'm thinking is that every number in the range [1,n] can be represented as a string of len(str(n)) slots. At each slot, the upper bound for this is either the digit at the last position of num (this is for the case where we pick trailing zeros) or the digit at the last position of n. This is because if the previous digit is already smaller than the corresponding digit in num then we are free to pick any digit up to the corresponding digit in n. Below is my code in Python that attempts to do this from functools import cache def count(num, n): num = str(num) n = str(n) max_length = len(n) @cache def dp(indx, compare_indx, tight, has_number): if indx == max_length: return int(has_number) ans = 0 upper_bound = int(num[compare_indx]) if tight else int(n[indx]) for digit in range(upper_bound + 1): if digit == 0 and not has_number: ans += dp(indx + 1, compare_indx, tight and (digit == upper_bound), False) else: ans += dp(indx + 1, min(compare_indx + 1, len(num) - 1), tight and (digit == upper_bound), True) return ans return dp(0, 0, True, False) However, count(7, 13) outputs 35 which is not correct since the lexicographical order of [1, 13] is [1, 10, 11, 12, 13, 2, 3, 4, 5, 6, 7, 8, 9] so count(7, 13) should be 10. Can anyone help me out here?
I couldn't follow the logic in your explanation, but this shouldn't need dynamic programming. In essence you want to do a separate count for each possible width of an integer. For instance, when calling count(7, 13), you'd want to count: integers with one digit: [1, 6] = 6 integers integers with two digits: [10, 13] = 4 integers The outcome is the sum: 6 + 4 = 10 Take count(86, 130) as another example, where the first argument has more than one digit: integers with one digit: [1, 8] = 8 integers (note that 8 is included) integers with two digits: [10, 85] = 76 integers (note that 86 is excluded) integers with three digits: [100, 130] = 31 integers Total is: 115 So some care has to be taken at the high-end of the ranges: when it is a proper prefix of the first argument, it should be included, if not, it should be excluded. And of course, for that last group (with the greatest number of digits) you should take care not to exceed the value of the second argument. Here is how you could code that logic: def count(num, n): strnum = str(num) lennum = len(strnum) max_length = len(str(n)) strnum += "0" * (max_length - lennum) # pad with zeroes at the right count = 0 low = 1 for width in range(1, max_length + 1): high = int(strnum[:width]) addone = width < lennum and n >= high count += min(high, n + 1) - low + addone low *= 10 return count
3
2
79,010,931
2024-9-22
https://stackoverflow.com/questions/79010931/how-to-trigger-a-post-request-api-to-add-a-record-in-a-sqlite-database-table-usi
I am trying to submit a HTML form from the browser to create a new user in a SQLite database table. Clicking on the Submit button triggers a POST request using FastAPI and Sqlalchemy 2.0. The API works perfectly when executed from the Swagger UI. But it does not work when triggered from an actual HTML form, returning a 422 Unprocessable Entity error. Below is the code I have used for the same along with the error I am seeing on the browser. I can see that the error is pointing to an id which does not exist in my Pydantic model and also in my html form. Any help on how to handle this error would be greatly appreciated. I am using: Python 3.12.6 (x64) on Windows 11 Sqlalchemy 2.0.34 Pydantic 2.9.1 core/database.py from os import getenv from dotenv import load_dotenv from sqlalchemy import create_engine from sqlalchemy.orm import sessionmaker, DeclarativeBase load_dotenv() # Needed to load full path of the .env file engine = create_engine( getenv("DATABASE_URL"), connect_args={"check_same_thread": False} ) SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine) # Database dependency async def get_db(): db = SessionLocal() try: yield db finally: db.close() # declarative base class class Base(DeclarativeBase): pass users/models.py from datetime import datetime, timezone from sqlalchemy import String, Enum from sqlalchemy.orm import Mapped, mapped_column from typing import Optional, List from enum import Enum as pyEnum from .database import Base class Gender(str, pyEnum): default = "" male = "M" female = "F" def time_now(): return datetime.now(timezone.utc).strftime("%b %d, %Y %I:%M:%S %p") class AbstractBase(Base): __abstract__ = True id: Mapped[Optional[int]] = mapped_column(primary_key=True, index=True) created_by: Mapped[Optional[str]] = mapped_column(String(50), nullable=True, default="") updated_by: Mapped[Optional[str]] = mapped_column(String(50), nullable=True, default="") created: Mapped[Optional[str]] = mapped_column(nullable=True, default=time_now) updated: Mapped[Optional[str]] = mapped_column(nullable=True, default=time_now, onupdate=time_now) class User(AbstractBase): __tablename__ = "users" first_name: Mapped[str] = mapped_column(String(25), nullable=False) last_name: Mapped[Optional[str]] = mapped_column(String(25), default="") gender: Mapped[Optional[Gender]] = mapped_column(Enum(Gender), nullable=False, default=Gender.default.value) email: Mapped[str] = mapped_column(String(50), unique=True, nullable=False, index=True) users/schemas.py from typing import Optional from pydantic import BaseModel, EmailStr, Field from .models import Gender class UserCreate(BaseModel): first_name: str = Field(min_length=1, max_length=25) last_name: Optional[str] = Field(min_length=0, max_length=25, default="") gender: Optional[Gender] = Gender.default.value email: EmailStr = Field(min_length=7, max_length=50) class UserUpdate(UserCreate): pass class UserResponse(UserUpdate): id: Optional[int] created_by: Optional[EmailStr] = Field(min_length=7, max_length=50) updated_by: Optional[EmailStr] = Field(min_length=7, max_length=50) created: Optional[str] updated: Optional[str] class Config: from_attributes = True users/routers.py from fastapi import APIRouter, Depends, HTTPException, Request, Form from fastapi.templating import Jinja2Templates from fastapi.responses import HTMLResponse from sqlalchemy.orm import Session from core.database import get_db from users.models import User from users.schemas import UserCreate, UserUpdate, UserResponse templates = 
Jinja2Templates(directory="templates") users_router = APIRouter() # API to redirect user to the Register page @users_router.get("/register", response_class=HTMLResponse, status_code=200) async def redirect_user(request: Request): return templates.TemplateResponse( name="create_user.html", context={ "request": request, "title": "FastAPI - Create User", "navbar": "create_user" } ) # API to create new user @users_router.post("/create", response_model=UserCreate, response_class=HTMLResponse, status_code=201) async def create_user(request: Request, user: UserCreate=Form(), db: Session = Depends(get_db)): if db.query(User).filter(User.email == user.email).first(): raise HTTPException( status_code=403, detail=f"Email '{user.email}' already exists. Please try with another email." ) obj = User( first_name = user.first_name, last_name = user.last_name, gender = user.gender.value, email = user.email.lower(), created_by = user.email.lower(), updated_by = user.email.lower() ) db.add(obj) db.commit() db.refresh(obj) return ( templates.TemplateResponse( name="create_user.html", context={ "request": request, "item": obj, "title": "FastAPI - Create User", "navbar": "create_user" } ) ) main.py from fastapi import FastAPI, Request from fastapi.middleware.cors import CORSMiddleware from fastapi.templating import Jinja2Templates from core.database import Base, engine from users.routers import users_router templates = Jinja2Templates(directory="templates") # Create FastAPI instance app = FastAPI() app.include_router(users_router, prefix='/users', tags = ['Users']) # Specify URLS that are allowed to connect to the APIs origins = [ "http://localhost", "http://127.0.0.1", "http://localhost:8000", "http://127.0.0.1:8000" ] app.add_middleware( CORSMiddleware, allow_origins=origins, allow_credentials=True, allow_methods=["*"], allow_headers=["*"], ) # Create tables in database Base.metadata.create_all(bind=engine) templates/create_user.py {% extends 'base.html' %} {% block title %} {{ title }} {% endblock title %} {% block content %} <form method="POST" action="/create"> <div class="mb-3"> <label for="first_name" class="form-label">First Name</label> <input type="text" class="form-control" id="first_name" name="first_name"> </div> <div class="mb-3"> <label for="last_name" class="form-label">Last Name</label> <input type="text" class="form-control" id="last_name" name="last_name"> </div> <div class="mb-3"> <label for="email" class="form-label">Email</label> <input type="email" class="form-control" id="email" name="email" aria-describedby="emailHelp"> </div> <a href="{{ url_for('create_user') }}" type="submit" class="btn btn-outline-success float-end">Submit</a> </form> {% endblock content %} Error Message on Browser: { "detail": [ { "type": "int_parsing", "loc": [ "path", "id" ], "msg": "Input should be a valid integer, unable to parse string as an integer", "input": "create" } ] }
I can see a couple of potential problems here: Use a Form dependency for each form field: @users_router.post("/create", response_class=HTMLResponse, status_code=201) async def create_user( request: Request, first_name: str = Form(), last_name: str = Form(), gender: str = Form(), email: str = Form(), db: Session = Depends(get_db) ): ... or parse the FormData into a Pydantic model: class UserCreate(BaseModel): first_name: str = Field(min_length=1, max_length=25) last_name: str | None = Field(default="", min_length=0, max_length=25) gender: Gender | None = Field(default=Gender.default.value) email: EmailStr = Field(min_length=7, max_length=50) @classmethod def as_form( cls, first_name: str = Form(...), last_name: str | None = Form(''), gender: Gender | None = Form(Gender.default.value), email: EmailStr = Form(...) ): ... In your form you are using an <a> tag with an href attribute and a type="submit" attribute, but the <a> tag doesn't support the type attribute, and clicking it causes a GET request to the URL specified in href, not a POST request with the form data. This means your form data isn't being submitted. So you could change the action attribute of your <form> tag: <form method="POST" action="/users/create"> or <form method="POST" action="{{request.url_for('create_user')}}"> but make sure to add name='create_user' to your route so that url_for can find it: @users_router.post("/create", ..., name='create_user') async def create_user(...): ... After this, replace the <a> tag with an element of type submit: <button type="submit" class="btn btn-outline-success float-end">Submit</button>
2
1
79,003,067
2024-9-19
https://stackoverflow.com/questions/79003067/how-to-check-if-a-specific-list-element-is-a-number
I have 2 lists, one that contains both numbers and strings and one only numbers: list1 = [1, 2, 'A', 'B', 3, '4'] list2 = [1, 2, 3, 4, 5, 6] My goal is to print from list2 only the numbers that have another number (both as number or string) in the same index in list1. Expected output: [1,2,5,6] I have tried the following code: lenght1 = len(list1) for i in range(lenght1): if (list1[i].isdigit()): print(list2[i]) But I receive the following error: AttributeError: 'int' object has no attribute 'isdigit' Same error with .isnumber(). Is there a way to check a specific list element if it is a number?
This could be solved in a one-liner solution, like so: list1 = [1,2,'A','B',3,'4'] list2 = [1,2,3,4,5,6] print([list2[index] for index, x in enumerate(list1) if isinstance(x, int)]) But we can't check if a string can become an int in this specific case. Basically, using list comprehension, we filter the first list and we create a new list based on the second one, so we should implement a way to check if a string can become an int. To do so, we need one more check (and we'll move to a foreach for better readability). list1 = [1,2,'A','B',3,'4'] list2 = [1,2,3,4,5,6] output_list = [] for index, x in enumerate(list1): if isinstance(x, int): output_list.append(list2[index]) if isinstance(x, str): if x.isdigit(): output_list.append(list2[index]) In conclusion, isdigit() is only available for strings, that's why you get the error. You loop through integers too, so it doesn't exist. Note that this code is NOT flexible and it only covers the specific case. For example, if negative numbers are prompted or lists are of different lengths, it will break. If we want to also cover the negative numbers case, a quick fix is to swap from x.isdigit() to a try-except, like so: list1 = [1,2,'A','B',3,'4'] list2 = [1,2,3,4,5,6] output_list = [] for index, x in enumerate(list1): if isinstance(x, int): output_list.append(list2[index]) if isinstance(x, str): try: int(x) output_list.append(list2[index]) except: continue If it can't be converted to int, then go to the next iteration.
3
6
79,009,790
2024-9-21
https://stackoverflow.com/questions/79009790/executing-a-market-buy-order-through-kucoin-api
I have just started with the Kucoin API but I'm having trouble executing a market buy order in the futures market using Python. I'm using this GitHub repo as a reference: https://github.com/Kucoin/kucoin-futures-python-sdk Here's the code I tried: from kucoin_futures.client import Trade client = Trade(key='myKey', secret='mySecret', passphrase='myPP') client.create_market_order(symbol="ZEROUSDTM", side="buy", funds=1, lever=1) However, this returns the following error: File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\site-packages\kucoin_futures\base_request\base_request.py", line 128, in check_response_data raise Exception("{}-{}".format(response_data.status_code, response_data.text)) Exception: 200-{"msg":"Quantity parameter cannot be empty.","code":"100001"} Does anyone know where I'm going wrong? Thanks for reading.
by default orders are limit orders. So you need to specify price and size attributes. This is according the API manual: https://www.kucoin.com/docs/rest/spot-trading/orders/place-order
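A minimal sketch of what the corrected call might look like. Note that the size keyword (contract quantity) is an assumption inferred from the "Quantity parameter cannot be empty" error and is not confirmed here against the SDK - check the kucoin-futures-python-sdk trade module for the exact parameter name:
from kucoin_futures.client import Trade

client = Trade(key='myKey', secret='mySecret', passphrase='myPP')

# 'size' (number of contracts) replaces 'funds'; verify the exact keyword
# against the SDK source/docs before using this in practice.
client.create_market_order(symbol="ZEROUSDTM", side="buy", lever=1, size=1)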
2
1
79,008,530
2024-9-20
https://stackoverflow.com/questions/79008530/mutating-cells-in-a-large-polars-python-dataframe-with-iter-rows-yields-segmen
I have a large dataframe that looks like this: df_large = pl.DataFrame({'x':['h1','h2','h2','h3'], 'y':[1,2,3,4], 'ind1':['0/0','1/0','1/1','0/1'], 'ind2':['0/1','0/2','1/1','0/0'] }).lazy() df_large.collect() | x | y | ind_1 | ind_2 | |_______|_______|_______|_________| | "h1" | 1 | "0/0" | '0/1' | | "h2" | 2 | "1/0" | '0/2' | | "h2" | 3 | "1/1" | '1/1' | | "h3" | 4 | "0/1" | '0/0' | df_large contains coordinates [x (str), y (int)] and string values for many individuals [ind_1,ind_2,...]. It is very large, so I have to read the CSV file as a lazy dataframe. Additionally, I have a small dataframe that looks like this: df_rep = pl.DataFrame({'x':['h1','h2','h2'], 'y':[1,2,2], 'ind':['ind1','ind1','ind2']}) df_rep | x | y | indvs | |_______|_______|_________| | "h1" | 1 | "ind_1" | | "h2" | 2 | "ind_1" | | "h2" | 2 | "ind_2" | I need to mutate the values for the columns ind_k in df_large when they appears on df_rep. I did the following code for that: for row in df_rep.iter_rows(): df_large = df_large.with_columns( pl.when(pl.col('x') == row[0], pl.col('y') == row[1]) .then(pl.col(row[2]).str.replace_all('(.)/(.)','./.')) .otherwise(pl.col(row[2])) .alias(row[2]) ) df_large.collect() | x | y | ind_1 | ind_2 | |_______|_______|_______|_________| | "h1" | 1 | "./." | '0/1' | | "h2" | 2 | "./." | './.' | | "h2" | 3 | "1/1" | '1/1' | | "h3" | 4 | "0/1" | '0/0' | This method, while slow, works for a subset of the larger dataset. However, Polars produces a segmentation fault when applied to the full dataset. I was hoping you could provide feedback on how to resolve this issue. An alternative method to achieve my goal without using iter_rows() would be ideal! I am a beginner with Polars, and I would greatly appreciate any feedback. I've been stuck on this issue for some time now :(
If you reshape the small frame with .pivot() df_rep.with_columns(value=True).pivot(on="ind", index=["x", "y"]) shape: (2, 4) ┌─────┬─────┬──────┬──────┐ │ x ┆ y ┆ ind1 ┆ ind2 │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ bool ┆ bool │ ╞═════╪═════╪══════╪══════╡ │ h1 ┆ 1 ┆ true ┆ null │ │ h2 ┆ 2 ┆ true ┆ true │ └─────┴─────┴──────┴──────┘ You could then match the rows with a left .join() and put the when/then logic into a single .with_columns() call. index = ["x", "y"] other = df_rep.with_columns(value=True).pivot(on="ind", index=index) names = other.drop(index).columns (df_large .join(other, on=index, how="left") .with_columns( pl.when(pl.col(f"{name}_right")) .then(pl.col(name).str.replace_all(r"(.)/(.)", "./.")) .otherwise(pl.col(name)) for name in names ) ) shape: (4, 6) ┌─────┬─────┬──────┬──────┬────────────┬────────────┐ │ x ┆ y ┆ ind1 ┆ ind2 ┆ ind1_right ┆ ind2_right │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ str ┆ str ┆ bool ┆ bool │ ╞═════╪═════╪══════╪══════╪════════════╪════════════╡ │ h1 ┆ 1 ┆ ./. ┆ 0/1 ┆ true ┆ null │ │ h2 ┆ 2 ┆ ./. ┆ ./. ┆ true ┆ true │ │ h2 ┆ 3 ┆ 1/1 ┆ 1/1 ┆ null ┆ null │ │ h3 ┆ 4 ┆ 0/1 ┆ 0/0 ┆ null ┆ null │ └─────┴─────┴──────┴──────┴────────────┴────────────┘ You can then .drop() the names_right columns.
1
2
79,009,454
2024-9-21
https://stackoverflow.com/questions/79009454/convert-a-list-of-time-string-to-unique-string-format
I have a list of time string with different formats as shown time = ["1:5 am", "1:35 am", "8:1 am", "9:14 am", "14:23 pm", "20:2 pm"] dict = {'time': time} df = pd.DataFrame(dict) and wanted to replace strings in list as shown below. ["01:05 am", "01:35 am", "08:01 am", "09:14 am", "14:23 pm", "20:02 pm"] Not sure how to write a regex that format the string in DataFrame.
A possible solution, which is based on regex. (df['time'].str.replace(r'^(\d):', r'0\1:', regex=True) .str.replace(r':(\d)\s', r':0\1 ', regex=True)) The main ideas are: With r'^(\d):', one matches a single digit at the beginning of the string followed by a colon (e.g., 1: in 1:5 am). With r'0\1:', one adds a 0 before the captured single digit and retains the colon. With r':(\d)\s', one matches a single digit after a colon and before a space (e.g., :5 in 1:5 am). With r':0\1 ', one adds a 0 before the captured single digit and retains the colon and space. Output: 0 01:05 am 1 01:35 am 2 08:01 am 3 09:14 am 4 14:23 pm 5 20:02 pm Name: time, dtype: object
3
1
79,008,533
2024-9-20
https://stackoverflow.com/questions/79008533/how-do-i-count-blank-an-filled-cells-in-each-column-in-a-csv-file
What I want is to count the filled and blank cells in each column of a .csv file. This is my code: import pandas as pd file_path = r"C:\Users\andre\OneDrive\Documentos\Farmácia\Python\Cadastro_clientes\cadastro_cli.csv" df = pd.read_csv(file_path, sep='|', header=None) # No names argument, read all columns filled_counts = df.count() # Count non-null entries for all columns blank_counts = df.isnull().sum() # Count null (blank) entries for all columns summary = pd.DataFrame({ 'Filled': filled_counts, 'Blank': blank_counts }) print("\nFilled and Blank Counts:") print(summary) I only get this, which is not what I want at all: Filled and Blank Counts: Filled Blank 0 22318 0 I'm using Jupyter Notebook. Any help or tips are very appreciated!
If you want to tally the number of filled and blank cells for each column individually, use: summary = pd.DataFrame({ 'Filled': df.notnull().sum(), # Count non-null (filled) cells 'Blank': df.isnull().sum() # Count null (blank) cells }) print(summary, "\n") If you want to tally for the whole csv, use: total_filled = summary['Filled'].sum() total_blank = summary['Blank'].sum() print(f"Filled: {total_filled}, \nBlank: {total_blank}") Complete script: import pandas as pd file_path = r"C:\Users\andre\OneDrive\Documentos\Farmácia\Python\Cadastro_clientes\cadastro_cli.csv" df = pd.read_csv(file_path, sep='|') # if csv does not have headers, add 'header=None' # Calculate filled and blank counts for each column summary = pd.DataFrame({ 'Filled': df.notnull().sum(), 'Blank': df.isnull().sum() }) print(summary, "\n") # Calculate totals for the entire CSV total_filled = summary['Filled'].sum() total_blank = summary['Blank'].sum() print(f"Total filled: {total_filled} \nTotal blank: {total_blank}")
1
3
79,008,391
2024-9-20
https://stackoverflow.com/questions/79008391/grabbing-a-specific-url-from-a-webpage-with-re-and-requests
import requests, re r = requests.get('example.com') p = re.compile('\d') print(p.match(str(r.text))) This always prints None, even though r.text definitely contains numbers, but print(p.match('12345')) works. What do I need to do to r.text to make it readable by re.compile.match()? Casting to str is clearly insufficient.
It is because re.match only checks for a match at the beginning of the string, and r.text does not start with a number. If you want to find the first match, then use re.search instead: import requests, re r = requests.get('https://example.com') p = re.compile(r'\d') print(p.search(r.text)) Output: <re.Match object; span=(88, 89), match='8'> From the docs: Pattern.match: If zero or more characters at the beginning of string match this regular expression, return a corresponding Match. Pattern.search: Scan through string looking for the first location where this regular expression produces a match, and return a corresponding Match.
2
0
79,007,387
2024-9-20
https://stackoverflow.com/questions/79007387/python-3-superclass-instantiation-via-derived-classs-default-constructor
In this code: class A(): def __init__(self, x): self.x = x def __str__(self): return self.x class B(A): def __str__(self): return super().__str__() b = B("Hi") print(b) The output is: Hi. What is happening under the hood? How does the default constructor in the derived class invoke the super class constructor? How are the params passed to the derived class object get mapped to those of the super class?
How does the default constructor in the derived class invoke the super class constructor? You didn't override it in B, so you inherited it from A. That's what inheritance is for. >>> B.__init__ is A.__init__ True In the same vein, you may as well not define B.__str__ at all here, since it doesn't do anything (other than add a useless extra frame into the call stack). How are the params passed to the derived class object get mapped to those of the super class? You may be overthinking this. As shown above, B.__init__ and A.__init__ are identical. B.__init__ gets resolved in the namespace of A, since it is not present in the namespace of B. >>> B.__mro__ (__main__.B, __main__.A, object) >>> A.__dict__["__init__"] <function __main__.A.__init__(self, x)> >>> B.__dict__["__init__"] ... # KeyError: '__init__' What is happening under the hood? Please note that __init__ methods are not constructors. If you're looking for an analogy of constructors in other programming languages, the __new__ method may be more suitable than __init__. First, an instance of B will created (by __new__), and then this instance will be passed as the first positional argument self to A.__init__, along with the string value "Hi" for the second positional argument x.
1
7
79,004,202
2024-9-19
https://stackoverflow.com/questions/79004202/minimum-number-of-operations-for-array-of-numbers-to-all-equal-one-number
You have one array of numbers, for example [2, 5, 1]. You have a second array of numbers, for example [8, 4, 3]. For each of the numbers in the second array, how many operations would it take to make the first array all equal that number? You can only increment or decrement by 1 at a time. To get to 8, it would take (8-2)+(8-5)+(8-1)=16 operations. To get to 4, it would take (4-2)+(5-4)+(4-1)=6 operations. To get to 3, it would take (3-2)+(5-3)+(3-1)=5 operations. So the answer would be [16, 6, 5]. I was able to do this in one line: answer = [sum(abs(x-y) for x in a1) for y in a2] But this wasn't fast enough when the arrays can have up to 105 items. How would I be able to do this faster?
Same approach as the others (sort, use bisect to split into the i smaller and the n-i larger values, and look up their sums with precomputed prefix sums), just less code: from itertools import accumulate from bisect import bisect def solve(a1, a2): a1.sort() p1 = 0, *accumulate(a1) n = len(a1) total = p1[-1] return [ (i*y - p1[i]) + (total-p1[i] - (n-i)*y) for y in a2 for i in [bisect(a1, y)] ] Attempt This Online! The expression in the list comprehension could be "optimized" to (2*i-n)*y + total-2*p1[i], but I can't make sense of that and prefer the longer version where I compute the costs for the i smaller values and the n-i larger values separately.
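For a quick sanity check, a usage example against the numbers in the question (this only exercises the solve() function defined above):
print(solve([2, 5, 1], [8, 4, 3]))   # [16, 6, 5], matching the expected output
Note that solve() sorts a1 in place, so pass a copy if the original order matters.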
2
1
79,004,666
2024-9-19
https://stackoverflow.com/questions/79004666/why-does-ismethod-return-false-for-a-method-when-accessed-via-the-class
Define a simple method: class Foo: def bar(self): print('bar!') Now use inspect.ismethod: >>> from inspect import ismethod >>> ismethod(Foo.bar) False Why is this? Isn't bar a method?
Consult the documentation: inspect.ismethod(object) Return True if the object is a bound method written in Python. So ismethod is actually only testing for bound methods. That means if you create an instance and access the method through the instance, ismethod will return True: >>> obj = Foo() >>> ismethod(obj.bar) True A "bound" method means a method which is already bound to an object, so that object is provided as the self parameter. This is how you can call obj.bar() with no arguments, even though bar was declared to have one parameter. We can also see the difference by looking at the types: >>> Foo.bar <function Foo.bar at 0x7853771f1ab0> >>> obj.bar <bound method Foo.bar of <__main__.Foo object at 0x7853771e4850>> Only the latter is a bound method. Foo.bar is a function, it is not considered to be a method by Python's type system. So why does inspect.ismethod behave this way? From Python's perspective, bar is an ordinary function which happens to be referenced by an attribute of the class Foo. To make this clear, suppose you define a function outside of the class, and then assign it to a class attribute: def baz(self): print('baz!') Foo.baz = baz If you call ismethod(baz) you should expect it to be False because baz is obviously just a function declared in the outer scope. And you will get the same result if you call ismethod(Foo.baz), because Foo.baz is just a reference to the same function. In fact, the expression Foo.baz is evaluated to get a reference to the function before that reference is passed as an argument to ismethod; of course, ismethod can't give different answers for the same argument depending on where that argument comes from. The same applies when the function bar is declared inside the class Foo. The way classes work in Python, this is pretty much equivalent to declaring the function outside of the class and then assigning it to a class attribute of the same name as the function. So bar and baz work exactly the same way: >>> obj.baz <bound method baz of <__main__.Foo object at 0x7853771e4850>> >>> obj.baz() baz! In short, Foo.bar is "not a method" because in Python, "methods" are just ordinary functions until they are bound to instances.
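A small complementary check: inspect.isfunction is the test that does return True for the plain function accessed on the class, so the two checks side by side look like this:
from inspect import isfunction, ismethod

class Foo:
    def bar(self):
        print('bar!')

print(isfunction(Foo.bar))   # True  - plain function stored on the class
print(ismethod(Foo.bar))     # False
print(ismethod(Foo().bar))   # True  - bound to an instance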
3
4
78,997,019
2024-9-18
https://stackoverflow.com/questions/78997019/in-python-3-12-why-does-%c3%96l-take-less-memory-than-%c3%96
I just read PEP 393 and learned that Python's str type uses different internal representations, depending on the content. So, I experimented a little bit and was a bit surprised by the results: >>> sys.getsizeof('') 41 >>> sys.getsizeof('H') 42 >>> sys.getsizeof('Hi') 43 >>> sys.getsizeof('Ö') 61 >>> sys.getsizeof('Öl') 59 I understand that in the first three cases, the strings don't contain any non-ASCII characters, so an encoding with 1 byte per char can be used. Putting a non-ASCII character like Ö in a string forces the interpreter to use a different encoding. Therefore, I'm not surprised that 'Ö' takes more space than 'H'. However, why does 'Öl' take less space than 'Ö'? I assumed that whatever internal representation is used for 'Öl' allows for an even shorter representation of 'Ö'. I'm using Python 3.12, apparently it is not reproducible in earlier versions.
This test code (the structures are only correct according to 3.12.4 source, and even so I didn't quite double-check them) import ctypes import sys class PyUnicodeObject(ctypes.Structure): _fields_ = [ ("ob_refcnt", ctypes.c_ssize_t), ("ob_type", ctypes.c_void_p), ("length", ctypes.c_ssize_t), ("hash", ctypes.c_ssize_t), ("state", ctypes.c_uint64), ] class StateBitField(ctypes.LittleEndianStructure): _fields_ = [ ("interned", ctypes.c_uint, 2), ("kind", ctypes.c_uint, 3), ("compact", ctypes.c_uint, 1), ("ascii", ctypes.c_uint, 1), ("statically_allocated", ctypes.c_uint, 1), ("_padding", ctypes.c_uint, 24), ] def __repr__(self): return ", ".join(f"{k}: {getattr(self, k)}" for k, *_ in self._fields_ if not k.startswith("_")) def dump_s(s: str): o = PyUnicodeObject.from_address(id(s)) state_int = o.state state = StateBitField.from_buffer(ctypes.c_uint64(state_int)) print(f"{s!r}".ljust(8), f"{o.length=}, {sys.getsizeof(s)=}, {state}") dump_s('5') dump_s('a') dump_s('À') dump_s('vvv') dump_s('ÖÖÖ') dump_s(str(chr(214))) # avoid the string having been interned into module source dump_s(str(chr(214) + chr(108))) # avoid the string having been interned into module source prints out '5' o.length=1, sys.getsizeof(s)=42, interned: 3, kind: 1, compact: 1, ascii: 1, statically_allocated: 1 'a' o.length=1, sys.getsizeof(s)=42, interned: 3, kind: 1, compact: 1, ascii: 1, statically_allocated: 1 'À' o.length=1, sys.getsizeof(s)=61, interned: 0, kind: 1, compact: 1, ascii: 0, statically_allocated: 1 'vvv' o.length=3, sys.getsizeof(s)=44, interned: 2, kind: 1, compact: 1, ascii: 1, statically_allocated: 0 'ÖÖÖ' o.length=3, sys.getsizeof(s)=60, interned: 0, kind: 1, compact: 1, ascii: 0, statically_allocated: 0 'Ö' o.length=1, sys.getsizeof(s)=61, interned: 0, kind: 1, compact: 1, ascii: 0, statically_allocated: 1 'Öl' o.length=2, sys.getsizeof(s)=59, interned: 0, kind: 1, compact: 1, ascii: 0, statically_allocated: 0 'Ö' o.length=1, sys.getsizeof(s)=61, interned: 0, kind: 1, compact: 1, ascii: 0, statically_allocated: 1 – the smoking gun seems to be statically_allocated on 'Ö' etc. I think that stems from this line in pycore_runtime_init_generated where it looks like the runtime statically allocates objects for all Latin-1 strings (among others). As discussed in the comments, this CPython PR added UTF-8 representations of all of these statically allocated strings, so 'Ö' is statically stored as both Latin-1 (1 character) and UTF-8 (2 characters). Also, I should note getsizeof() actually forwards to unicode_sizeof_impl, it's not just measuring memory.
36
19
79,002,792
2024-9-19
https://stackoverflow.com/questions/79002792/numpy-random-0-and-1-matrix-with-bias-towards-0
Is there a smart, fast way to create an n x n numpy array filled with 0's and 1's with bias towards 0's? I did np.random.randint(2, (size,size)) but the bias is not accounted for here. I could do for loop but i want a faster cleaner way to populate the matrix. Thanks!
I would use numpy.random.choice with custom probabilities: n = 10 p = 0.9 # probability of 0s out = np.random.choice([0, 1], size=(n, n), p=[p, 1-p]) Another option, generating numbers in the 0-1 range, comparing to the probability of 0s and converting the booleans to integer: out = (np.random.random((n, n))>p).astype(int) Example output: [[0 0 0 0 0 0 0 0 1 0] [0 0 0 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 0 0 0] [0 1 0 0 0 0 0 0 0 1] [0 1 0 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 0 0 0] [0 1 0 0 0 0 0 0 0 0] [0 0 1 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 0 0 1]]
3
3
78,999,687
2024-9-18
https://stackoverflow.com/questions/78999687/polars-make-all-groups-the-same-size
Question I'm trying to make all groups for a given data frame have the same size. In Starting point below, I show an example of a data frame that I wish to transform. In Goal I try to demonstrate what I'm trying to achieve. I want to group by the column group, make all groups have a size of 4, and fill 'missing' values with null - I hope it's clear. I have tried several approaches but have not been able to figure this one out. Starting point dfa = pl.DataFrame(data={'group': ['a', 'a', 'a', 'b', 'b', 'c'], 'value': ['a1', 'a2', 'a3', 'b1', 'b2', 'c1']}) ┌───────┬───────┐ │ group ┆ value │ │ --- ┆ --- │ │ str ┆ str │ ╞═══════╪═══════╡ │ a ┆ a1 │ │ a ┆ a2 │ │ a ┆ a3 │ │ b ┆ b1 │ │ b ┆ b2 │ │ c ┆ c1 │ └───────┴───────┘ Goal >>> make_groups_uniform(dfa, group_by='group', group_size=4) ┌───────┬───────┐ │ group ┆ value │ │ --- ┆ --- │ │ str ┆ str │ ╞═══════╪═══════╡ │ a ┆ a1 │ │ a ┆ a2 │ │ a ┆ a3 │ │ a ┆ null │ │ b ┆ b1 │ │ b ┆ b2 │ │ b ┆ null │ │ b ┆ null │ │ c ┆ c1 │ │ c ┆ null │ │ c ┆ null │ │ c ┆ null │ └───────┴───────┘ Package version polars: 1.1.0
The advantage of the approach below is that we don't transform the original DataFrame (except maybe sorting if you want to rearrange the groups); we only create additional rows and append them back to the original DataFrame. I've adjusted my answer a bit, based on the assumption that you want the group size to be the max of the size of all groups, but it works as well for a fixed group_size. group_by() allows per-group calculation. len() to determine the size of the group. repeat_by() creates lists based on the previously calculated group size and the max() group size. filter() to filter out empty lists for the case when we don't need to add extra rows. explode() lists into a column. concat() back to the existing DataFrame. (optional) sort() if you need groups to be together. # you can use fixed group_size instead of pl.col.len.max() as well pl.concat([ dfa, ( dfa.group_by("group").len() .select(pl.col.group.repeat_by(pl.col.len.max() - pl.col.len)) .filter(pl.col.group.list.len() != 0) .explode("group") ) ], how="diagonal").sort("group") shape: (9, 2) ┌───────┬───────┐ │ group ┆ value │ │ --- ┆ --- │ │ str ┆ str │ ╞═══════╪═══════╡ │ a ┆ a1 │ │ a ┆ a2 │ │ a ┆ a3 │ │ b ┆ b1 │ │ b ┆ b2 │ │ b ┆ null │ │ c ┆ c1 │ │ c ┆ null │ │ c ┆ null │ └───────┴───────┘ If you need a fixed group size, then repeat() is probably more performant, but the idea is the same - only generate additional rows and append them back to the original DataFrame. group_size = 3 # you can make it dynamic as well though # group_size = dfa.group_by("group").len().max()["len"].item() pl.concat([ dfa, ( dfa.group_by("group") .agg(value = pl.repeat(None, group_size - pl.len().cast(int))) .filter(pl.col.value.list.len() != 0) .explode("value") ) ]).sort("group") shape: (9, 2) ┌───────┬───────┐ │ group ┆ value │ │ --- ┆ --- │ │ str ┆ str │ ╞═══════╪═══════╡ │ a ┆ a1 │ │ a ┆ a2 │ │ a ┆ a3 │ │ b ┆ b1 │ │ b ┆ b2 │ │ b ┆ null │ │ c ┆ c1 │ │ c ┆ null │ │ c ┆ null │ └───────┴───────┘
3
3
79,002,206
2024-9-19
https://stackoverflow.com/questions/79002206/concatenate-polars-dataframe-with-columns-of-dtype-enum
Consider having two pl.DataFrames with identical schema. One of the columns has dtype=pl.Enum. import polars as pl enum_col1 = pl.Enum(["type1"]) enum_col2 = pl.Enum(["type2"]) df1 = pl.DataFrame( {"enum_col": "type1", "value": 10}, schema={"enum_col": enum_col1, "value": pl.Int64}, ) df2 = pl.DataFrame( {"enum_col": "type2", "value": 200}, schema={"enum_col": enum_col2, "value": pl.Int64}, ) print(df1) print(df2) shape: (1, 2) ┌──────────┬───────┐ │ enum_col ┆ value │ │ --- ┆ --- │ │ enum ┆ i64 │ ╞══════════╪═══════╡ │ type1 ┆ 10 │ └──────────┴───────┘ shape: (1, 2) ┌──────────┬───────┐ │ enum_col ┆ value │ │ --- ┆ --- │ │ enum ┆ i64 │ ╞══════════╪═══════╡ │ type2 ┆ 200 │ └──────────┴───────┘ If I try to do a simple pl.concat([df1, df2]), I get the following error: polars.exceptions.SchemaError: type Enum(Some(local), Physical) is incompatible with expected type Enum(Some(local), Physical) You can get around this issue by "enlarging" the enums like this: pl.concat( [ df1.with_columns(pl.col("enum_col").cast(pl.Enum(["type1", "type2"]))), df2.with_columns(pl.col("enum_col").cast(pl.Enum(["type1", "type2"]))), ] ) shape: (2, 2) ┌──────────┬───────┐ │ enum_col ┆ value │ │ --- ┆ --- │ │ enum ┆ i64 │ ╞══════════╪═══════╡ │ type1 ┆ 10 │ │ type2 ┆ 200 │ └──────────┴───────┘ I guess there is a more pythonic way to do this?
you can cast enum_col to combined enum type: enum_col = enum_col1 | enum_col2 pl.concat( df.with_columns(pl.col.enum_col.cast(enum_col)) for df in [df1, df2] ) shape: (2, 2) ┌──────────┬───────┐ │ enum_col ┆ value │ │ --- ┆ --- │ │ enum ┆ i64 │ ╞══════════╪═══════╡ │ type1 ┆ 10 │ │ type2 ┆ 200 │ └──────────┴───────┘ You can also create new enum_col dynamically, for example: from functools import reduce enum_col = reduce(lambda x,y: x | y, [df.schema["enum_col"] for df in [df1, df2]]) Enum(categories=['type1', 'type2'])
2
3
79,000,778
2024-9-19
https://stackoverflow.com/questions/79000778/how-to-expand-a-single-index-dataframe-to-a-multiindex-dataframe-in-an-efficient
import pandas as pd concordance_region = pd.DataFrame( { "country 1": pd.Series([1, 0], index=["region a", "region b"]), "country 2": pd.Series([0, 1], index=["region a", "region b"]), "country 3": pd.Series([0, 1], index=["region a", "region b"]), } ) display(concordance_region) country_index = concordance_region.columns region_index = concordance_region.index sector_index = ['sector a','sector b'] country_sector = pd.MultiIndex.from_product([country_index, sector_index], names=["country", "sector"]) region_sector = pd.MultiIndex.from_product([region_index, sector_index], names=["region", "sector"]) concordance_region_expanded = pd.DataFrame([[1,0,0,0,0,0],[0,1,0,0,0,0],[0,0,1,0,1,0],[0,0,0,1,0,1]], index=region_sector, columns=country_sector) display(concordance_region_expanded) I want to achieve the above expansion without hard-coding the number. An option is that: concordance_region_extended = pd.DataFrame(index=region_sector, columns=country_sector) for region in region_index: for sector_1 in sector_index: for country in country_index: for sector_2 in sector_index: if sector_1 == sector_2 and concordance_region.loc[region, country] == 1: concordance_region_expanded.loc[(region, sector_1),(country, sector_2)] = 1 concordance_region_expanded = concordance_region_expanded.fillna(value=0).infer_objects(copy=False) concordance_region_expanded But I think the above code is neither efficient nor elegant. Any way to solve the above problem?
Code: use np.kron with an identity matrix (the identity matrix can be created with np.eye). import pandas as pd import numpy as np # taken from questioner's code sector_index = ['sector a', 'sector b'] country_sector = pd.MultiIndex.from_product( [country_index, sector_index], names=["country", "sector"]) region_sector = pd.MultiIndex.from_product( [region_index, sector_index], names=["region", "sector"]) # start answer n = len(sector_index) out = pd.DataFrame( np.kron(concordance_region.values, np.eye(n)), index=region_sector, columns=country_sector, dtype='int' ) out
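For readers unfamiliar with np.kron, a small self-contained demonstration of why the Kronecker product with an identity matrix produces this block expansion (the values mirror the question's example):
import numpy as np

concordance = np.array([[1, 0, 0],
                        [0, 1, 1]])   # region x country, as in the question
block = np.eye(2, dtype=int)          # sector x sector identity

# Each entry c[i, j] is replaced by c[i, j] * block, so a 1 becomes a
# 2x2 identity block and a 0 becomes a 2x2 zero block.
print(np.kron(concordance, block))
# [[1 0 0 0 0 0]
#  [0 1 0 0 0 0]
#  [0 0 1 0 1 0]
#  [0 0 0 1 0 1]]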
1
4
78,999,867
2024-9-18
https://stackoverflow.com/questions/78999867/django-change-form-prefix-separator
I'm using form prefix to render the same django form twice in the same template and avoid identical fields id's. When you do so, the separator between the prefix and the field name is '-', I would like it to be '_' instead. Is it possible ? Thanks
You could "monkey patch" [wiki] the BaseForm code, for example in some AppConfig: # app_name/config.py from django.apps import AppConfig class MyAppConfig(AppConfig): def ready(self): from django.forms.forms import BaseForm def add_prefix(self, field_name): return f'{self.prefix}_{field_name}' if self.prefix else field_name BaseForm.add_prefix = add_prefix But I would advise not to do this. This will normally generate the correct prefixes with an underscore. But some Django apps or some logic in the Django project itself might not use this method, and make the assumption it works with an hyphen instead. While probably most of such packages could indeed get fixed, it would take a lot of work.
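For illustration, once such a patch is applied, the separator changes as follows (MyForm is a hypothetical form class and 'email' a hypothetical field name):
form = MyForm(prefix="contact")
form.add_prefix("email")   # returns "contact_email" instead of "contact-email"
Rendered widget names and auto-generated ids would then use the underscore as well.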
2
2
78,998,333
2024-9-18
https://stackoverflow.com/questions/78998333/python-inheritance-not-returning-new-class
I'm having problems understanding why the inheritance is not working in the following example: import vlc class CustomMediaPlayer(vlc.MediaPlayer): def __init__(self, *args): super().__init__(*args) def custom_method(self): print("EUREKA") custom_mp = CustomMediaPlayer() print(custom_mp) custom_mp.custom_method() This outputs: <vlc.MediaPlayer object at 0x7743d37db8f0> AttributeError: 'MediaPlayer' object has no attribute 'custom_method' instead of a CustomMediaPlayer object, with the custom_method. Why is this happening? Is it because vlc.MediaPlayer is a _Ctype class?
This can happen if the base class overrides the __new__ method, which controls what happens when someone attempts to instantiate the class. The typical behaviour is that a new instance of the given class argument is created; but the __new__ function is free to return an existing object, or create an object according to its own internal logic. In their comment, Rogue links to the source code for the module, which does indeed override __new__. If there is a way to subclass vlc.MediaPlayer successfully, it should be listed in the module's documentation; otherwise the way to address this problem will depend on your specific requirements.
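A minimal, vlc-free sketch of the mechanism described above, assuming a base class whose __new__ ignores the requested subclass (this is an illustration, not the actual vlc code):
class Base:
    def __new__(cls, *args):
        # Ignore cls and always build a plain Base instance, so nothing
        # defined on a subclass ever ends up on the returned object.
        return object.__new__(Base)

class Custom(Base):
    def custom_method(self):
        print("EUREKA")

obj = Custom()
print(type(obj))        # <class '__main__.Base'>, not Custom
# obj.custom_method()   # would raise AttributeError, as in the question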
4
3
78,998,783
2024-9-18
https://stackoverflow.com/questions/78998783/sum-of-corresponding-values-from-different-arrays-of-the-same-size-with-python
I'm rather new to Python so it's quite possible that my question has already been asked on the net but when I find things that seem relevant, I don't always know how to use them in my code (especially if it's a function definition), so I apologise if there's any redundancy. I work with daily temperature data from the Copernicus website (https://marine.copernicus.eu/). As the netCDF files are too large if I want the data for every day of every month for several years, what I'm trying to do is access the data without downloading it so that I can work with it. The data is in the form of an array for 1 day of a month of a year. I want to sum the values of all the arrays for each day of a month in a year. To make things clearer, here's an example: Simplified arrays : array1([1,4,3,9] [7,5,2,3]) array2([3,8,6,1] [6,4,7,2]) #... etc until day 28,29,30 or 31 The result I want : array1 + array 2 => ([1+3,4+8,3+6,9+1] [7+6,5+4,2+7,3+2]) array1 + array 2 => ([4,12,9,10] [13,9,9,5]) I first tried to do it without loop with the data for 2 days and it works. My code : import os import xarray as xr import numpy as np import netCDF4 as nc import copernicusmarine # Access the data DS = copernicusmarine.open_dataset(dataset_id="cmems_mod_glo_phy_my_0.083deg_P1D-m") # Get only thetao (temperature) variable for 1 day subset = DS[['thetao']].sel(time = slice("2014-01-01", "2014-01-01")) # Obtain only data of a certain depth target_depth = 0 #surface subset_T = subset.thetao.isel(depth=target_depth) # To view my data in array thetao_depth0 = subset_T.data thetao_depth0 # Same thing for next day of the same month and year subset2 = DS[['thetao']].sel(time = slice("2014-01-02", "2014-01-02")) subset_T2 = subset2.thetao.isel(depth=target_depth) thetao_depth0_2 = subset_T2.data thetao_depth0_2 # The sum of my arrays days_sum = thetao_depth0 + thetao_depth0_2 days_sum My thetao_depth0 arrays look like this : For 01/01/2014 : array([[[ nan, nan, nan, ..., nan, nan, nan], [ nan, nan, nan, ..., nan, nan, nan], [ nan, nan, nan, ..., nan, nan, nan], ..., [-1.70870081, -1.70870081, -1.70870081, ..., -1.70870081, -1.70870081, -1.70870081], [-1.71016569, -1.71016569, -1.71016569, ..., -1.71016569, -1.71016569, -1.71016569], [ nan, nan, nan, ..., nan, nan, nan]]]) For 02/01/2014 : array([[[ nan, nan, nan, ..., nan, nan, nan], [ nan, nan, nan, ..., nan, nan, nan], [ nan, nan, nan, ..., nan, nan, nan], ..., [-1.70870081, -1.70870081, -1.70870081, ..., -1.70870081, -1.70870081, -1.70870081], [-1.71016569, -1.71016569, -1.71016569, ..., -1.71016569, -1.71016569, -1.71016569], [ nan, nan, nan, ..., nan, nan, nan]]]) And I get days_sum : array([[[ nan, nan, nan, ..., nan, nan, nan], [ nan, nan, nan, ..., nan, nan, nan], [ nan, nan, nan, ..., nan, nan, nan], ..., [-3.41740161, -3.41740161, -3.41740161, ..., -3.41740161, -3.41740161, -3.41740161], [-3.42033139, -3.42033139, -3.42033139, ..., -3.42033139, -3.42033139, -3.42033139], [ nan, nan, nan, ..., nan, nan, nan]]]) Now here's where it gets complicated. I'd like to create a loop that does the same thing with all the arrays for every day of a month in a year (from 01/01/2014 to 31/01/2014 for example). 
So far I've done this : day = ['01','02','03','04','05','06','07','08','09','10','11','12','13','14','15','16','17','18','19','20','21','22','23','24','25','26','27','28','29','30','31'] month = ['01'] year = ['2014'] DS = copernicusmarine.open_dataset(dataset_id="cmems_mod_glo_phy_my_0.083deg_P1D-m") for y in year: for m in month: for d in day: start_date="%s"%y+"-%s"%m+"-%s"%d end_date=start_date subset_thetao = DS[['thetao']].sel(time = slice(start_date, end_date)) target_depth = 0 subset_depth = subset_thetao.thetao.isel(depth=target_depth) thetao_depth0 = subset_depth.data But I'm having trouble adding up the arrays for each round of the loop. I first tried things with np.sum but either it's not made for what I want to do, or I'm doing it wrong, especially when it comes to storing the array with the sum in a variable. I've added empty_array = np.array([]) before my for loop but I don't know what to do next in the loop. This is the first time I've handled arrays with python, so maybe I'm doing it wrong. In the end, what I'd like to do is average the values of my different arrays over a month. A simplified example with 3 days of a month : array1([1,4,3,9] [7,5,2,3]) array2([3,8,6,1] [6,4,7,2]) array3([3,2,6,1] [1,4,5,2]) To get : array([(1+3+3)/3,(4+8+2)/3,...etc] [...etc]) array([2.3,4.6,5,3.6] [4.6,4.3,4.6,2.3])
I've added empty_array = np.array([]) before my for loop but I don't know what to do next in the loop. Almost right. You need to instantiate an array of the same shape as the arrays you will sum. With Numpy, you can only sum arrays of the same shape. You can inspect shape using arr.shape. The array should initially be filled with zeros. This way, you end up with the sum after adding all your other arrays. import numpy as np # Create a zero-filled array with the same shape as the arrays you need to sum sum_arr = np.zeros_like(thetao_depth0) # alternative: sum_arr = np.zeros(thetao_depth0.shape) for d in day: # ... # get the array subset_depth.data for the current day # ... # add the data of this day to the sum sum_arr += subset_depth.data
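Since the stated end goal is a monthly mean, one way to finish is sketched below; daily_arrays is an assumed list collecting each day's subset_depth.data inside the loop:
import numpy as np

stacked = np.stack(daily_arrays)        # shape: (n_days, ...) - all days on the same grid
monthly_mean = stacked.mean(axis=0)     # or np.nanmean(stacked, axis=0) to ignore NaN cells
Dividing sum_arr by the number of days gives the same result when no NaNs are involved.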
2
2
78,997,513
2024-9-18
https://stackoverflow.com/questions/78997513/why-is-there-typeerror-string-indices-must-be-integers-when-using-negative-in
I would like to understand why this works fine: >>> test_string = 'long brown fox jump over a lazy python' >>> 'formatted "{test_string[0]}"'.format(test_string=test_string) 'formatted "l"' Yet this fails: >>> 'formatted "{test_string[-1]}"'.format(test_string=test_string) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: string indices must be integers >>> 'formatted "{test_string[11:14]}"'.format(test_string=test_string) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: string indices must be integers I know this could be used: 'formatted "{test_string}"'.format(test_string=test_string[11:14]) ...but that is not possible in my situation. I am dealing with a sandbox-like environment where a list of variables is passed to str.format() as dictionary of kwargs. These variables are outside of my control. I know the names and types of variables in advance and can only pass formatter string. The formatter string is my only input. It all works fine when I need to combine a few strings or manipulate numbers and their precision. But it all falls apart when I need to extract a substring.
Why it doesn't work This is explained in the spec of str.format(): The arg_name can be followed by any number of index or attribute expressions. An expression of the form '.name' selects the named attribute using getattr(), while an expression of the form '[index]' does an index lookup using __getitem__(). That is, you can index the string using bracket notation, and the index you put inside the brackets will be the argument of the __getitem__() method of the string. This is indexing, not slicing. The bottom line is that str.format() simply doesn't support slicing of the replacement field (= the part between {}), as this functionality isn't part of spec. Regarding negative indices, the grammar specifies: element_index ::= digit+ | index_string This means that the index can either be a sequence of digits (digit+) or a string. Since any negative index such as -1 is not a sequence of digits, it will be parsed as index_string. However, str.__getitem__() only supports arguments of type integer. Hence the error TypeError: string indices must be integers, not 'str'. Solutions to the problem Use f-strings >>> test_string = 'long brown fox jump over a lazy python' >>> f"formatted {test_string[0]}" 'formatted l' >>> f"formatted {test_string[0:2]}" 'formatted lo' >>> f"formatted {test_string[-1]}" 'formatted n' Use str.format() but slice the argument of str.format() directly, rather than the replacement field >>> test_string = 'long brown fox jump over a lazy python' >>> 'formatted {replacement}'.format(replacement=test_string[0:2]) 'formatted lo' >>> 'formatted {replacement}'.format(replacement=test_string[-1]) 'formatted n'
4
4
78,996,869
2024-9-18
https://stackoverflow.com/questions/78996869/random-adjacendy-matrix-from-list-of-degrees
I want to do exactly the same thing as this post, but in Python; i.e. given a list of natural integers, generate a random adjacency matrix whose degrees would match the list. I had high hopes, as the proposed solution uses a function from igraph, sample_degseq. However, it seems like this function does not exist in the Python version of igraph, at least as far as I have looked into it. I could program such a function myself, but I'm not exactly smart enough to make it fast enough, and I would like this to be done in an efficient way.
The equivalent of R/igraph's sample_degseq() in python-igraph is Graph.Degree_Sequence(). Note that not all methods sample uniformly, and not all methods produce the same kind of graph (simple graph vs multigraph). "configuration_simple" and "edge_switching_simple" sample simple graphs uniformly. The former is exactly uniform (but very slow for anything but small degrees) and the latter almost exactly uniform. I recommend "edge_switching_simple". It basically generates a first graph using Graph.Realize_Degree_Sequence(), then it rewires it using rewire(), using 10 times as many steps as the number of edges.
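A short sketch of how that call might look in python-igraph (the degree sequence is made up; the method names are the ones mentioned above, so double-check availability against your installed igraph version):
import igraph as ig

degrees = [3, 3, 2, 2, 1, 1]   # must sum to an even number and be graphical
g = ig.Graph.Degree_Sequence(degrees, method="edge_switching_simple")
print(g.degree())              # [3, 3, 2, 2, 1, 1]
print(g.is_simple())           # True for this method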
2
1
78,981,196
2024-9-13
https://stackoverflow.com/questions/78981196/can-i-access-directories-in-palantir-and-use-for-to-get-names-of-all-tables-insi
I need to simplify the process of downloading datasets from Palantir. My idea was to use it like a directory on a local PC, but the problem is that when I create a codespace to use my own code, it seems to use a virtual Python environment, so I can't access the directories outside of the environment that hold the datasets I want to use. So, from my perspective, the process should be: get into a directory with datasets; make some kind of FOR loop based on the logic I need and insert the names of the files into a list; download all tables from the list. Is there some way to do it? I tried to access the directory with the datasets, but as I am in a virtual Python environment, I don't know how. I need to run the script inside Palantir. Right now we download datasets one by one through the Palantir UI, but that consumes a lot of time.
If what you want to do (best guess, I +1 the comment below your post that it would be great if you can clarify what is what exactly - datasets, files, etc.) is: I have a lot of files on my local laptop, I need to upload them to Foundry, process them, and this will generate another dataset of lot of files, how can I download them in bulk ? Then my guess, is: you create a dataset in Foundry, you can bulk upload to dataset by drag and dropping all your files from your local laptop to the dataset. A dataset is primarily a "set of files" which can be of any type. There is no need to have a schema on a dataset to be processed You pick the app of your choice (Code Workspace for a jupyter like experience, Code Repo for pro-code, Pipeline Builder for no-code/low-code) - My preference is Code Repo, but Code Workspace is likely a good option as well given it generates small code snippets for you You process the files one by one. Here is a typical example = https://www.palantir.com/docs/foundry/transforms-python/unstructured-files/ Below is an example that simply "copy paste" content from the input dataset to the output dataset # @lightweight() # Optional - simply doesn't use spark as not needed # @incremental() # Optional - only to process the new files on each run @transform( input_files=Input("/PATH/example_incremental_dataset"), output=Output("/PATH/example_incremental_lightweight_output"), ) def compute_downstream(input_files, output): fs = input_files.filesystem() files = list(fs.ls()) # listing all the files in the dataset timestamp = int(time.time()) logger.warning(f"These are the files that will be processed: {files}") for curr_input_file in files: with input_files.filesystem().open(curr_input_file.path, "rb") as f1: with output.filesystem().open(curr_input_file.path + f"_{timestamp}.txt", "wb") as f2: f2.write(f1.read()) Now you want to download the output. This hasn't a first class solution, but you will have a few alternatives depending on what exactly you want to download For example: You have a tabular dataset: You can download as CSV/EXCEL in the limit of 200k rows or so (top right in the dataset > Actions > Download as CSV). If files are produced, you can go download them one by one (dataset > Details > Files > Download). 
See https://www.palantir.com/docs/foundry/code-repositories/prepare-datasets-download/#access-the-file-for-download You could add some post-processing, to compress all the files into one archive file, which you can download manually # UNTESTED # Note: If you want to read the files written on your output, to then save the zip file on your output as well, you will need to add the @incremental() decorator # which acts a bit like an "advanced" mode where you can read your output - which is useful in that case import zipfile import os def compress_files(file_paths, output_zip): with zipfile.ZipFile(output_zip, 'w') as zipf: for file in file_paths: if os.path.isfile(file): # Check if file exists zipf.write(file, os.path.basename(file)) else: print(f"File {file} does not exist and will be skipped.") # Example usage files_to_compress = ['file1.txt', 'file2.txt', 'file3.txt'] output_zip_file = 'compressed_files.zip' compress_files(files_to_compress, output_zip_file) You could script the "download" by triggering the "download" event programmatically from your laptop, see the docs about Foundry's API = https://www.palantir.com/docs/foundry/api/datasets-resources/files/list-files/ and https://www.palantir.com/docs/foundry/api/datasets-resources/files/get-file-content/ Hope that helps EDIT: In case you have a dynamic set of files, see https://www.palantir.com/docs/foundry/transforms-python/unstructured-files/ in particular: file_statuses = list(your_input.filesystem().ls()) # Result: [FileStatus(path='students.csv', size=688, modified=...)] paths = [f.path for f in file_statuses] # Result: ['students.csv', ...]
2
0
78,972,060
2024-9-11
https://stackoverflow.com/questions/78972060/how-to-extract-values-based-on-column-names-and-put-it-in-another-column-in-pola
I would like to fill a value in a column based on another columns' name, in the Polars library from python (I obtained the following DF by exploding my variables' column names): Input: df = pl.from_repr(""" β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Name ┆ Average ┆ Median ┆ Q1 ┆ Variable β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 ┆ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═════════β•ͺ════════β•ͺ═════β•ͺ══════════║ β”‚ Apple ┆ 2 ┆ 3 ┆ 4 ┆ Average β”‚ β”‚ Apple ┆ 2 ┆ 3 ┆ 4 ┆ Median β”‚ β”‚ Apple ┆ 2 ┆ 3 ┆ 4 ┆ Q1 β”‚ β”‚ Banana ┆ 1 ┆ 5 ┆ 10 ┆ Average β”‚ β”‚ Banana ┆ 1 ┆ 5 ┆ 10 ┆ Median β”‚ β”‚ Banana ┆ 1 ┆ 5 ┆ 10 ┆ Q1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ """) Expected output: shape: (6, 6) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ Name ┆ Average ┆ Median ┆ Q1 ┆ Variable ┆ value β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 ┆ i64 ┆ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═════════β•ͺ════════β•ͺ═════β•ͺ══════════β•ͺ═══════║ β”‚ Apple ┆ 2 ┆ 3 ┆ 4 ┆ Average ┆ 2 β”‚ β”‚ Apple ┆ 2 ┆ 3 ┆ 4 ┆ Median ┆ 3 β”‚ β”‚ Apple ┆ 2 ┆ 3 ┆ 4 ┆ Q1 ┆ 4 β”‚ β”‚ Banana ┆ 1 ┆ 5 ┆ 10 ┆ Average ┆ 1 β”‚ β”‚ Banana ┆ 1 ┆ 5 ┆ 10 ┆ Median ┆ 5 β”‚ β”‚ Banana ┆ 1 ┆ 5 ┆ 10 ┆ Q1 ┆ 10 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ I have tried: df = df.with_columns(value = pl.col(f"{pl.col.variable}")) But that does not work because polars perceives the argument as a function (?). Does anyone know how to do this? Note: I have also tried to transpose the dataframe, which, not only was that computationally expensive, also did not work! Because it would transpose the DF into a 5-rows-long DF. What I need is a (Name * Number of Variables)-rows-long DF. That is, for example, I have 3 different names (say, Apple, Banana, and Dragonfruit), and I have 3 variables (Average, Median, Q1), then my DF should be 9-rows-long!
You can use when/then() to check whether the value of the column Variable is the same as the column name. coalesce() to choose first non-empty result. df.with_columns( value = pl.coalesce( pl.when(pl.col.Variable == col).then(pl.col(col)) for col in df["Variable"].unique() ) ) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ Name ┆ Average ┆ Median ┆ Q1 ┆ Variable ┆ value β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 ┆ i64 ┆ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═════════β•ͺ════════β•ͺ═════β•ͺ══════════β•ͺ═══════║ β”‚ Apple ┆ 2 ┆ 3 ┆ 4 ┆ Average ┆ 2 β”‚ β”‚ Apple ┆ 2 ┆ 3 ┆ 4 ┆ Median ┆ 3 β”‚ β”‚ Apple ┆ 2 ┆ 3 ┆ 4 ┆ Q1 ┆ 4 β”‚ β”‚ Banana ┆ 3 ┆ 5 ┆ 10 ┆ Average ┆ 3 β”‚ β”‚ Banana ┆ 3 ┆ 5 ┆ 10 ┆ Median ┆ 5 β”‚ β”‚ Banana ┆ 3 ┆ 5 ┆ 10 ┆ Q1 ┆ 10 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
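An alternative sketch of the same idea via reshaping, assuming a recent Polars version where unpivot and with_row_index are available; it avoids generating one when/then branch per candidate column:

df_idx = df.with_row_index()
value = (
    df_idx
    .unpivot(index=["index", "Variable"], on=["Average", "Median", "Q1"], variable_name="col")
    .filter(pl.col("Variable") == pl.col("col"))   # keep the row whose column name matches Variable
    .select("index", "value")
)
out = df_idx.join(value, on="index", how="left").drop("index")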
4
2
78,975,421
2024-9-11
https://stackoverflow.com/questions/78975421/how-do-i-filter-across-multiple-model-relationships
My models: class Order (models.Model): customer = models.ForeignKey("Customer", on_delete=models.RESTRICT) request_date = models.DateField() price = models.DecimalField(max_digits=10, decimal_places=2) @property def agent_name(self): assignment = Assignment.objects.get(assig_year = self.request_date.year, customer = self.customer) if assignment is not None: return assignment.sales_agent.name + ' ' + assignment.sales_agent.surname else: return 'ERROR' class Company (models.Model): pass class Customer (Company): pass class Assignment (models.Model): assig_year = models.PositiveSmallIntegerField() customer = models.ForeignKey("Customer", on_delete=models.CASCADE) sales_agent = models.ForeignKey("Agent", on_delete=models.CASCADE) class Meta: #unique key year + customer constraints = [ UniqueConstraint( fields=['assig_year', 'customer'], name='Primary_Key_Assignment' ) ] class Employee (models.Model): name = models.CharField(max_length=32) surname = models.CharField(max_length=32) class Agent (Employee): pass One assignment relates each customer to a sales agent for a given year. Each customer may have several orders along the year and the agent assigned to the customer is accountable for serving all of them. In one of my views I am displaying all orders by listing their corresponding sales agent, customer, date and price: def GetOrders(request): orders = Order.objects.order_by('-request_date') template = loader.get_template('orders.html') context = { 'orders' : orders, } return HttpResponse(template.render(context,request)) orders.html: <!DOCTYPE html> <html> <head> <link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet"> </head> <body> <main> <table> <thead> <th>Agent</th> <th>Customer</th> <th>Date</th> <th>Price</th> </thead> <tbody> {% for x in orders %} <td>{{ x.agent_name }}</td> <td>{{ x.customer.name }}</td> <td>{{ x.request_date }}</td> <td>{{ x.price }}</td> </tr> {% endfor %} </tbody> </table> </main> </body> </html> I want to add some filtering capability to select the sales agent I'm interested in. I don't know how to deal with relationships to check the sales agent. I tried the agent_name property: <!DOCTYPE html> <html> <head> <link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet"> </head> <body> <main> <div class="filters"> <form action="" method="GET"> <div class="row"> <div class="col-xl-3"> <label>Agent:</label> <input type="text" class="form-control" placeholder="Name" name="name" {% if name %} value = "{{ name }}" {% endif %}> </div> <div class="col-xl-2" style="padding-top: 2%;"> <button type="submit" class="btn custom-btn">Filter</button> </div> </div> </form> </div> <p/> <table> <thead> <th>Agent</th> <th>Customer</th> <th>Date</th> <th>Price</th> </thead> <tbody> {% for x in orders %} <td>{{ x.agent_name }}</td> <td>{{ x.customer.name }}</td> <td>{{ x.request_date }}</td> <td>{{ x.price }}</td> </tr> {% endfor %} </tbody> </table> </main> </body> </html> My view turns to: def GetOrders(request): orders = Order.objects.order_by('-request_date') com = request.GET.get('name') if com != '' and com is not None: orders = orders.filter(Q(agent_name__icontains=com)) template = loader.get_template('orders.html') context = { 'orders' : orders, } return HttpResponse(template.render(context,request)) But I cannot use it as a filter criterium because it is not a real model field and I get a FieldError: Cannot resolve keyword 'agent_name' into field
class Assignment (models.Model): assig_year = models.PositiveSmallIntegerField() customer = models.ForeignKey("Customer", on_delete=models.CASCADE) sales_agent = models.ForeignKey("Agent", on_delete=models.CASCADE) class Meta: #unique key year + customer constraints = [ UniqueConstraint( fields=['assig_year', 'customer'], name='Primary_Key_Assignment' ) ] class Order (models.Model): assignment = models.ForeignKey(Assignment, on_delete=models.RESTRICT, related_name="orders") request_date = models.DateField() price = models.DecimalField(max_digits=10, decimal_places=2) The reason being: In the comments, you were worried about data duplication and ER loops. But you already had ER spaghetti and duplication by having a customer connection on both Order and Assignment. While there's nothing wrong with that, it's also somewhat redundant given the constraint that the same sales agent will handle all the customer's orders. With the above proposed change, we remove the customer FK from Order and instead add an FK to Assignment, and we keep the FK from Assignment to Customer. Data duplication is eliminated and ER spaghetti is eliminated (since the dependency chain is now linear): Order -> Assignment -> Customer Additionally, the view you need can now be syntactically much simpler: def GetOrders(request): com = request.GET.get('name') if com != '' and com is not None: # This is the slightly more expensive but maybe more readable version: assignments = Assignment.objects.filter(sales_agent__name__icontains=com) orders = Order.objects.filter(assignment__in=assignments) # I haven't verified this attempt at a DB optimized version, but I think it's on par: orders = Order.objects.select_related('assignment').filter(assignment__sales_agent__name__icontains=com) else: orders = Order.objects.none() # Or however you want to handle the case of there being no assignments/orders for a given sales agent template = loader.get_template('orders.html') context = { 'orders' : orders, } return HttpResponse(template.render(context,request)) As a bonus, if you ever need a view to see orders per year, for example, you get that for free now: simply invoke assignment.orders, which works for both sales agents and customers, as both of those entities use Assignment as the middle man.
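For the bonus point, a short usage sketch (the some_customer / some_agent variables are illustrative) of the orders related name and the linear chain:

# all orders of one customer in a given year, via the related name
assignment = Assignment.objects.get(customer=some_customer, assig_year=2024)
orders_2024 = assignment.orders.all()

# or all orders handled by one agent in that year, following the chain backwards
agent_orders = Order.objects.filter(
    assignment__sales_agent=some_agent,
    assignment__assig_year=2024,
)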
2
1
78,972,238
2024-9-11
https://stackoverflow.com/questions/78972238/celery-tasks-with-psycopg-programmingerror-the-last-operation-didnt-produce-a
I'm working on aproject in which I have A PostgreSQL 16.2 database A Python 3.12 backend using psycopg 3.2.1 and psycopg_pool 3.2.2. Celery for handling asynchronous tasks. The celery tasks uses the database pool through the following code: import os from psycopg_pool import ConnectionPool from contextlib import contextmanager PG_USERNAME = os.getenv('PG_USERNAME') if not PG_USERNAME: raise ValueError(f"Invalid postgres username") PG_PASSWORD = os.getenv('PG_PASSWORD') if not PG_PASSWORD: raise ValueError(f"Invalid postgres pass") PG_HOST = os.getenv('PG_HOST') if not PG_HOST: raise ValueError(f"Invalid postgres host") PG_PORT = os.getenv('PG_PORT') if not PG_PORT: raise ValueError(f"Invalid postgres port") # Options used to prevent closed connections # conn_options = f"-c statement_timeout=1800000 -c tcp_keepalives_idle=30 -c tcp_keepalives_interval=30" conninfo = f'host={PG_HOST} port={PG_PORT} dbname=postgres user={PG_USERNAME} password={PG_PASSWORD}' connection_pool = ConnectionPool( min_size=4, max_size=100, conninfo=conninfo, check=ConnectionPool.check_connection, #options=conn_options, ) @contextmanager def get_db_conn(): conn = connection_pool.getconn() try: yield conn finally: connection_pool.putconn(conn) And an example celery task would be @app.task(bind=True) def example_task(self, id): with get_db_conn() as conn: try: with conn.cursor(row_factory=dict_row) as cursor: test = None cursor.execute('SELECT * FROM test WHERE id = %s', (id,)) try: test = cursor.fetchone() except psycopg.errors.ProgrammingError: logger.warning(f'Test log msg') conn.rollback() return cursor.execute("UPDATE test SET status = 'running' WHERE id = %s", (id,)) conn.commit() # Some processing... # Fetch another resource needed cursor.execute('SELECT * FROM test WHERE id = %s', (test['resource_id'],)) cursor.fetchone() # Update the entry with the result cursor.execute(""" UPDATE test SET status = 'done', properties = %s WHERE id = %s """, (Jsonb(properties), id)) conn.commit() except Exception as e: logger.exception(f'Error: {e}') conn.rollback() with conn.cursor(row_factory=dict_row) as cursor: # Update status to error with exception information cursor.execute(""" UPDATE test SET status = 'error', error = %s WHERE id = %s """, (Jsonb({'error': str(e), 'stacktrace': traceback.format_exc()}), webpage_id)) conn.commit() The code works most of the times, but sometimes, when multiple tasks of the same type are being launched, I'm getting some errors of type psycopg.ProgrammingError: the last operation didn't produce a result on the second fetchone() call. Meanwhile, on the database I can see the following warning WARNING: there is already a transaction in progress I suspect there might be some problems with the way I'm working with connections, but I cannot find were. As far as I know, once get_db_conn() is called that connection is not available for other tasks, so in theory there cannot be multiple tasks using the same connection, and therefore there should be no transaction already in progress when performing the second fetchone() call. The resource exists, as every other task can access it, so that's not the problem.
If both the main target row of test as well as the additional one selected based on its test.resource_id foreign key aren't shareable, lock them. Otherwise, concurrent workers are likely bumping into each other, taking on the processing of the same row and altering its fields and the fields of the one its associated with through resource_id, at unpredictable points between subsequent steps of this operation. Regular explicit locks get automatically released on commit/rollback so to keep your conn.commit() after updating target's status field, you can use session-level advisory locks to let them last multiple transactions instead: @app.task(bind=True) def example_task(self, id): with get_db_conn() as conn: try: with conn.cursor(row_factory=dict_row) as cursor: test = None cursor.execute("""SELECT *, pg_advisory_lock_shared(resource_id) FROM test WHERE id = %s AND pg_try_advisory_lock(id) """, (id,)) try: test = cursor.fetchone() #if it fails here, someone else is already processing this `id` #if it waits, someone else was altering the row behind `resource_id` #in the 2nd case, it's best to wait for them to finish except psycopg.errors.ProgrammingError: logger.warning(f'Test log msg') conn.rollback() return cursor.execute("""UPDATE test SET status = 'running' WHERE id = %s """, (id,)) conn.commit() # Some processing... # Fetch another resource needed cursor.execute("""SELECT * FROM test WHERE id = %s /*AND probably more conditions here*/ """, (test['resource_id'],)) cursor.fetchone() # Update the entry with the result cursor.execute("""UPDATE test SET status = 'done' , properties = %s WHERE id = %s RETURNING pg_advisory_unlock(id) , pg_advisory_unlock(resource_id) """, (Jsonb(properties), id)) conn.commit() except Exception as e: logger.exception(f'Error: {e}') conn.rollback() with conn.cursor(row_factory=dict_row) as cursor: # Update status to error with exception information cursor.execute("""UPDATE test SET status = 'error', error = %s WHERE id = %s RETURNING pg_advisory_unlock(id) , pg_advisory_unlock(resource_id) """, (Jsonb({'error': str(e), 'stacktrace': traceback.format_exc()}), webpage_id)) conn.commit() The problem might also be in the part of the code that you did not share, where you pick and assign the id you pass to example_task(self, id) from outside. If that's more or less how workers find their next task: select id from test where status='ready' order by priority , created_at limit 1; Then there's nothing stopping two workers from picking the same one if the second one grabs it before the first one has the chance to conn.commit() its status change. You could acquire the lock right there and make all following calls skip to the nearest row that's still free: select id from test where status='ready' order by priority , created_at for update skip locked--this limit 1; But to hold on to a lock like that you'd have to only conn.commit() once you're done with the whole operation, without running commits between its sub-steps - otherwise you'd lose the lock along the way. To guard the rest of the operation beyond the nearest .commit(), use that lock to secure the query against immediate collisions but also add an advisory lock that survives multiple transactions. Advisory locks don't offer a skip locked but it can be emulated with a recurisve cte (walks the id's and stops at the first one that doesn't return false on locking attempt). 
Or, you can just look up which id's are already advisory-locked according to pg_locks.objid and exclude those select id, pg_try_advisory_lock(id) from test where status='ready' and id not in(select objid from pg_locks where locktype='advisory') order by priority , created_at for update skip locked limit 1; You could also get rid of that entirely and look up free id's straight from the worker: @app.task(bind=True) def example_task(self, id): with get_db_conn() as conn: try: with conn.cursor(row_factory=dict_row) as cursor: test = None cursor.execute("""WITH find_free_id_and_lock_it AS (UPDATE test SET status='running' WHERE id=(SELECT id FROM test WHERE status='ready' ORDER BY priority , created_at FOR UPDATE SKIP LOCKED LIMIT 1) RETURNING *) ,lock_resource AS (SELECT *, pg_advisory_lock_shared(id) FROM test WHERE id=(SELECT resource_id FROM find_free_id_and_lock_it) FOR SHARE/*waits if necessary*/) SELECT target.* , resource.*--replace with alias list FROM find_free_id_and_lock_it AS target JOIN lock_resource AS resource ON target.resource_id=resource.id; """, (id,)) try: test = cursor.fetchone() except psycopg.errors.ProgrammingError: logger.warning(f'Test log msg') conn.rollback() return conn.commit() # Some processing... cursor.execute("""UPDATE test SET status = 'done' , properties = %s WHERE id = %s RETURNING pg_advisory_unlock(resource_id) """, (Jsonb(properties), id)) conn.commit() except Exception as e: logger.exception(f'Error: {e}') conn.rollback() with conn.cursor(row_factory=dict_row) as cursor: # Update status to error with exception information cursor.execute("""UPDATE test SET status = 'error', error = %s WHERE id = %s RETURNING pg_advisory_unlock(resource_id) """, (Jsonb({'error': str(e), 'stacktrace': traceback.format_exc()}), webpage_id)) conn.commit() Both target and resource lookups, adequate locks as well as the status update are all applied within a single query and transaction. Depending on what you do in # Some processing... and how long that takes, it might be preferable to acquire the shared lock on resource later, just in time, like it was done originally.
4
3
78,987,685
2024-9-15
https://stackoverflow.com/questions/78987685/abnormal-interpolating-spline-with-odd-number-of-points
I have implemented a cubic B-Spline interpolation, not approximation, as follows: import numpy as np import math from geomdl import knotvector def cox_de_boor( d_, t_, k_, knots_): if (d_ == 0): if ( knots_[k_] <= t_ <= knots_[k_+1]): return 1.0 return 0.0 denom_l = (knots_[k_+d_] - knots_[k_]) left = 0.0 if (denom_l != 0.0): left = ((t_ - knots_[k_]) / denom_l) * cox_de_boor(d_-1, t_, k_, knots_) denom_r = (knots_[k_+d_+1] - knots_[k_+1]) right = 0.0 if (denom_r != 0.0): right = ((knots_[k_+d_+1] - t_) / denom_r) * cox_de_boor(d_-1, t_, k_+1, knots_) return left + right def interpolate( d_, P_, n_, ts_, knots_ ): A = np.zeros((n_, n_)) for i in range(n_): for j in range(n_): A[i, j] = cox_de_boor(d_, ts_[i], j, knots_) control_points = np.linalg.solve(A, P_) return control_points def create_B_spline( d_, P_, t_, knots_): sum = Vector() # just a vector class. for i in range( len(P_) ): sum += P_[i] * cox_de_boor(d_, t_, i, knots_) return sum def B_spline( points_ ): d = 3 # change to 2 for quadratic. P = np.array( points_ ) n = len( P ) ts = np.linspace( 0.0, 1.0, n ) knots = knotvector.generate( d, n ) # len = n + d + 1 control_points = interpolate( d, P, n, ts, knots) crv_pnts = [] for i in range(10): t = float(i) / 9 crv_pnts.append( create_B_spline(d, control_points, t, knots) ) return crv_pnts control_points = [ [float(i), math.sin(i), 0.0] for i in range(4) ] cps = B_spline( control_points ) Result is OK when interpolating 4 points (control vertices): Result is NOT OK when interpolating 5 points (control vertices): Result is OK when interpolating 6 points (control vertices): and so on... I noticed two things: The spline does not interpolate properly when the number of control vertices is odd. The spline interpolates properly with any number of vertices when the degree becomes quadratic. So, if you change d = 2, in the B_spline function, the curve will interpolate properly for odd and even number of control vertices. The cox de boor function is correct and according to the mathematical expression, but with a small alteration on the 2nd conditional expression t[i] <= t **<=** t[i+1] (see my previous SO question here for more details). Also, I used numpy to solve the linear system, which also works as expected. Other than np.linalg.solve, I have tried np.linalg.lstsq but it returns the same results. I honestly do not know where to attribute this abnormal behaviour. What could cause this issue?
The abnormal behavior described is very interesting and its cause is subtle. Basically, the root cause of this behavior is the Cox-de Boor function implementation with the <= fix. In my answer to the OP's previous SO question I give a detailed explanation of why this fix is wrong. In short, this implementation constructs basis functions that are "almost correct" except for inner-knot values. At the intervals outside the inner knots, the basis functions give correct values - including the t=1 value, which was the reason for the fix in the first place (to prevent a zero row in the interpolation matrix). Given this knowledge, we can explain how this mysterious abnormal behavior came to be. The main things to notice are the arrays ts and knots, which are passed as arguments to the interpolate() function within the function B_spline(). The interpolate() function evaluates the basis functions for all ts. If no inner value of ts is equal to an inner knot, then all these values will be correct and the result will be correct. However, if there exists any ts[i] == inner_knot, then the value there will be wrong and will ruin the result. Now, all that remains is to explain the two things noted in the question. First, notice that the knots are clamped and equally spaced - this is what knotvector.generate( d, n ) does. So for d=3 the knots are: [0,0,0,0, 1/(n-3), 2/(n-3),..., (n-4)/(n-3), 1,1,1,1]. For example, for n=5, the knot vector is [0,0,0,0,0.5,1,1,1,1]. More generally, the inner knots (including 0 and 1) can be computed with the command knots[d:-d] = np.linspace(0, 1, n-d+1). The ts are computed with the command np.linspace( 0.0, 1.0, n ) (i.e., [0, 1/(n-1), 2/(n-1),..., (n-2)/(n-1), 1]). "The spline does not interpolate properly when the number of control vertices is odd": This is because when n is odd and d=3, n-d+1 is also odd and 0.5 must then be one of the knots. By the same argument, 0.5 must also be one of the ts. Therefore, we get a wrong answer! Interestingly, if n is even and d=3, one can prove that there is no common factor between n-3 and n-1 (two odd numbers that differ by 2 cannot share a factor greater than 1), and therefore no inner value of ts is equal to an inner knot and the result will be correct! For example, for n=6, ts = [0, 1/5, 2/5, 3/5, 4/5, 1] and knots = [.., 0, 1/3, 2/3, 1, ..], so no fraction i/3 is equal to a fraction j/5. "The spline interpolates properly with any number of vertices when the degree becomes quadratic": This is because when d=2 the inner knots will be np.linspace(0, 1, n-1) (i.e., [0,0,0, 1/(n-2), 2/(n-2),..., (n-3)/(n-2), 1,1,1]) and the ts will be np.linspace(0, 1, n). Since there is no common factor between n-2 and n-1, we won't have i/(n-2) == j/(n-1), so no ts value ever equals an inner knot and the result will be correct! I believe this explains this interesting abnormal interpolation behavior.
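A small script (not part of the original answer) makes the parity argument easy to check numerically: it flags whether any interpolation parameter coincides with a strictly inner knot for clamped, uniform, cubic knot vectors.

import numpy as np

def ts_hits_inner_knot(n, d=3):
    ts = np.linspace(0.0, 1.0, n)                   # interpolation parameters
    inner = np.linspace(0.0, 1.0, n - d + 1)[1:-1]  # strictly inner knots
    return any(np.isclose(t, k) for t in ts for k in inner)

for n in range(4, 12):
    print(n, ts_hits_inner_knot(n))  # True exactly for odd n >= 5, matching the observed failures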
2
1
78,982,732
2024-9-13
https://stackoverflow.com/questions/78982732/extracting-data-from-two-nested-columns-in-one-dataframe
I have a pandas dataframe that contains transactions. A transaction is either booked as a payment, or a ledger_account_booking. A single transaction can have multiple payments and/or multiple ledger account bookings. Therefore, my columns payments and ledger_account_bookings contain a list of dicts, where the number of lists in a dict can vary. A small example dataframe looks as follows: transaction_id total_amount date payments ledger_account_bookings 4308 645,83 30-8-2024 [] [] 4254 291,67 2-7-2024 [] [{'ledger_id': '4265', 'amount': '291,67'}] 4128 847 14-2-2024 [{'payment_id': '4128', 'amount': '847.0'}] [] 4248 4286,98 25-6-2024 [{'payment_id': '4261', 'amount': '400.0'}, {'payment_id': '4262', 'amount': '11.0'}, {'payment_id': '4263', 'amount': '1668.51'}, {'payment_id': '4264', 'amount': '1868.54'}, {'payment_id': '4265', 'amount': '20.91'}, {'payment_id': '4266', 'amount': '2.21'}, {'payment_id': '4267', 'amount': '309.62'}] [{'ledger_id' : '4265', 'amount': '6,19'}] 4192 6130,22 24-4-2024 [{'payment_id': '4193', 'amount': '9.68'}] [{'ledger_id': '4222', 'amount':'2106.0'}, {'ledger_id': '4222','amount': '4014.54'}] 4090 1158,98 25-1-2024 [{'id': '4110','amount': '16.22'}, {'id': '4111', 'amount': '84.0'}, {'id': '4112', 'amount': '41.99'}, {'id': '4113, 'amount': '9.11',} {'id': '4114', 'amount': '10.0'}, {'id': '4115', 'amount': '997.16'}] [{'ledger_id': '4231', 'amount': '-0.32'}, {'ledger_id': '4231', 'amount': '-0.18'}] What I want is that every dict in one of the columns payments or ledger_account_bookings becomes a row in my dataframe. Expected result would look something like this: transaction_id total_amount date payment_id payment_amount ledger_id ledger_amount 4308 645,83 30-8-2024 NaN NaN NaN NaN 4254 291,67 2-7-2024 Nan NaN 4265 291,67 4128 847 14-2-2024 4128 847.0 NaN NaN 4248 4286,98 25-6-2024 4261 400.0 NaN NaN 4248 4286,98 25-6-2024 4262 11.0 NaN Nan 4248 4286,98 25-6-2024 4263 1668.51 NaN Nan 4248 4286,98 25-6-2024 4264 1868.4 NaN Nan 4248 4286,98 25-6-2024 4265 20.91 NaN Nan 4248 4286,98 25-6-2024 4266 2.21 NaN Nan 4248 4286,98 25-6-2024 4267 309.62 NaN Nan 4248 4286,98 25-6-2024 NaN NaN 4265 6,19 4192 6130,22 24-4-2024 4193 9.68 NaN NaN 4192 6130,22 24-4-2024 NaN NaN 4222 2106 4192 6130,22 24-4-2024 NaN NaN 4222 4014.54 4090 1158,98 25-1-2024 4110 16.22 NaN NaN 4090 1158,98 25-1-2024 4111 84.0 NaN NaN 4090 1158,98 25-1-2024 4112 41.99 NaN NaN 4090 1158,98 25-1-2024 4113 9.11 NaN NaN 4090 1158,98 25-1-2024 4114 10.0 NaN NaN 4090 1158,98 25-1-2024 4115 997.16 NaN NaN 4090 1158,98 25-1-2024 NaN NaN 4231 0.32 4090 1158,98 25-1-2024 NaN NaN 4231 0.18 For example, transaction 4248 has 7 payments and 1 ledger account booking. So the resulting dataframe would have 8 rows. transaction 4192 has 2 payments and 1 ledger account bookings, so resulting df should have 3 rows. I know how to achieve this for one column, for example by using the following code: df_explode = df_financial_mutations.explode(['payments']) #Normalize the json column into separate columns df_normalized = json_normalize(df_explode['payments']) #Add prefix to the columns that were 'exploded' df_normalized = df_normalized.add_prefix('payments_') The problem is, I don't know how to do it for two columns. If I would call explode on ledger_account_bookings again, the result becomes murky since I already have exploded the payments column, and therefore 'duplicate' rows were introduced into my dataframe. 
So, where a payment was exploded, I now have two rows with exactly the same values in the ledger_account_bookings column. When I explode again, this time on the other column, those 'duplicates' are also exploded, so my dataframe ends up containing rows of data that don't make sense. How do I solve such a problem where I need to explode two columns at once? I've seen Efficient way to unnest (explode) multiple list columns in a pandas DataFrame, but unfortunately the lists of payments and ledger_account_bookings can be of different sizes and are dynamic as well (e.g. it's possible to have 0-5 payments and 0-5 ledger_account_bookings; there is no fixed length). Any help would be greatly appreciated.
Universal solution for processing data by tuples: #in tuple set original and new columns names prefixes cols = [('payments', 'payments'),('ledger_account_bookings', 'ledger')] L = [] for col, prefix in cols: df_explode = df_financial_mutations.pop(col).explode() #Normalize the json column into separate columns df_normalized = pd.json_normalize(df_explode).set_index(df_explode.index) #Add prefix to the columns that were 'exploded' #Remove missing values if all NaNs per rows df_normalized = df_normalized.add_prefix(f'{prefix}_').dropna(how='all') L.append(df_normalized) #join original columns to concanecated list of DataFrames out = df_financial_mutations.join(pd.concat(L)).reset_index(drop=True) #clean data - replacement missing values by another column out['payments_id'] = out['payments_id'].fillna(out.pop('payments_payment_id')) #renaming columns names out = out.rename(columns={'ledger_ledger_id':'ledger_id'}) print (out) transaction_id total_amount date payments_amount payments_id \ 0 4308 645,83 30-8-2024 NaN NaN 1 4254 291,67 2-7-2024 NaN NaN 2 4128 847 14-2-2024 847.0 4128 3 4248 4286,98 25-6-2024 400.0 4261 4 4248 4286,98 25-6-2024 11.0 4262 5 4248 4286,98 25-6-2024 1668.51 4263 6 4248 4286,98 25-6-2024 1868.54 4264 7 4248 4286,98 25-6-2024 20.91 4265 8 4248 4286,98 25-6-2024 2.21 4266 9 4248 4286,98 25-6-2024 309.62 4267 10 4248 4286,98 25-6-2024 NaN NaN 11 4192 6130,22 24-4-2024 9.68 4193 12 4192 6130,22 24-4-2024 NaN NaN 13 4192 6130,22 24-4-2024 NaN NaN 14 4090 1158,98 25-1-2024 16.22 4110 15 4090 1158,98 25-1-2024 84.0 4111 16 4090 1158,98 25-1-2024 41.99 4112 17 4090 1158,98 25-1-2024 9.11 4113 18 4090 1158,98 25-1-2024 10.0 4114 19 4090 1158,98 25-1-2024 997.16 4115 20 4090 1158,98 25-1-2024 NaN NaN 21 4090 1158,98 25-1-2024 NaN NaN ledger_id ledger_amount 0 NaN NaN 1 4265 291,67 2 NaN NaN 3 NaN NaN 4 NaN NaN 5 NaN NaN 6 NaN NaN 7 NaN NaN 8 NaN NaN 9 NaN NaN 10 4265 6,19 11 NaN NaN 12 4222 2106.0 13 4222 4014.54 14 NaN NaN 15 NaN NaN 16 NaN NaN 17 NaN NaN 18 NaN NaN 19 NaN NaN 20 4231 -0.32 21 4231 -0.18 I suggest process each column separately and join to original data - solution processing each column separately: #extract column payments by pop and expoding df_explode = df_financial_mutations.pop('payments').explode() #Normalize the json column into separate columns #Rewrite new index by original values from exploded DataFrame df_normalized = pd.json_normalize(df_explode).set_index(df_explode.index) #Add prefix to the columns that were 'exploded' df_normalized = df_normalized.add_prefix('payments_') #Rewrite missing values from payments_id by payments_payment_id and remove column df_normalized['payments_id'] = (df_normalized['payments_id'] .fillna(df_normalized.pop('payments_payment_id'))) #Remove missing values if all NaNs per rows df_normalized = df_normalized.dropna(how='all') print (df_normalized) payments_amount payments_id 2 847.0 4128 3 400.0 4261 3 11.0 4262 3 1668.51 4263 3 1868.54 4264 3 20.91 4265 3 2.21 4266 3 309.62 4267 4 9.68 4193 5 16.22 4110 5 84.0 4111 5 41.99 4112 5 9.11 4113 5 10.0 4114 5 997.16 4115 df_explode1 = df_financial_mutations.pop('ledger_account_bookings').explode() #Normalize the json column into separate columns #Rewrite new index by original values from exploded DataFrame df_normalized1 = pd.json_normalize(df_explode1).set_index(df_explode1.index) #Add prefix to the columns that were 'exploded' df_normalized1 = df_normalized1.add_prefix('ledger_') #Remove missing values if all NaNs per rows df_normalized1 = df_normalized1.dropna(how='all') 
print (df_normalized1) ledger_ledger_id ledger_amount 1 4265 291,67 3 4265 6,19 4 4222 2106.0 4 4222 4014.54 5 4231 -0.32 5 4231 -0.18 out = df_financial_mutations.join(pd.concat([df_normalized, df_normalized1])) print (out) transaction_id total_amount date payments_amount payments_id \ 0 4308 645,83 30-8-2024 NaN NaN 1 4254 291,67 2-7-2024 NaN NaN 2 4128 847 14-2-2024 847.0 4128 3 4248 4286,98 25-6-2024 400.0 4261 3 4248 4286,98 25-6-2024 11.0 4262 3 4248 4286,98 25-6-2024 1668.51 4263 3 4248 4286,98 25-6-2024 1868.54 4264 3 4248 4286,98 25-6-2024 20.91 4265 3 4248 4286,98 25-6-2024 2.21 4266 3 4248 4286,98 25-6-2024 309.62 4267 3 4248 4286,98 25-6-2024 NaN NaN 4 4192 6130,22 24-4-2024 9.68 4193 4 4192 6130,22 24-4-2024 NaN NaN 4 4192 6130,22 24-4-2024 NaN NaN 5 4090 1158,98 25-1-2024 16.22 4110 5 4090 1158,98 25-1-2024 84.0 4111 5 4090 1158,98 25-1-2024 41.99 4112 5 4090 1158,98 25-1-2024 9.11 4113 5 4090 1158,98 25-1-2024 10.0 4114 5 4090 1158,98 25-1-2024 997.16 4115 5 4090 1158,98 25-1-2024 NaN NaN 5 4090 1158,98 25-1-2024 NaN NaN ledger_ledger_id ledger_amount 0 NaN NaN 1 4265 291,67 2 NaN NaN 3 NaN NaN 3 NaN NaN 3 NaN NaN 3 NaN NaN 3 NaN NaN 3 NaN NaN 3 NaN NaN 3 4265 6,19 4 NaN NaN 4 4222 2106.0 4 4222 4014.54 5 NaN NaN 5 NaN NaN 5 NaN NaN 5 NaN NaN 5 NaN NaN 5 NaN NaN 5 4231 -0.32 5 4231 -0.18 #Create default index if necessary out = out.reset_index(drop=True)
3
3
78,979,548
2024-9-12
https://stackoverflow.com/questions/78979548/create-list-column-out-of-column-names
I have a simple pl.DataFrame with a number of columns that only contain boolean values. import polars as pl df = pl.DataFrame( {"s1": [True, True, False], "s2": [False, True, True], "s3": [False, False, False]} ) shape: (3, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ s1 ┆ s2 ┆ s3 β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ bool ┆ bool ┆ bool β”‚ β•žβ•β•β•β•β•β•β•β•ͺ═══════β•ͺ═══════║ β”‚ true ┆ false ┆ false β”‚ β”‚ true ┆ true ┆ false β”‚ β”‚ false ┆ true ┆ false β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ I need to add another column that contains lists of varying length. A list in any individual row should contain the column name where the values of the columns S1, s2, and s3 have a True value. Here's what I am actually looking for: shape: (3, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ s1 ┆ s2 ┆ s3 β”‚ list β”‚ β”‚ --- ┆ --- ┆ --- β”‚ --- β”‚ β”‚ bool ┆ bool ┆ bool β”‚ list[str] β”‚ β•žβ•β•β•β•β•β•β•β•ͺ═══════β•ͺ═══════║══════════════║ β”‚ true ┆ false ┆ false β”‚ ["s1"] β”‚ β”‚ true ┆ true ┆ false β”‚ ["s1", "s2"] β”‚ β”‚ false ┆ true ┆ false β”‚ ["s2"] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
List API You could build a list of when/then expressions and then remove the nulls. df.with_columns( pl.concat_list( pl.when(col).then(pl.lit(col)) for col in df.columns ) .list.drop_nulls() .alias("list") ) shape: (3, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ s1 ┆ s2 ┆ s3 ┆ list β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ bool ┆ bool ┆ bool ┆ list[str] β”‚ β•žβ•β•β•β•β•β•β•β•ͺ═══════β•ͺ═══════β•ͺ══════════════║ β”‚ true ┆ false ┆ false ┆ ["s1"] β”‚ β”‚ true ┆ true ┆ false ┆ ["s1", "s2"] β”‚ β”‚ false ┆ true ┆ false ┆ ["s2"] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Unpivot If "raw performance" is of concern, it can be done at the frame level. You can reshape with .unpivot() and .group_by to create the lists. (df.with_row_index() .unpivot(index="index") .filter(pl.col.value) .group_by("index", maintain_order=True) .agg(pl.col.variable.alias("list")) ) shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ index ┆ list β”‚ β”‚ --- ┆ --- β”‚ β”‚ u32 ┆ list[str] β”‚ β•žβ•β•β•β•β•β•β•β•ͺ══════════════║ β”‚ 0 ┆ ["s1"] β”‚ β”‚ 1 ┆ ["s1", "s2"] β”‚ β”‚ 2 ┆ ["s2"] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ As we've maintained the order, we can horizontally .concat() to combine them. pl.concat( [ df, df.with_row_index() .unpivot(index="index") .filter(pl.col.value) .group_by("index", maintain_order=True) .agg(pl.col.variable.alias("list")) .drop("index") # optional ], how = "horizontal" ) shape: (3, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ s1 ┆ s2 ┆ s3 ┆ list β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ bool ┆ bool ┆ bool ┆ list[str] β”‚ β•žβ•β•β•β•β•β•β•β•ͺ═══════β•ͺ═══════β•ͺ══════════════║ β”‚ true ┆ false ┆ false ┆ ["s1"] β”‚ β”‚ true ┆ true ┆ false ┆ ["s1", "s2"] β”‚ β”‚ false ┆ true ┆ false ┆ ["s2"] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Timing As a basic comparison. bigger_df = df.sample(2_000_000, with_replacement=True) Name Time concat_list 1.4s unpivot + concat 0.2s
3
1
78,991,877
2024-9-16
https://stackoverflow.com/questions/78991877/checking-count-discrepancies-from-one-date-to-another-in-dataframe
Suppose I have this data data = {'site': ['ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY'], 'usage_date': ['2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01'], 'item_id': ['COR30013', 'PAC10463', 'COR30018', 'PAC10958', 'PAC11188', 'PAC20467', 'COR20275', 'PAC20702', 'COR30020', 'PAC10137', 'PAC10445', 'COR30029', 'COR30025', 'PAC10457', 'COR10746', 'PAC11136', 'COR10346', 'PAC11050', 'PAC11132', 'PAC11135', 'PAC10964', 'COR10439', 'PAC11131', 'COR10695', 'PAC11128', 'COR10433', 'COR10432', 'PAC11051', 'PAC10137', 'COR10695', 'COR30029', 'COR10346', 'COR10432', 'COR10746', 'COR10439', 'COR10433', 'COR20275', 'COR30020', 'COR30018', 'PAC11135', 'PAC10964', 'PAC11136', 'PAC10445', 'PAC11050', 'PAC11132', 'PAC20467', 'PAC11188', 'PAC10463', 'PAC20702', 'PAC10457', 'PAC10958', 'PAC11051', 'PAC11128', 'PAC11131'], 'start_count':[400.0, 96000.0, 315.0, 45000.0, 2739.0, 2232.0, 2800.0, 283500.0, 280.0, 200000.0, 96000.0, 481.0, 600.0, 18000.0, 400.0, 5500.0, 1200.0, 5850.0, 5500.0, 5500.0, 36000.0, 600.0, 5500.0, 550.0, 300.0, 4800.0, 1800.0, 1800.0, 108000.0, 500.0, 481.0, 1200.0, 1800.0, 400.0, 600.0, 3300.0, 2800.0, 455.0, 315.0, 5500.0, 36000.0, 5500.0, 96000.0, 5400.0, 5500.0, 2232.0, 2739.0, 96000.0, 283500.0, 18000.0, 72000.0, 1800.0, 300.0, 5500.0], 'received_total': [0.0, 0.0, 0.0, 0.0, 3168.0, 0.0, 0.0, 0.0, 280.0, 0.0, 0.0, 0.0, 0.0, 0.0, 400.0, 0.0, 1800.0, 0.0, 0.0, 0.0, 0.0, 400.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3600.0, 0.0, 0.0, 0.0, 1800.0, 2400.0, 400.0, 400.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1800.0, 0.0, 0.0, 3168.0, 0.0, 0.0, 0.0, 45000.0, 3600.0, 0.0, 0.0], 'end_count': [240.0, 84000.0, 280.0, 27000.0, 3432.0, 2160.0, 2000.0, 90000.0, 455.0, 108000.0, 96000.0, 437.0, 500.0, 9000.0, 600.0, 5500.0, 1950.0, 4950.0, 5500.0, 5500.0, 36000.0, 600.0, 5500.0, 550.0, 270.0, 3300.0, 1200.0, 4200.0, 192000.0, 450.0, 350.0, 1890.0, 3600.0, 600.0, 525.0, 2835.0, 1600.0, 420.0, 187.0, 5500.0, 36000.0, 5500.0, 96000.0, 6750.0, 5500.0, 1992.0, 1881.0, 84000.0, 58500.0, 9000.0, 85500.0, 3300.0, 252.0, 5500.0]} df_sample = pd.DataFrame(data=data) For each item_id we need to check if the current (9/1/2019) end_count is greater than the previous (8/25/2019) end_count and we have a currennt received_total of 0 meaning there is a bad count. 
I have this code that works def check_end_count(df): l = [] for loc, df_loc in df.groupby(['site', 'item_id']): try: ending_count_previous = df_loc['end_count'].iloc[0] ending_count_current = df_loc['end_count'].iloc[1] received_total_current = df_loc['received_total'].iloc[1] if ending_count_current > ending_count_previous and received_total_current == 0: l.append("Ending count discrepancy") l.append("Ending count discrepancy") else: l.append("Good Row") l.append("Good Row") except: l.append("Nothing to compare") df['ending_count_check'] = l return df df_sample = check_end_count(df_sample) But its not that pythonic. Also, in my case I have to check for a series of dates of which I have this tuple list print(sliding_window_dates[:3]) [array(['2019-08-25', '2019-09-01'], dtype=object), array(['2019-09-01', '2019-09-08'], dtype=object), array(['2019-09-08', '2019-09-15'], dtype=object)] So what I am trying to do is the following on the larger dataframe df_list = [] for date1, date2 in sliding_window_dates: df_check = df_test[(df_test['usage_date'] == date1) | (df_test['usage_date'] == date2)] for loc, df_loc in df_check.groupby(['sort_center', 'item_id']): df_list.append(check_end_count(df_loc)) But I again I am doing this in two for loops so I assume there must be a better way to do this. Any suggestions are appreciated.
Whenever I see a problem that requires comparison across dates with particular properties I immediately think "what is the correct dataframe index?". In this case, using a good index and some restructuring makes the problem pretty easy. I did indexed = df_sample.set_index(["site", "item_id", "usage_date"]).unstack("usage_date") and, with current = '2019-09-01' previous = '2019-08-25' We can word the condition almost 1-to-1 with the problem statement: if the current ... end_count is greater than the previous ... end_count and we have a current received_total of 0 ... there is a bad count. bad_rows = (indexed[("end_count", current)] > indexed[("end_count", previous)]) & (indexed[("received_total", current)] == 0) indexed[bad_rows] This gives: start_count received_total end_count usage_date 2019-08-25 2019-09-01 2019-08-25 2019-09-01 2019-08-25 2019-09-01 site item_id ACY PAC10137 200000.0 108000.0 0.0 0.0 108000.0 192000.0 Now, for the multi-date case, you can do this: from itertools import pairwise for previous, current in pairwise(sorted(indexed.columns.levels[1])): indexed[("bad", current)] = (indexed[("end_count", current)] > indexed[("end_count", previous)]) & (indexed[("received_total", current)] == 0) To get back a dataframe in your original long form (one row per site, item and date, with a new bad column), stack the usage_date level again: df_with_bad_rows = indexed.stack("usage_date").reset_index()
3
2
78,976,838
2024-9-12
https://stackoverflow.com/questions/78976838/python-selenium-issue-with-google-chrome-version-128-0-6613-138-profil-screen
I recently ran into the problem that all my Python scripts which utilize the Selenium module are broken, apparently due to a Google Chrome update. It seems like Selenium/Google Chrome always asks to select a user profile no matter what options are given inside the Python script; e.g. "user-data-dir" has no effect at all. This occurs even in headless mode. The original script was able to download a PDF in the background without the "user-data-dir" option. Has anyone run into the same issue and found a solution to it? Below is example code which worked fine before. from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.chrome.options import Options options = Options() options.add_argument('--headless') #options.add_argument(r"--user-data-dir=C:\Users\xxx\AppData\Local\Google\Chrome\User Data\Profile 1") # this option does not make any difference options.add_argument('log-level=3') options.add_experimental_option("prefs", { "download.default_directory": directory, "download.prompt_for_download": False, "download.directory_upgrade": True, "plugins.always_open_pdf_externally": True, }) driver = webdriver.Chrome(options=options) driver.get("http://www.google.com") #original code used a different website The goal: use Selenium in Python without having to select a user profile.
This is not an actual answer to the Selenium issue, but for now my solution is to abandon Selenium and use Playwright instead, which can be used in just the same way I need.
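For reference, a minimal Playwright sketch of the same headless PDF download (the URL and the element that triggers the download are placeholders):

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    context = browser.new_context(accept_downloads=True)
    page = context.new_page()
    page.goto("https://example.com")        # placeholder URL
    with page.expect_download() as dl_info:
        page.click("a.pdf-link")            # placeholder selector that starts the download
    dl_info.value.save_as("downloads/file.pdf")
    browser.close()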
2
1
78,992,288
2024-9-17
https://stackoverflow.com/questions/78992288/parameter-problems-of-iloc-functions-in-pandas
I just started to learn pandas, and the call df.iloc[[1][0]] (df is a pd.DataFrame with a shape of (60935, 54)) appeared in some code. I understand that df.iloc[[1][0]] should return a row of df, but how should [[1][0]] be understood? Why does iloc[] seem to accept two adjacent lists? How is the inside of iloc[] handled? This is obviously not a row-and-column index. In addition, I found that when the second number is not 0 or -1, there is an index out-of-bounds error. Why is this? mydict = [{'a': 1, 'b': 2, 'c': 3, 'd': 4}, {'a': 100, 'b': 200, 'c': 300, 'd': 400}, {'a': 1000, 'b': 2000, 'c': 3000, 'd': 4000}] df = pd.DataFrame(mydict) print(df.iloc[[0][-1]].shape) output: (4,) print(df.iloc[[0][0]].shape) output: (4,) print(df.iloc[[0]].shape) output: (1, 4) print(df.iloc[[0][1]].shape) output: IndexError: list index out of range print(type(df.iloc[[0]])) output: <class 'pandas.core.frame.DataFrame'> print(type(df.iloc[[0][0]])) output: <class 'pandas.core.series.Series'>
You misunderstood the syntax [0][-1] and so on. [0] is a list of length 1 containing the number 0. [1] is likewise a list of length 1 containing the number 1. [0][-1] means: "take the last element of the list [0]", which is equivalent to .iloc[0]. [1][0] means: "take the first element of the list [1]", which is equivalent to .iloc[1]. [0][1] means: "take the second element of the list [0]", which does not exist because the list [0] does not have a second element. The error you get is not coming from .iloc; it is coming from the wrong list indexing [0][1].
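A quick way to see this with the df from the question:

[0][-1]       # -> 0, so df.iloc[[0][-1]] is just df.iloc[0] (a Series, shape (4,))
[1][0]        # -> 1, so df.iloc[[1][0]] is just df.iloc[1]
[0][1]        # -> IndexError: list index out of range (raised before iloc is even called)
df.iloc[[0]]  # here a real one-element list reaches iloc -> a 1-row DataFrame, shape (1, 4)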
2
1
78,992,321
2024-9-17
https://stackoverflow.com/questions/78992321/getting-argument-of-typing-optional-in-python
I would like to create a typed DataFrame from a Pydantic BaseModel class, let's call it MyModel, that has Optional fields. As I create multiple instances of MyModel, some will have Optional fields with None values, and if I initialize a DataFrame with such rows, they may have inconsistent column dtypes. I'd thus like to cast Optional[TypeX] to TypeX, e.g.: import pydantic import pandas as pd import numpy as np from typing import Optional class MyModel(pydantic.BaseModel): thisfield: int thatfield: Optional[str] ... col_types = {kk: ff.annotation for kk, ff in MyModel.model_fields.items()} pd.DataFrame(np.empty(0, dtype=[tuple(tt) for tt in col_types.items()])) This fails with TypeError: Cannot interpret 'typing.Optional[str]' as a data type. I need a function mapping Optional[X] -> X. Any suggestions other than using repr with regex?
As long as Optional[X] is equivalent to Union[X, None]: from typing import Union, get_args, get_origin def get_optional_arg(typ: type) -> type | None: # make sure typ is really Optional[...], otherwise return None if get_origin(typ) is Union: args = get_args(typ) if len(args) == 2 and args[1] is type(None): return args[0] col_types = { k: get_optional_arg(f.annotation) or f.annotation for k, f in MyModel.model_fields.items() }
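A sketch of the same idea extended to PEP 604 unions (str | None), whose origin on Python 3.10+ is types.UnionType rather than typing.Union:

import types
from typing import Union, get_args, get_origin

def unwrap_optional(typ: type) -> type:
    if get_origin(typ) in (Union, types.UnionType):
        args = [a for a in get_args(typ) if a is not type(None)]
        if len(args) == 1:   # exactly Optional[X], not a wider union
            return args[0]
    return typ               # not an Optional: return unchanged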
2
1
78,987,693
2024-9-15
https://stackoverflow.com/questions/78987693/polars-how-to-find-out-the-number-of-columns-in-a-polars-expression
I'm building a package on top of Polars, and one of the functions looks like this def func(x: IntoExpr, y: IntoExpr): ... The business logic requires that x can include multiple columns, but y must be a single column. What should I do to check and validate this?
You can use the polars.selectors.expand_selector function which lets you evaluate selected columns using either selectors or simple expressions. Note that the drawback here is that you can’t pass in arbitrary expressions, or else the evaluation fails (see the final examples). import polars as pl import polars.selectors as cs from polars.selectors import expand_selector data = { "a1": [1, 2, 3], "a2": [4, 5, 6], "b1": [7, 8, 9], "b2": [10, 11, 12], } df = pl.DataFrame(data) print( expand_selector(df, cs.exclude('b1', 'b2')), # ('a1', 'a2') expand_selector(df, cs.starts_with('b')), # ('b1', 'b2') expand_selector(df, cs.matches('(a|b)1$')), # ('a1', 'b1') # use with expressions expand_selector(..., strict=False) expand_selector(df, pl.exclude('a1', 'a2'), strict=False), # ('b1', 'b2') expand_selector(df, pl.col('b1'), strict=False), # ('b1', ) expand_selector(df, pl.all(), strict=False), # ('a1', 'a2', 'b1', 'b2') sep='\n' ) # anything past an arbitrary selection expression will fail print(expand_selector(df, pl.all() + 1, strict=False)) # Traceback (most recent call last): # File "/home/cameron/.vim-excerpt", line 26, in <module> # expand_selector(df, pl.all() + 1, strict=False), # File "/home/cameron/.pyenv/versions/dutc-site/lib/python3.10/site-packages/polars/selectors.py", line 190, in expand_selector # raise TypeError(msg) # TypeError: expected a selector; found <Expr ['[(*) + (dyn int: 1)]'] at 0x7F835F943D30> instead.
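Applied to the validation described in the question, a hypothetical helper (the names are placeholders, and the caveat about arbitrary expressions still applies) could look like:

def validate_xy(df: pl.DataFrame, x, y) -> tuple[tuple[str, ...], str]:
    x_cols = expand_selector(df, x, strict=False)   # x may expand to many columns
    y_cols = expand_selector(df, y, strict=False)   # y must expand to exactly one
    if len(y_cols) != 1:
        raise ValueError(f"y must resolve to a single column, got {list(y_cols)}")
    return x_cols, y_cols[0]

validate_xy(df, cs.starts_with("a"), pl.col("b1"))         # (('a1', 'a2'), 'b1')
validate_xy(df, cs.starts_with("a"), cs.starts_with("b"))  # raises ValueError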
2
2
78,993,284
2024-9-17
https://stackoverflow.com/questions/78993284/how-to-improve-pandas-df-processing-time-on-different-combinations-of-calculated
I got a big dataset, something like 100 K or 1 mil rows, and I got a function that makes vector calculations that take 0.03 sec. Now all my columns before the process can be the same for every iteration. I want to calculate the 2^n combinations of conditions I make. So currently it will take me 2^n * 0.03 s to run it all by looping length and run the function. Is there a better way to improve performance and run all these possibilities vectorized or parallel(not Python CPU parallel. It help by little). The only thing I think of is to create unique per iteration column and make regex calculations, but then the df will be too big. In this example process where each processing takes 0.01ms, the output is: Total number of combinations: 1023. Total time to evaluate all combinations: 20.73 seconds import pandas as pd import numpy as np from itertools import combinations import time # Generate a larger DataFrame with 100,000 rows data = { 'Height': np.random.uniform(150, 200, size=100000), 'Weight': np.random.uniform(50, 100, size=100000), 'Gender': np.random.choice(['Male', 'Female'], size=100000), 'Age': np.random.randint(18, 70, size=100000) } df = pd.DataFrame(data) # Define vectorized functions for each condition with dynamic values def calculate_bmi(height, weight): height_m = height / 100 return weight / (height_m ** 2) def condition_bmi(df, min_bmi, max_bmi): bmi = calculate_bmi(df['Height'], df['Weight']) return (min_bmi <= bmi) & (bmi <= max_bmi) def condition_age(df, min_age, max_age): return (min_age <= df['Age']) & (df['Age'] <= max_age) def condition_height(df, min_height, max_height): return (min_height <= df['Height']) & (df['Height'] <= max_height) def condition_weight(df, min_weight, max_weight): return (min_weight <= df['Weight']) & (df['Weight'] <= max_weight) def condition_gender(df, gender): return df['Gender'] == gender # List of possible dynamic values for each condition (with only 2 values each) dynamic_values = { 'BMI is within the normal range': [(18.5, 24.9), (25.0, 29.9)], 'Age is within the healthy range': [(18, 30), (31, 45)], 'Height is within the normal range': [(150, 160), (161, 170)], 'Weight is within the normal range': [(50, 60), (61, 70)], 'Gender is specified': ['Male', 'Female'] } # Function to create combinations of conditions with dynamic values def create_condition_combinations(dynamic_values): condition_combinations = [] for condition_name, values in dynamic_values.items(): if isinstance(values[0], tuple): # For range conditions for value in values: condition_combinations.append((condition_name, value)) else: # For categorical conditions for value in values: condition_combinations.append((condition_name, value)) return condition_combinations # Generate all possible combinations of conditions and dynamic values def generate_all_combinations(condition_combinations): all_combinations = [] for r in range(1, len(condition_combinations) + 1): for combo in combinations(condition_combinations, r): all_combinations.append(combo) return all_combinations condition_combinations = create_condition_combinations(dynamic_values) all_combinations = generate_all_combinations(condition_combinations) # Calculate the total number of combinations total_combinations = len(all_combinations) print(f"Total number of combinations: {total_combinations}") # Apply a combination of conditions def evaluate_combination(df, combo): combined_condition = pd.Series([True] * len(df)) for condition_name, value in combo: if condition_name == 'BMI is within the normal range': min_bmi, max_bmi 
= value combined_condition &= condition_bmi(df, min_bmi, max_bmi) elif condition_name == 'Age is within the healthy range': min_age, max_age = value combined_condition &= condition_age(df, min_age, max_age) elif condition_name == 'Height is within the normal range': min_height, max_height = value combined_condition &= condition_height(df, min_height, max_height) elif condition_name == 'Weight is within the normal range': min_weight, max_weight = value combined_condition &= condition_weight(df, min_weight, max_weight) elif condition_name == 'Gender is specified': gender = value combined_condition &= condition_gender(df, gender) return combined_condition # Measure time to run all combinations start_time = time.time() for combo in all_combinations: combo_start_time = time.time() evaluate_combination(df, combo) combo_end_time = time.time() combo_elapsed_time = combo_end_time - combo_start_time end_time = time.time() total_elapsed_time = end_time - start_time print(f"Total time to evaluate all combinations: {total_elapsed_time:.2f} seconds")
I think the best method without touching too much your code is by using polars. When I tested your code I was at : 7.52 seconds and now I am at : 0.45 seconds import polars as pl import numpy as np from itertools import combinations import time # Generate similar data with Polars data = { 'Height': np.random.uniform(150, 200, size=100000), 'Weight': np.random.uniform(50, 100, size=100000), 'Gender': np.random.choice(['Male', 'Female'], size=100000), 'Age': np.random.randint(18, 70, size=100000) } df = pl.DataFrame(data) # Define vectorized functions for each condition def calculate_bmi(df): height_m = df['Height'] / 100 return df['Weight'] / (height_m ** 2) def condition_bmi(df, min_bmi, max_bmi): bmi = calculate_bmi(df) return df.with_columns(((bmi >= min_bmi) & (bmi <= max_bmi)).alias('bmi_condition')) def condition_age(df, min_age, max_age): return df.with_columns(((df['Age'] >= min_age) & (df['Age'] <= max_age)).alias('age_condition')) def condition_height(df, min_height, max_height): return df.with_columns(((df['Height'] >= min_height) & (df['Height'] <= max_height)).alias('height_condition')) def condition_weight(df, min_weight, max_weight): return df.with_columns(((df['Weight'] >= min_weight) & (df['Weight'] <= max_weight)).alias('weight_condition')) def condition_gender(df, gender): return df.with_columns((df['Gender'] == gender).alias('gender_condition')) # List of possible dynamic values for each condition dynamic_values = { 'BMI is within the normal range': [(18.5, 24.9), (25.0, 29.9)], 'Age is within the healthy range': [(18, 30), (31, 45)], 'Height is within the normal range': [(150, 160), (161, 170)], 'Weight is within the normal range': [(50, 60), (61, 70)], 'Gender is specified': ['Male', 'Female'] } # Generate all possible combinations of conditions with dynamic values def create_condition_combinations(dynamic_values): condition_combinations = [] for condition_name, values in dynamic_values.items(): for value in values: condition_combinations.append((condition_name, value)) return condition_combinations condition_combinations = create_condition_combinations(dynamic_values) # Generate all possible combinations of conditions def generate_all_combinations(condition_combinations): all_combinations = [] for r in range(1, len(condition_combinations) + 1): for combo in combinations(condition_combinations, r): all_combinations.append(combo) return all_combinations all_combinations = generate_all_combinations(condition_combinations) total_combinations = len(all_combinations) print(f"Total number of combinations: {total_combinations}") # Evaluate a combination of conditions def evaluate_combination(df, combo): # Start with True for all rows df = df.with_columns(pl.lit(True).alias('combined_condition')) for condition_name, value in combo: if condition_name == 'BMI is within the normal range': min_bmi, max_bmi = value df = condition_bmi(df, min_bmi, max_bmi) df = df.with_columns((df['combined_condition'] & df['bmi_condition']).alias('combined_condition')) elif condition_name == 'Age is within the healthy range': min_age, max_age = value df = condition_age(df, min_age, max_age) df = df.with_columns((df['combined_condition'] & df['age_condition']).alias('combined_condition')) elif condition_name == 'Height is within the normal range': min_height, max_height = value df = condition_height(df, min_height, max_height) df = df.with_columns((df['combined_condition'] & df['height_condition']).alias('combined_condition')) elif condition_name == 'Weight is within the normal range': min_weight, 
max_weight = value df = condition_weight(df, min_weight, max_weight) df = df.with_columns((df['combined_condition'] & df['weight_condition']).alias('combined_condition')) elif condition_name == 'Gender is specified': gender = value df = condition_gender(df, gender) df = df.with_columns((df['combined_condition'] & df['gender_condition']).alias('combined_condition')) return df['combined_condition'] # Measure the time to run all combinations start_time = time.time() for combo in all_combinations: evaluate_combination(df, combo) end_time = time.time() total_elapsed_time = end_time - start_time print(f"Total time to evaluate all combinations: {total_elapsed_time:.2f} seconds")
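For reference, the per-condition helper columns are not strictly needed: the same checks can be combined into a single polars expression, which avoids materialising intermediate columns. A minimal sketch of that idea, assuming the same df, dynamic_values and combination structure as above (the function name is mine):

import polars as pl

def combo_to_expr(combo):
    # build one boolean expression for the whole combination
    expr = pl.lit(True)
    for condition_name, value in combo:
        if condition_name == 'BMI is within the normal range':
            bmi = pl.col('Weight') / ((pl.col('Height') / 100) ** 2)
            expr = expr & bmi.is_between(value[0], value[1])
        elif condition_name == 'Age is within the healthy range':
            expr = expr & pl.col('Age').is_between(value[0], value[1])
        elif condition_name == 'Height is within the normal range':
            expr = expr & pl.col('Height').is_between(value[0], value[1])
        elif condition_name == 'Weight is within the normal range':
            expr = expr & pl.col('Weight').is_between(value[0], value[1])
        elif condition_name == 'Gender is specified':
            expr = expr & (pl.col('Gender') == value)
    return expr

# boolean column for one combination, equivalent to evaluate_combination above
mask = df.select(combo_to_expr(all_combinations[0]).alias('combined_condition'))['combined_condition']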
3
3
78,992,244
2024-9-17
https://stackoverflow.com/questions/78992244/how-can-i-subclass-logging-logger-without-breaking-filename-in-logging-format
I am trying to write a custom logging.Logger subclass which is mostly working, but I run into issues when trying to use a logging.Formatter that includes the interpolated value %(filename) in the custom format, it prints the filename where my custom subclass is, rather than the filename of the code that called the logging function. I've found a number of tutorials for subclassing Logger but none of them address the effects this has on the filename interpolation. Is there a straightforward solution to this without having to override large sections of logging.Logger? Sample code defining my custom logger: #------------------------------------- # custom_logger.py #------------------------------------- import logging import io class CustomLogger(logging.Logger): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.setLevel(logging.DEBUG) # create the record format formatter = logging.Formatter(fmt = "%(filename)s - %(message)s") # create the handler self.stream = io.StringIO() handler = logging.StreamHandler(self.stream) handler.setFormatter(formatter) self.addHandler(handler) def debug(self, msg, *args, **kwargs): super().debug(msg, *args, **kwargs) # do some other stuff ... #------------------------------------- # test.py #------------------------------------- from custom_logger import CustomLogger import logging logging.setLoggerClass(CustomLogger) myLog = logging.getLogger("myLog") myLog.debug("hello world") print(myLog.stream.getvalue()) Expected output: >>> test.py - hello world Actual output: >>> custom_logger.py - hello world
This is already answered in https://stackoverflow.com/a/59492341/2138700 The solution is to use the stacklevel keyword argument when calling the super().debug in your custom_logger code. Here is the relevant section from the documentation The third optional keyword argument is stacklevel, which defaults to 1. If greater than 1, the corresponding number of stack frames are skipped when computing the line number and function name set in the LogRecord created for the logging event. This can be used in logging helpers so that the function name, filename and line number recorded are not the information for the helper function/method, but rather its caller. So the modified code should be import logging import io class CustomLogger(logging.Logger): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.setLevel(logging.DEBUG) # create the record format formatter = logging.Formatter(fmt = "%(filename)s - %(message)s") # create the handler self.stream = io.StringIO() handler = logging.StreamHandler(self.stream) handler.setFormatter(formatter) self.addHandler(handler) def debug(self, msg, *args, **kwargs): super().debug(msg, *args, stacklevel=2, **kwargs) Now the output will be test.py - hello world
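If you later add more wrapper layers, or want callers to be able to override the value, a variation (my sketch, not part of the original answer) is to pass stacklevel through kwargs with a default, inside the same CustomLogger class:

def debug(self, msg, *args, **kwargs):
    # skip this wrapper frame by default, but let callers pass their own stacklevel
    kwargs.setdefault("stacklevel", 2)
    super().debug(msg, *args, **kwargs)
    # do some other stuff ...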
4
1
78,992,094
2024-9-16
https://stackoverflow.com/questions/78992094/access-class-properties-or-methods-from-within-a-commands-command
I'm building a Discord bot. The bot should store some information into some internal variables to be accessed at a later time. To do so I'm structuring it as a class (as opposed to many examples where the commands are outside a class definition). However, I discovered that when you use the @commands.command(name='test') decorator, the method becomes a kind of "static" method and no longer receives the object as first input. Given this, is there any way I can access class properties (such as an_instance_property in the example below) and/or class methods (such as a_class_method in the example below)? If this is the wrong approach, what could be a better approach for a bot with an internal state? import discord from discord.ext import commands with open('TOKEN', 'r') as f: TOKEN = f.read() class mybot(commands.Bot): def __init__(self): intents = discord.Intents.default() super().__init__(command_prefix="!", intents=intents) self.add_command(self.test) self.an_instance_property = [] # <---- def a_class_method(x): # <---- return x @commands.command(name='test') async def test(ctx, *args): # How can I access self.an_instance_property from here? # How can I call self.a_class_method from here? return bot = mybot() bot.run(TOKEN)
My recommendation is that you avoid defining commands inside your bot class. There is a more appropriate way to do this, which is using cogs/extensions. See this topic where commands are created in a separate file (extension) and only loaded into the bot class: https://stackoverflow.com/a/78166456/14307703 Also know that the Context object always carries the instance of your bot. So you can access all the properties of the class like this: class MyBot(commands.Bot): def __init__(self): intents = discord.Intents.default() super().__init__(command_prefix="!", intents=intents) self.add_command(self.test) self.an_instance_property = [] # <---- def a_class_method(x): # <---- return x @commands.command(name='test') async def test(ctx, *args): # How can I access self.an_instance_property from here? print(ctx.bot.an_instance_property) # <---- return
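For completeness, here is a minimal sketch of the cog approach recommended above (the file name my_cog.py and the loading hook are illustrative assumptions, based on discord.py 2.x):

# my_cog.py
from discord.ext import commands

class MyCog(commands.Cog):
    def __init__(self, bot):
        self.bot = bot

    @commands.command(name='test')
    async def test(self, ctx, *args):
        # inside a cog the command is a real method, so both of these work
        print(self.bot.an_instance_property)
        print(ctx.bot.an_instance_property)

async def setup(bot):
    await bot.add_cog(MyCog(bot))

# in MyBot, load it before the bot connects:
#     async def setup_hook(self):
#         await self.load_extension('my_cog')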
3
2
78,992,395
2024-9-17
https://stackoverflow.com/questions/78992395/how-to-pandas-fast-nested-for-loop-for-non-numeric-columns
How can I speed up a pandas nested for loop over "non-numeric" columns? This for loop is way too slow: for i in range(len(df1[column_A])): for j in range(len(df2[column_A])): if df1[column_A][i] == df2[column_A][j]: df1[column_B][i] = df2[column_B][j] else: pass Is there another way to do it with pandas itself or with other libraries? UPDATE: the main goal is: input: df1: name rpm power 0 John 1500 high+ 1 Mary 1400 high- 2 Sally 300 low- 3 Doe 700 medium- 4 July 1000 medium+ df2: name age 0 Peter 77 1 Sally 44 2 Micky 22 3 Sally 34 4 July 50 5 Bob 20 required output (I want it in df2): name age rpm power 0 Peter 77 0 NA 1 Sally 44 300 low- 2 Micky 22 0 NA 3 Sally 34 300 low- 4 July 50 1000 medium+ 5 Bob 20 0 NA I also opened an issue in the official pandas GitHub: https://github.com/pandas-dev/pandas/issues/59824
The nested loop you provided results in O(n^2) complexity, making it slow for larger datasets, and it loops over the full range for both i and j, which is unnecessary. Instead you can use pd.merge: import pandas as pd # Merge file1 and file2 on column A merged_df = pd.merge(file1, file2, on='column_A') # assuming it is a pandas dataframe # Update file1_column_B with matched values from file2 file1_column_B = merged_df['column_B_y'] The pd.merge() function merges two DataFrames (file1 and file2) based on a common column (column_A). Also, by default, pd.merge() performs an inner merge, which means only rows with matching values in column_A are included in the resulting DataFrame. The time complexity of pd.merge is roughly O(n + m) in the best case, where n is the number of rows in the left DataFrame (file1) and m is the number of rows in the right DataFrame (file2). However, in the worst-case scenario (e.g., when there are many duplicate values in the merge column), the time complexity can be O(n × m). You can also use NumPy argsort + searchsorted: import numpy as np order = np.argsort(file2['column_A']) match = np.searchsorted(file2['column_A'][order], file1['column_A']) file1_column_B = file2['column_B'][order[match]] Time complexity of the above: argsort: O(m log m) searchsorted: O(n log m) Total: O(m log m + n log m)
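Applied to the concrete frames from the question (joining on name, which is how I read the desired output, so treat this as a sketch), a left merge plus fillna gives the asker's table; pandas shows NaN where the question shows NA:

import pandas as pd

df1 = pd.DataFrame({'name': ['John', 'Mary', 'Sally', 'Doe', 'July'],
                    'rpm': [1500, 1400, 300, 700, 1000],
                    'power': ['high+', 'high-', 'low-', 'medium-', 'medium+']})
df2 = pd.DataFrame({'name': ['Peter', 'Sally', 'Micky', 'Sally', 'July', 'Bob'],
                    'age': [77, 44, 22, 34, 50, 20]})

out = df2.merge(df1, on='name', how='left')    # keep every row of df2
out['rpm'] = out['rpm'].fillna(0).astype(int)  # the question wants 0 where there is no match
print(out)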
2
2
78,991,975
2024-9-16
https://stackoverflow.com/questions/78991975/get-an-a-tag-content-using-beautifulsoup
I'd like to get the content of an <a> tag using BeautifulSoup (version 4.12.3) in Python. I have this code and HTML example: h = """ <a id="0"> <table> <thead> <tr> <th scope="col">Person</th> <th scope="col">Most interest in</th> <th scope="col">Age</th> </tr> </thead> <tbody> <tr> <th scope="row">Chris</th> <td>HTML tables</td> <td>22</td> </tr> </table> </a> """ test = bs4.BeautifulSoup(h) test.find('a') # find_all, select => same results But it only returns: <a id="0"> </a> I would expect the content inside <table> to appear between the <a> tags. (I don't know if it is common to wrap a table inside an <a> tag, but the HTML code I'm trying to read is like that.) I need to parse the table content from the <a> tag since I need to link the id="0" to the content of the table. How can I achieve that? How can I get the <a> tag content with the <table> tag?
Specify explicitly the parser you want to use (use html.parser). By default it will use the "best" parser available - I presume lxml, which doesn't parse this document well: import bs4 h = """ <a id="0"> <table> <thead> <tr> <th scope="col">Person</th> <th scope="col">Most interest in</th> <th scope="col">Age</th> </tr> </thead> <tbody> <tr> <th scope="row">Chris</th> <td>HTML tables</td> <td>22</td> </tr> </table> </a> """ test = bs4.BeautifulSoup(h, "html.parser") # <-- define parser here out = test.find("a") print(out) Prints: <a id="0"> <table> <thead> <tr> <th scope="col">Person</th> <th scope="col">Most interest in</th> <th scope="col">Age</th> </tr> </thead> <tbody> <tr> <th scope="row">Chris</th> <td>HTML tables</td> <td>22</td> </tr> </tbody></table> </a>
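To see the difference for yourself, reuse h and the import from the snippet above and parse the markup with both parsers (this assumes lxml is installed and is what your default resolved to):

for parser in ("html.parser", "lxml"):
    soup = bs4.BeautifulSoup(h, parser)
    print(parser, "->", soup.find("a"))
# html.parser keeps the <table> nested inside the <a>; with lxml you should
# see the empty <a id="0"></a> from the question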
2
1
78,984,046
2024-9-14
https://stackoverflow.com/questions/78984046/partialdependencedisplay-from-estimator-plots-having-lines-with-0-values
Need to evaluate the two-way interaction between two variables after fitting a regressor model. Used PartialDependenceDisplay.from_estimator to plot, but the contour lines inside the plot all have value 0. Not sure what might cause this. Checked the data and model and there are no problems while loading the model and data. Checked the other two-variable combinations and they have the same issue. import matplotlib.pyplot as plt from sklearn.inspection import partial_dependence, PartialDependenceDisplay model = load_model(model_path) model_features = model.feature_name_ fig, ax = plt.subplots(figsize=(10,5)) X = training_data[model_features] PartialDependenceDisplay.from_estimator(model, X, features=[('temperature', 'speed')], ax=ax, n_jobs=-1, grid_resolution=20)
Most probably your contour values are all < 0.005. Contour labels are formatted as "%2.2f" and there appears to be no documented way of changing this format. The only workaround I could think of is to retrieve the labels and their values and replace the label texts: import matplotlib.pyplot as plt from matplotlib.text import Text import numpy as np from sklearn.datasets import make_friedman1 from sklearn.ensemble import GradientBoostingRegressor from sklearn.inspection import PartialDependenceDisplay X, y = make_friedman1() clf = GradientBoostingRegressor(n_estimators=10).fit(X, y) pdd = PartialDependenceDisplay.from_estimator(clf, X, [0, (0, 1)]) for c in pdd.axes_[0][1].get_children(): if isinstance(c, Text): try: label_value = float(c.get_text()) except ValueError: continue idx = np.argmin(abs(pdd.contours_[0][1].levels - label_value)) c.set_text(f'{pdd.contours_[0][1].levels[idx]:g}') Update 1 The above method doesn't work if all existing labels are identical. A somewhat unreliable quick and dirty workaround would be to rely on the fact that the label texts are added to the Axes in ascending order. The first and last level are not labelled. This leads to the following example: import matplotlib.pyplot as plt from matplotlib.text import Text from sklearn.datasets import make_friedman1 from sklearn.ensemble import GradientBoostingRegressor from sklearn.inspection import PartialDependenceDisplay X, y = make_friedman1(random_state=42) clf = GradientBoostingRegressor(n_estimators=10).fit(X, y) pdd = PartialDependenceDisplay.from_estimator(clf, X, [0, (0, 1)]) i = 1 for c in pdd.axes_[0][1].get_children(): if isinstance(c, Text) and c.get_text(): c.set_text(f'{pdd.contours_[0][1].levels[i]:g}') i += 1 Update 2 Another (reliable but still hacky) possibility is to overwrite the clabel function used by Scikit with your own version that uses an appropriate format specification. In order to get hold of this function you'll have to provide your own Axes instance to PartialDependenceDisplay.from_estimator: import matplotlib.pyplot as plt from sklearn.datasets import make_friedman1 from sklearn.ensemble import GradientBoostingRegressor from sklearn.inspection import PartialDependenceDisplay fig, axes = plt.subplots(ncols=2) original_clabel = axes[1].clabel def new_clabel(CS, **kwargs): del kwargs['fmt'] return original_clabel(CS, fmt='%2.5f', **kwargs) axes[1].clabel = new_clabel X, y = make_friedman1(random_state=42) clf = GradientBoostingRegressor(n_estimators=10).fit(X, y) pdd = PartialDependenceDisplay.from_estimator(clf, X, [0, (0, 1)], ax=axes)
3
2
78,984,405
2024-9-14
https://stackoverflow.com/questions/78984405/find-duplicate-group-of-rows-in-pandas-dataframe
How can I find duplicates of a group of rows inside of a DataFrame? Or in other words, how can I find the indices of a specific duplicated DataFrame inside of a larger DataFrame? The larger DataFrame: index 0 1 0 0 1 1 2 3 2 4 4 3 0 1 4 2 3 5 2 3 6 0 1 The specific duplicated DataFrame (or group of rows): index 0 1 0 0 1 1 2 3 Indices I am looking for: index 0 1 3 4 (Note that the indices of the duplicated DataFrame do not matter, only the values). import pandas as pd # larger DataFrame lrg_df = pd.DataFrame([[0, 1], [2, 3], [4, 4], [0, 1], [2, 3], [2, 3], [0, 1]]) # group of rows (i.e., duplicated DataFrame) dup_df = pd.DataFrame([[0, 1], [2, 3]]) # get indices of lrg_df that contain dup_df indcs = lrg_df[lrg_df == dup_df].index # Doesn't work of course
You need to check all combinations with a sliding window, using numpy.lib.stride_tricks.sliding_window_view to create a mask and extend the mask with numpy.convolve: import numpy as np from numpy.lib.stride_tricks import sliding_window_view as swv n = len(dup_df) mask = (swv(lrg_df, n, axis=0) == dup_df.to_numpy().T ).all((1,2)) out = lrg_df[np.convolve(mask, np.ones(n))>0] Output: 0 1 0 0 1 1 2 3 3 0 1 4 2 3 And if you want the indices: indices = lrg_df.index[np.convolve(mask, np.ones(n))>0] Output: Index([0, 1, 3, 4], dtype='int64') Intermediates: # swv(lrg_df, n, axis=0) == dup_df.to_numpy().T array([[[ True, True], [ True, True]], [[False, False], [False, False]], [[False, False], [False, False]], [[ True, True], [ True, True]], [[False, True], [False, True]], [[False, False], [False, False]]]) # mask array([ True, False, False, True, False, False])
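Wrapped up as a small helper (the function name is mine), with a guard for the case where the block to search for is longer than the frame:

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view as swv

def find_block_indices(lrg_df, dup_df):
    n = len(dup_df)
    if n == 0 or n > len(lrg_df):
        return lrg_df.index[:0]  # empty Index
    mask = (swv(lrg_df, n, axis=0) == dup_df.to_numpy().T).all((1, 2))
    return lrg_df.index[np.convolve(mask, np.ones(n)) > 0]

indices = find_block_indices(lrg_df, dup_df)  # Index([0, 1, 3, 4], dtype='int64')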
2
1
78,991,070
2024-9-16
https://stackoverflow.com/questions/78991070/column-manipulation-based-on-headline-values-within-rows
I have a Pandas dataframe with a column that contains different types of values and I want to create a new column out of it based on the information inside that column. Every few rows there is a kind of "headline" row that should define the values for the following rows until the next headline row, which then defines the values for the next rows, and so on. To understand better, here is an example: import pandas as pd data = {'AA': ['', '', '', 'V_525-124', 'gsdgsd', 'hdfjhdf', 'gsdhsdhsd', 'gsdgsd', 'V_535-623', 'hosdfjk', 'hjodfjh', 'hjsdfjo', 'V_563-534', 'hojhdfhjdf', 'hodfjhjdfj', 'hofoj', 'hkdfphdf']} df = pd.DataFrame(data) print(df) I want to create a new column BB that would look like this: import pandas as pd data = {'AA': ['', '', '', 'V_525-124', 'gsdgsd', 'hdfjhdf', 'gsdhsdhsd', 'gsdgsd', 'V_535-623', 'hosdfjk', 'hjodfjh', 'hjsdfjo', 'V_563-534', 'hojhdfhjdf', 'hodfjhjdfj', 'hofoj', 'hkdfphdf'], 'BB': ['', '', '', 'V_525-124', 'V_525-124', 'V_525-124', 'V_525-124', 'V_525-124', 'V_535-623', 'V_535-623', 'V_535-623', 'V_535-623', 'V_563-534', 'V_563-534', 'V_563-534', 'V_563-534', 'V_563-534']} df = pd.DataFrame(data) print(df) The number of rows under each "headline" varies, so the script should sort of check whether the next row is a headline-type row, then add the headline value to column BB and then move on down the table until a new headline is detected. I can only think of a for-loop with indices and if-statements, but I am sure Pandas offers a more elegant solution. The "headlines" all start with 'V_' if that helps.
You can use where and ffill (forward fill) without the need for loops: df['AA'].where(df['AA'].str.startswith('V_')).ffill().fillna('') str.startswith to identify rows where AA column starts with 'V_'. where to keep identified headline rows in BB column and set other rows to NaN. ffill to forward fill the last valid headline value down the column until the next headline is identified. fillna('') to replace remaining NaN values with empty strings import pandas as pd data = {'AA': ['', '', '', 'V_525-124', 'gsdgsd', 'hdfjhdf', 'gsdhsdhsd', 'gsdgsd', 'V_535-623', 'hosdfjk', 'hjodfjh', 'hjsdfjo', 'V_563-534', 'hojhdfhjdf', 'hodfjhjdfj', 'hofoj', 'hkdfphdf']} df = pd.DataFrame(data) df['BB'] = df['AA'].where(df['AA'].str.startswith('V_')).ffill().fillna('') print(df)
1
4
78,990,589
2024-9-16
https://stackoverflow.com/questions/78990589/how-to-merge-and-match-different-length-dataframes-lists-in-python-pandas
I have over 12 dataframes that I want to merge into a single dataframe, where row values match for each column (or null if they don't exist). Each dataframe has a different number of rows, but will never repeat values. The goal is to both identify common values and missing values. Eg.df1 id label 1 a-1 2 b-2 3 z-10 Eg.df2 id label 1 b-2 2 d-4 3 e-5 Eg.df3 id label 1 a-1 2 d-4 3 f-6 Desired output Eg.final id df1 df2 df3 1 a-1 null a-1 2 b-2 b-2 null 3 null d-4 d-4 4 null e-5 null 5 null null f-6 6 z-10 null null I've investigated join, but these approaches all seem to collapse values. insert seemed plausible, but I can't rectify the different row sizes/matching values to the same row. I want to maintain each df as its own column.
For multiple dataframes, you can use merge with reduce: from functools import reduce reduce(lambda left, right: pd.merge(left, right, on='label', how='outer'), map(lambda d: d[1].drop(columns='id') .assign(**{ f'df{d[0]}':lambda x: x['label'] }), enumerate(dfs, 1)) ).assign(id=lambda x:range(1, 1+len(x))).drop(columns='label') # this is just to drop the existing `label` and assign new `id` Out: df1 df2 df3 id 0 a-1 NaN a-1 1 1 b-2 b-2 NaN 2 2 NaN d-4 d-4 3 3 NaN e-5 NaN 4 4 NaN NaN f-6 5 5 z-10 NaN NaN 6 Another method is join on the index, like: renamed_dfs = list(map(lambda d: d[1].drop(columns='id') .assign(**{ f'df{d[0]}':lambda x: x['label'] }) .set_index('label'), enumerate(dfs, 1) )) renamed_dfs[0].join(renamed_dfs[1:], how='outer').reset_index(drop=True).reset_index() Output: index df1 df2 df3 0 0 a-1 NaN a-1 1 1 b-2 b-2 NaN 2 2 z-10 NaN NaN 3 3 NaN d-4 d-4 4 4 NaN e-5 NaN 5 5 NaN NaN f-6
3
1
78,987,052
2024-9-15
https://stackoverflow.com/questions/78987052/simplify-polygon-shapefile-to-reduce-file-size-in-python
I have a polygon shapefile which was vectorized from a raster layer (second image). This shapefile contains thousands of features with just one column which represent polygon level (ranging 1-5). The file size is massive and so I was trying to use Shapely's simplify tool to reduce the file size. The aim is to get to a similar result to ArcGIS pro's simplify polygon tool, which looks great and reduced the file size by 80% (third image). So far the Shapely tool had produced a pretty similar result but it's still not perfect because bordering polygons do not align great smoothly (see first image). I assume I need to use some sort of smoothing or interpolation function to sort it, but I can't seem to find one that works. the code I have so far: from shapely.geometry import shape, mapping from shapely.ops import transform from shapely.validation import make_valid import fiona import pyproj # Define the projections project_to_utm = pyproj.Transformer.from_crs("EPSG:4326", "EPSG:32633", always_xy=True).transform project_to_wgs84 = pyproj.Transformer.from_crs("EPSG:32633", "EPSG:4326", always_xy=True).transform # Open your shapefile with fiona.open(r"shapefile.shp", 'r') as source: schema = source.schema crs = source.crs features = list(source) # Reproject to UTM, simplify, then reproject back to WGS84 simplified_features = [] for feature in features: # Reproject to UTM geom = shape(feature['geometry']) geom_utm = transform(project_to_utm, geom) # Simplify in UTM simplified_geom_utm = geom_utm.simplify(tolerance=0.5, preserve_topology=True) # Fix any invalid geometries (self-intersections) if not simplified_geom_utm.is_valid: simplified_geom_utm = make_valid(simplified_geom_utm) # Reproject back to WGS84 simplified_geom_wgs84 = transform(project_to_wgs84, simplified_geom_utm) # Append the simplified and validated geometry simplified_features.append({ 'geometry': mapping(simplified_geom_wgs84), 'properties': feature['properties'] }) # Save the simplified polygons to a new shapefile with fiona.open(r"shapfile_simplified.shp", 'w', driver='ESRI Shapefile', schema=schema, crs=crs) as output: for feature in simplified_features: output.write(feature)
The problem you are facing is that shapely at the moment only can simplify geometries one by one. Because of this, gaps and slivers can/will appear between adjacent polygons because different points might be removed on the adjacent borders of the polygons. To avoid this, you need "topology-aware" simplification. This typically means that first the common boundaries are determined. Then these common boundary lines are simplified. Finally the polygons are reconstructed. This way the common boundaries will stay common without gaps and slivers. A library that supports this way of simplifying is topojson, via the toposimplify function. Because toposimplify requires all geometries to be in memory anyway, I used geopandas to load and save the geometries in the sample script as this will be easier and more efficient. Sample code: import geopandas as gpd import topojson input_path = r"shapefile.shp" output_path = r"shapefile_simplified.shp" # Read the shapefile input_gdf = gpd.read_file(input_path) # Reproject to UTM input_gdf = input_gdf.to_crs(32633) # Convert to topology, simplify and convert back to GeoDataFrame topo = topojson.Topology(input_gdf, prequantize=False) topo_simpl = topo.toposimplify(0.5) simpl_gdf = topo_simpl.to_gdf() # Fix any invalid geometries (self-intersections) simpl_gdf.geometry = simpl_gdf.geometry.make_valid() # Reproject back to WGS84 simpl_gdf = simpl_gdf.to_crs(4326) # Write to output file simpl_gdf.to_file(output_path)
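Since the goal is a smaller file, a quick on-disk size check after writing can be useful (kept simple on purpose; a shapefile is really several files - .shp, .dbf, .shx - so this only compares the geometry part):

import os

for path in (input_path, output_path):
    print(path, f"{os.path.getsize(path) / 1_000_000:.1f} MB")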
2
1
78,989,038
2024-9-16
https://stackoverflow.com/questions/78989038/why-do-imaginary-number-calculations-behave-differently-for-exponents-up-to-100
The imaginary number i, or j in Python, means the square root of -1. So, i to the 4th or any multiple of 4 should be positive 1. >>> (1j)**4 (1+0j) >>> (1j)**96 (1+0j) >>> (1j)**100 (1+0j) Up until this point all is good, but once we get past 100, Python just bugs out. For example: >>> (1j)**104 (1+7.842691359635767e-15j) This messed up my calculations so much in an unexpected way. What explains this? I'm using Python 3.8.19.
See the code: for integer exponents up to 100 it uses a different algorithm: complex_pow(PyObject *v, PyObject *w, PyObject *z) { ... // Check whether the exponent has a small integer value, and if so use // a faster and more accurate algorithm. if (b.imag == 0.0 && b.real == floor(b.real) && fabs(b.real) <= 100.0) { p = c_powi(a, (long)b.real); } else { p = _Py_c_pow(a, b); } ... The special algorithm for the small exponents uses exponentiation by squaring, just multiplying 1j with itself and with the multiplication results, which is accurate: c_powu(Py_complex x, long n) { Py_complex r, p; long mask = 1; r = c_1; p = x; while (mask > 0 && n >= mask) { if (n & mask) r = _Py_c_prod(r,p); mask <<= 1; p = _Py_c_prod(p,p); } return r; } The general algorithm for exponents larger than 100 and non-integer exponents uses trigonometry and ends up involving inaccurate floats: _Py_c_pow(Py_complex a, Py_complex b) { ... vabs = hypot(a.real,a.imag); len = pow(vabs,b.real); at = atan2(a.imag, a.real); phase = at*b.real; if (b.imag != 0.0) { len /= exp(at*b.imag); phase += b.imag*log(vabs); } r.real = len*cos(phase); r.imag = len*sin(phase); ... The at value for 1j mathematically is π/2, which isn't representable exactly as float. And then the phase and sin and thus the imaginary part of the result become inaccurate as well. See Why does math.cos(math.pi/2) not return zero? for more about that. (I just noticed you mentioned using Python 3.8.19, while I looked up the current version, 3.12.6. The code slightly differs, but irrelevantly so, and the explanation is correct for both.)
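You can reproduce both code paths from pure Python to see where the error enters (a small demonstration, not the actual CPython code):

import math

def powu(x, n):
    # exponentiation by squaring, mirroring c_powu; stays exact for 1j
    r, p = 1 + 0j, x
    while n:
        if n & 1:
            r *= p
        p *= p
        n >>= 1
    return r

print(powu(1j, 104))          # (1+0j)

# polar-form route, like _Py_c_pow
at = math.atan2(1.0, 0.0)     # float approximation of pi/2
phase = at * 104              # the rounding error gets multiplied by 104
print(complex(math.cos(phase), math.sin(phase)))  # tiny non-zero imaginary part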
5
13
78,980,518
2024-9-13
https://stackoverflow.com/questions/78980518/pandas-generate-columns-of-cumsums-based-on-variable-names-in-two-different-colu
I have a dataframe as follows: import pandas import numpy df = pandas.DataFrame( data= {'s1' :numpy.random.choice( ['A', 'B', 'C', 'D', 'E'], size=20 ), 's2' :numpy.random.choice( ['A', 'B', 'C', 'D', 'E'], size=20 ), 'val':numpy.random.randint(low=-1, high=3, size=20)}, ) I want to generate two result columns that provide a cumulative sum of a value (val) based on the categories in 's1' and/or 's2'. A category ('A', 'B', 'C', etc.) can appear in either s1 or s2. The first time a category appears in either s1 or s2, its value starts at zero; the next time it appears its value would be the sum of the previous values (val). An example dataframe could look as follows: s1 s2 val ans1 ans2 0 E B 1 0.0 0.0 1 E C 1 1.0 0.0 2 E A 2 2.0 0.0 3 B A 0 1.0 2.0 4 E B 1 4.0 1.0 5 B C 1 2.0 1.0 I can generate the correct answer columns (ans1 and ans2 - corresponding to the s1 and s2 columns) as follows: temp={} df['ans1'] = numpy.nan df['ans2'] = numpy.nan for idx, row in df.iterrows(): if row['s1'] in temp: df.loc[idx,'ans1'] = temp[ row['s1'] ] temp[ row['s1'] ] = temp[ row['s1'] ] + row['val'] else: temp[ row['s1'] ] = row['val'] df.loc[idx,'ans1'] = 0 if row['s2'] in temp: df.loc[idx,'ans2'] = temp[ row['s2'] ] temp[ row['s2'] ] = temp[ row['s2'] ] + row['val'] else: temp[ row['s2'] ] = row['val'] df.loc[idx,'ans2'] = 0 Using 'temp' as a dictionary to hold the running totals of each category (A-E), I can get the two answer columns... What I can't do is find a solution to this without iterating over each row of the dataframe. I don't have an issue in the case with only s1 - where I can use .groupby().cumsum().shift(1) and get the correct values in the correct rows - but I cannot find a solution where there are two sets s1 and s2 (or more, as I have multiple sensors to track), so I am hoping there is a more general, vectorised solution that will work.
What you want is a shifted cumulated sum after flattening the input dataset. Use melt, groupby.transform with shift+cumsum, then restore the original shape with pivot df[['ans1', 'ans2']] = (df .melt('val', ['s1', 's2'], ignore_index=False).sort_index(kind='stable') .assign(S=lambda x: x.groupby('value')['val'].transform(lambda x: x.shift(fill_value=0).cumsum())) .pivot(columns='variable', values='S') ) NB. the operation is applied in the lexicographic order of the columns (here s1 is before s2), not the original order of the columns. If you need a custom order you must use ordered categoricals. order = ['s1', 's2'] df[['ans1', 'ans2']] = (df .melt('val', ['s1', 's2'], ignore_index=False) .assign(variable=lambda x: pd.Categorical(x['variable'], categories=order, ordered=True)) .sort_values(by='variable', kind='stable').sort_index(kind='stable') .assign(S=lambda x: x.groupby('value')['val'].transform(lambda x: x.shift(fill_value=0).cumsum())) .pivot(columns='variable', values='S') ) Output: s1 s2 val ans1 ans2 0 E A 2 0 0 1 A E -1 2 2 2 D C 2 0 0 3 D B 0 2 0 4 D A 1 2 1 5 B B 2 0 2 6 D B 2 3 4 7 C A -1 2 2 8 E B 1 1 6 9 A E 2 1 2 Used input: np.random.seed(0) N = 10 df = pandas.DataFrame( data= {'s1' :numpy.random.choice(['A', 'B', 'C', 'D', 'E'], size=N), 's2' :numpy.random.choice(['A', 'B', 'C', 'D', 'E'], size=N), 'val':numpy.random.randint(low=-1, high=3, size=N)},)
4
2
78,989,203
2024-9-16
https://stackoverflow.com/questions/78989203/calling-another-py-from-the-converted-exe-file
I just want to ask how I can call another .py from the converted .exe file? I have a main Python file called main_2.py that acts as the main interface for the program that I created, which, in this case, is the one that will be converted into .exe. One of its main functions is to open up another Python file called analysis.py, which is itself another GUI that will do other tasks. My question is, can the converted .exe file call the .py directly, or should I convert analysis.py into .exe as well?
It should be enough to import the functions from the second file, as explained here! After that, auto-py-to-exe will automatically create an exe file with everything needed to run it included! Another way is to add the other .py scripts as additional files!
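A minimal sketch of the import route, using the file names from the question (run() is an assumed entry point - use whatever function or class analysis.py actually exposes):

# main_2.py
import analysis          # gets bundled into the .exe automatically

def open_analysis():
    analysis.run()       # assumed entry point of the analysis GUI

If analysis.py currently only runs code under if __name__ == "__main__":, move that code into a function first so it can be called after the import.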
2
2
78,988,304
2024-9-15
https://stackoverflow.com/questions/78988304/split-on-regex-more-than-a-character-maybe-variable-width-and-keep-the-separa
In GNU awk, there is a four argument version of split that can optionally keep all the separators from the split in a second array. This is useful if you want to reconstruct a select subset of columns from a file where the delimiter may be more complicated than just a single character. Suppose I have the following file: # sed makes the invisibles visible... # βˆ™ is a space; \t is a literal tab; $ is line end $ sed -E 's/\t/\\t/g; s/ /βˆ™/g; s/$/\$/' f.txt a\tβˆ™βˆ™bβˆ™c\tdβˆ™_βˆ™e$ aβˆ™βˆ™βˆ™bβˆ™c\tdβˆ™_βˆ™e$ βˆ™βˆ™βˆ™aβˆ™βˆ™βˆ™bβˆ™c\tdβˆ™_βˆ™e$ aβˆ™βˆ™βˆ™b_c\tdβˆ™_βˆ™e\t$ abcd$ Here I have a field comprised of anything other than the delimiter character set, and a delimiter of one or more characters of the set [\s_]. With gawk, you can do: gawk '{ printf "[" n=split($0, flds, /[[:space:]_]+/, seps) for(i=1; i<=n; i++) printf "[\"%s\", \"%s\"]%s", flds[i], seps[i], i<n ? ", " : "]" ORS } ' f.txt Prints (where the first element is the field, the second is the match to the delimiter regexp): [["a", " "], ["b", " "], ["c", " "], ["d", " _ "], ["e", ""]] [["a", " "], ["b", " "], ["c", " "], ["d", " _ "], ["e", ""]] [["", " "], ["a", " "], ["b", " "], ["c", " "], ["d", " _ "], ["e", ""]] [["a", " "], ["b", "_"], ["c", " "], ["d", " _ "], ["e", " "], ["", ""]] [["abcd", ""]] Ruby's str.split, unfortunately, does not have the same functionality. (Neither does Python's or Perl's.) What you can do is capture the match string from the delimiter regexp: irb(main):053> s="a b c d _ e" => "a b c d _ e" irb(main):054> s.split(/([\s_]+)/) => ["a", " ", "b", " ", "c", " ", "d", " _ ", "e"] Then use that result with .each_slice(2) and replace the nil's with '': irb(main):055> s.split(/([\s_]+)/).each_slice(2).map{|a,b| [a,b]} => [["a", " "], ["b", " "], ["c", " "], ["d", " _ "], ["e", nil]] irb(main):056> s.split(/([\s_]+)/).each_slice(2).map{|a,b| [a,b]}.map{|sa| sa.map{|e| e.nil? ? "" : e} } => [["a", " "], ["b", " "], ["c", " "], ["d", " _ "], ["e", ""]] Which allows gawk's version of split to be replicated: ruby -ne 'p $_.gsub(/\r?\n$/,"").split(/([\s_]+)/).each_slice(2). map{|a,b| [a,b]}.map{|sa| sa.map{|e| e.nil? ? "" : e} }' f.txt Prints: [["a", "\t "], ["b", " "], ["c", "\t"], ["d", " _ "], ["e", ""]] [["a", " "], ["b", " "], ["c", "\t"], ["d", " _ "], ["e", ""]] [["", " "], ["a", " "], ["b", " "], ["c", "\t"], ["d", " _ "], ["e", ""]] [["a", " "], ["b", "_"], ["c", "\t"], ["d", " _ "], ["e", "\t"]] [["abcd", ""]] So the same output (other than the line with trailing \t which gawk has as an empty field, delimiter combination.) In Python, roughly the same method also works: python3 -c ' import sys, re from itertools import zip_longest with open(sys.argv[1]) as f: for line in f: lp=re.split(r"([\s_]+)", line.rstrip("\r\n")) print(list(zip_longest(*[iter(lp)]*2, fillvalue="")) ) ' f.txt I am looking for a general algorithm to replicate the functionality of gawk's four argument split in Ruby/Python/Perl/etc. The Ruby and Python I have here works. Most of solutions (other than for gawk) to I want to split on this delimiter and keep the delimiter? involve a unique regex more complex than simply matching the delimiter. Most seem to be either scanning for a field, delimiter combination or use lookarounds. I am specifically trying to use a simple regexp that matches the delimiter only without lookarounds. With roughly the same regexp I would have used with GNU awk. 
So stated generally: Take a regexp matching the delimiter fields (without having to think much about the data fields) and put inside a capturing group; Take the resulting array of [field1, delimiter1, field2, delimiter2, ...] and create array of [[field1, delimiter1], [field2, delimiter2], ...] That method is easily used in Ruby (see above) and Python (see above) and Perl (I was too lazy to write that one...) Is this the best way to do this?
With splitting you always have one more field than the delimiters, which is why you have to fill in an empty string as the delimiter for the last field. A simpler way to achieve the filling would be to always append an empty string to the list returned by the split so that you can use the itertools.batched function (available since Python 3.12, or as a recipe beforehand) to produce easy pairings: import re from io import StringIO from itertools import batched file = StringIO('''a\t b c\td _ e a b c\td _ e a b c\td _ e a b_c\td _ e\t abcd''') for line in file: print(list(batched(re.split(r"([\s_]+)", line.rstrip('\r\n')) + [''], 2))) This outputs: [('a', '\t '), ('b', ' '), ('c', '\t'), ('d', ' _ '), ('e', '')] [('a', ' '), ('b', ' '), ('c', '\t'), ('d', ' _ '), ('e', '')] [('', ' '), ('a', ' '), ('b', ' '), ('c', '\t'), ('d', ' _ '), ('e', '')] [('a', ' '), ('b', '_'), ('c', '\t'), ('d', ' _ '), ('e', '\t'), ('', '')] [('abcd', '')] Demo here
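On Python versions before 3.12 you can drop in the documented itertools recipe for batched and use it the same way:

from itertools import islice

def batched(iterable, n):
    # batched('ABCDEFG', 3) -> ABC DEF G
    if n < 1:
        raise ValueError('n must be at least one')
    it = iter(iterable)
    while batch := tuple(islice(it, n)):
        yield batch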
10
7