Columns: question_id (int64), creation_date (string), link (string), question (string), accepted_answer (string), question_vote (int64), answer_vote (int64)
79,218,490
2024-11-23
https://stackoverflow.com/questions/79218490/problem-with-recognizing-single-cell-tables-in-pdfplumber
I have a sample medical report, and at the top of each page in the PDF there is a table that contains personal information. I have been trying to remove/crop the personal-information table from every page of the sample PDF by finding the layout values of the table. I am new to pdfplumber and not sure if that's the right approach, but below is the code that I have tried; I am not able to get the layout values of the table even though I am able to get a red box on the table using pdfplumber.

Code that I have tried:

sample_data = []
sample_path = r"local_path_file"
with pdfplumber.open(sample_path) as pdf:
    pages = pdf.pages
    for p in pages:
        sample_data.append(p.extract_tables())
print(sample_data)
pages[0].to_image()

I am able to identify the first table by using the code below:

pages[0].to_image().debug_tablefinder()

Now when I try the code below to extract tables, I am not getting anything:

with pdfplumber.open(sample_path) as pdf:
    pages = pdf.pages[0]
    print(pages.extract_tables())

output: []

Update: There is an issue when working on this particular sample PDF, but when I used a similar PDF report I was able to crop it based on boundaries like this:

pages[0].find_tables()[0].bbox

output: (25.19059366666667, 125.0, 569.773065, 269.64727650000003)

This shows the part that I want to get rid of:

p0.crop((25.19059366666667, 125.0, 569.773065, 269.64727650000003)).to_image().debug_tablefinder()

Below takes y0 = 269.64, where the top table ends, to almost the bottom of the page y1 = 840, and from the leftmost part x0 = 0 of the page to nearly the right edge x1 = 590:

p0.crop((0, 269.0, 590, 840)).to_image()
Understanding the Issue (pdfplumber 0.11.4)

The issue arises because pdfplumber filters out tables with a single cell. This behavior is controlled by the following line in the library's source code:

# File: pdfplumber/table.py
def cells_to_tables(cells: List[T_bbox]) -> List[List[T_bbox]]:
    ...
    filtered = [t for t in _sorted if len(t) > 1]  # single-cell tables are excluded here
    return filtered

Ad-hoc Workaround

We can modify the locally installed package to allow single-cell tables by replacing filtered with _sorted (use caution, as this has not been tested but works for this specific case):

# pdfplumber/table.py
def cells_to_tables(cells: List[T_bbox]) -> List[List[T_bbox]]:
    ...
    return _sorted  # Return all tables, including single-cell ones

For a more robust approach, we could make a feature enhancement, like adding an allow_one_cell_table option to the TableSettings class and then taking it into account when extracting tables:

table_settings = {"allow_one_cell_table": True}  # fictitious property
page.extract_tables(table_settings)

While this is not currently supported, discussions on this topic can be found in these GitHub issues: Issue #236, Issue #309.

Alternative Approach

If modifying the code isn't an option, we can manually inspect the PDF structure. For the provided example, the single-cell table at the top of the page can be found as the first rectangular object. Here's how we can identify and visualize it (this snippet works for the first page of the document, but you can adapt it for other pages as needed):

rt = pdf.pages[0].rects[0]
bbox = (rt['x0'], rt['top'], rt['x1'], rt['bottom'])
page.crop(bbox).to_image(resolution=400).show()

P.S. Regarding your main goal - removing data from a PDF - pdfplumber might not be the best choice. It's designed for data extraction, not PDF editing.
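Building on the alternative approach above, here is a minimal sketch of how the same idea could be applied to every page to keep only the content below the personal-information table. It assumes the table is always the first rect on each page and reuses the question's placeholder sample_path; treat it as a starting point rather than a tested solution:

import pdfplumber

sample_path = r"local_path_file"  # placeholder path from the question

with pdfplumber.open(sample_path) as pdf:
    for page in pdf.pages:
        if not page.rects:
            continue  # no rectangle on this page, keep it as-is
        rt = page.rects[0]  # assumed: the first rect is the personal-info table
        # crop to everything below the table, spanning the full page width
        below = page.crop((0, rt["bottom"], page.width, page.height))
        print(below.extract_text())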
3
1
79,224,478
2024-11-25
https://stackoverflow.com/questions/79224478/how-can-i-enforce-a-minimum-age-constraint-and-manage-related-models-in-django
I am working on a Django project where I need to validate a model before saving it, based on values in its related models. I came across this issue while extracting an app from a project using an old Django version (3.1) into a separate Django 5.1 project; the error "ValueError: 'Model...' instance needs to have a primary key value before this relationship can be used" was then raised in all validation classes that used related model data.

For demonstration and simplification purposes, I have a Reservation model that is referenced by multiple Guest objects via a foreign key. For the reservation to be valid and be saved, all guests linked to it must be at least 18 years old. However, none of these records (neither the reservation nor the guests) have been saved to the database yet. I need to perform this validation efficiently and cleanly, preferably in a way that keeps the validation logic separated from the models themselves. How can I approach this validation scenario? What are the best practices for validating unsaved foreign key relationships in Django?

Here is a simplified version of my setup:

File: models.py

from django.db import models

class Reservation(models.Model):
    check_in_date = models.DateField()
    check_out_date = models.DateField()

    def __str__(self):
        return f"Reservation from {self.check_in_date} to {self.check_out_date}"

class Guest(models.Model):
    name = models.CharField(max_length=255)
    age = models.PositiveIntegerField()
    reservation = models.ForeignKey(
        Reservation, related_name="guests", on_delete=models.CASCADE
    )

    def __str__(self):
        return f"{self.name} ({self.age} years old)"

File: validation.py

from django.core.exceptions import ValidationError

def validate_reservation_and_guests(reservation):
    """
    Validate that all guests in the reservation are at least 18 years old.
    """
    for guest in reservation.guests.all():
        if guest.age < 18:
            raise ValidationError("All guests must be at least 18 years old.")

Question: What is the best way to structure this kind of validation in Django admin? I am open to using custom model methods, form validation, or signals, but I prefer to keep the logic in a separate file for better organization. Are there other approaches I should consider? Any examples or advice would be greatly appreciated!
You can add a MinValueValidator [Django-doc]:

from django.core.validators import MinValueValidator

class Guest(models.Model):
    name = models.CharField(max_length=255)
    age = models.PositiveIntegerField(
        validators=[
            MinValueValidator(18, 'All guests must be at least 18 years old.')
        ]
    )
    reservation = models.ForeignKey(
        Reservation, related_name='guests', on_delete=models.CASCADE
    )

    def __str__(self):
        return f"{self.name} ({self.age} years old)"

All ModelForms derived from this model will thus check that the age is at least 18. Since a ModelAdmin uses a ModelForm, these thus also validate this. In the ModelAdmin, you can then move the Guests to an inline [Django-doc]:

from django.contrib import admin
from myapp.models import Guest, Reservation

class GuestInline(admin.TabularInline):
    model = Guest

class ReservationAdmin(admin.ModelAdmin):
    inlines = [
        GuestInline,
    ]

admin.site.register(Reservation, ReservationAdmin)

but I prefer to keep the logic in a separate file for better organization. Are there other approaches I should consider?

A model is the main party responsible for ensuring the data is valid, so adding validators to the model fields is probably the best way to do this. All sorts of "products" arising from models like ModelForms, ModelSerializers, ModelAdmins, ModelResources, etc. will then all see the validators and act accordingly.
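As a follow-up sketch (not part of the original answer): field validators run when a form is validated or when Model.full_clean() is called explicitly; a plain .save() does not trigger them. A minimal illustration, assuming the Guest model above and an already-saved reservation:

from django.core.exceptions import ValidationError

guest = Guest(name="Alice", age=16, reservation=reservation)  # reservation assumed saved
try:
    guest.full_clean()  # runs field validators, including MinValueValidator(18)
except ValidationError as e:
    print(e.message_dict["age"])  # ['All guests must be at least 18 years old.']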
4
3
79,223,802
2024-11-25
https://stackoverflow.com/questions/79223802/how-do-i-get-custom-colors-using-color-chooser-from-tkinter
In the askcolor function of class Chooser in the pop up you can add color to a custom colors list. I was wondering if I can somehow get the values of those colors, but I could not find a way how to do it. Code for the pop up: from tkinter import colorchooser color_code = colorchooser.askcolor() Pop up screen shot:
The function tkinter.colorchooser.askcolor() doesn't directly provide access to the custom colors list in its popup. The function's main role is to return the selected color. The closest you can get is keeping track of the colors by creating a list (custom_colors = []) and then every time color_code = colorchooser.askcolor() is called, .append to the list. E.g.: from tkinter import colorchooser, Tk, Button root = Tk() custom_colors = [] def choose_color(): color_code = colorchooser.askcolor() if color_code[1]: if color_code[1] not in custom_colors: custom_colors.append(color_code[1]) print(custom_colors) choose_button = Button(root, text="Choose Color", command=choose_color) choose_button.pack(pady=20) root.mainloop()
1
2
79,219,311
2024-11-24
https://stackoverflow.com/questions/79219311/async-python-function-as-a-c-callback
I'm in a situation where I need to use a C library that calls a python function for IO purposes so I don't have to create a C equivalent for the IO. The python code base makes extensive use of asyncio and all the IO goes through queues. And the C code is loaded as a dll using ctypes The problem is you can't await from a C callback. Is there anyway to use an async python function as a C callback? Below is essentially how it will work. Is my only option to do something like the non_async_callback or is there someway to await from C using ctypes. python import asyncio import time import ctypes DLL_PATH="./bin/test.dll" dll = ctypes.CDLL(DLL_PATH) incoming_queue=asyncio.Queue() outgoing_queue=asyncio.Queue() c_func = ctypes.CFUNCTYPE(ctypes.c_uint8, ctypes.c_uint8) async def python_callback(test_num): await outgoing_queue.put(test_num) test_num = await incoming_queue.get() return test_num def non_async_callback(test_num): while True: try: outgoing_queue.put_nowait(test_num) break except asyncio.QueueFull: time.sleep(0.1) continue while True: try: test_num = incoming_queue.get_nowait(test_num) break except asyncio.QueueFull: time.sleep(0.1) continue return test_num # Called at some point during initialization def setup_callback(): dll.setup_callback(c_func(python_callback)) C uint8_t (CALLBACK*)(uint8_t); CALLBACK py_callback void setup_callbacks(void* callback) { py_callback = callback; } // Called from somewhere else in the C code. uint8_t python_callback(uint8_t value) { uint8_t result = py_callback(value); // Do something with result }
From a bit of searching, I believe async def python_callback(test_num): await outgoing_queue.put(test_num) test_num = await incoming_queue.get() return test_num can be wrapped with something like loop = None # No loop has been created initially def non_async_callback(test_num): global loop if not loop: # Only create a new loop on first callback loop = asyncio.new_event_loop() test_num = loop.run_until_complete(python_callback(test_num)) return test_num If non_async_callback might be called concurrently from multiple threads, then just use: def non_async_callback(test_num): test_num = asyncio.run(python_callback(test_num)) return test_num Update Based on the OP's comment it appears that we initially have a Python script calling a C function and that C function ultimately results in a callback being invoked. Thus the callback is occurring on the same thread that the original Python script (and thus its event loop) is executing. Consequently, the problem becomes calling a Python coroutine from a regular Python function without blocking the current event loop. This requires creating a separate thread running a separate event loop: import asyncio import threading # Create a new thread running a new event_loop: _new_event_loop = asyncio.new_event_loop() # Run the new event loop on a new daemon thread: threading.Thread(target=_new_event_loop.run_forever, name="Async Runner", daemon=True).start() async def python_callback(test_num): print('starting') await asyncio.sleep(1) # Emulate doing something print('ending') return test_num * test_num def non_async_callback(test_num): # Invoke the callback on a different thread, since no # further awaits can be done on this thread until this # funtion returns: return asyncio.run_coroutine_threadsafe(python_callback(test_num), _new_event_loop).result() async def main(): ... # Ultimately our callback gets invoked on the same thread: test_num = non_async_callback(4) print(test_num) asyncio.run(main()) Prints: starting ending 16
1
1
79,221,744
2024-11-25
https://stackoverflow.com/questions/79221744/ipopt-solution-by-gekko-in-comparison-to-grg-algorithm-used-within-the-excel-sol
The aim is to compute the thermodynamically equilibrated composition of the mixture at 1000K based on Gibbs' energy of formation of the reaction products and educts (steam+C2H6 in a molar ratio of 4:1) for the ethane steam gasification reaction as following: from gekko import GEKKO m = GEKKO(remote=True) x = m.Array(m.Var,13,value=1,lb=1e-27,ub=20.0) H2,H2O,CO,O2,CO2,CH4,C2H6,C2H4,C2H2,lamda1,lamda2,lamda3,summe= x H2.value = 3.0 H2O.value = 1.0 CO.value = 0.5 O2.value = 0.001 CO2.value = 0.5 CH4.value = 0.1 C2H6.value = 0.000215 C2H4.value = 0.00440125 C2H2.value = 0.0041294 summe.value = 8.0 lamda1.value = 1.0 lamda2.value = 1.0 lamda3.value = 1.0 eq1 = m.Param(value=14) eq2 = m.Param(value=4) eq3 = m.Param(value=2) summe = m.Var(H2 + O2 + H2O + CO + CO2 + CH4 + C2H6 + C2H4 + C2H2) lamda2 = m.Var((-1)*m.log(H2 / summe) / 2) lamda1 = m.Var(46.03 / 1.9872 - m.log(H2O / summe) + 2 * lamda2) lamda3 = m.Var(47.942 / 1.9872 - m.log(CO / summe) + lamda1) m.Equation(m.exp(-4.61 / 1.9872 - 4 * lamda2 - lamda3) * summe == CH4) m.Equation(m.exp(-28.249 / 1.9872 - 4 * lamda2 - 2 * lamda3) * summe == C2H4) m.Equation(m.exp(-40.604 / 1.9872 - 2 * lamda2 - 2 * lamda3) * summe == C2H2) m.Equation(m.exp(-26.13 / 1.9872 - 6 * lamda2 - 2 * lamda3) * summe == C2H6) m.Equation(m.exp(94.61 / 1.9872 - 2 * lamda1 - lamda3) * summe == CO2) m.Equation(m.exp(-2 * lamda1) * summe == O2) m.Equation(2 * CO2 + CO + 2 * O2 + H2O == eq2) m.Equation(4 * CH4 + 4 * C2H4 + 2 * C2H2 + 2 * H2 + 2 * H2O + 6 * C2H6 == eq1) m.Equation(CH4 + 2 * C2H4 + 2 * C2H2 + CO2 + CO + 2 * C2H6 == eq3) m.Minimize((summe-(H2 + O2 + H2O + CO + CO2 + CH4 + C2H6 + C2H4 + C2H2))**2) m.options.IMODE = 3 #IPOPT m.options.MAX_ITER = 1000 m.options.OTOL = 1e-10 m.options.RTOL = 1e-10 m.solve() print('x: ', x) print('Objective: ',m.options.OBJFCNVAL) EXIT: Optimal Solution Found. The solution was found. The final value of the objective function is 81.0000000000000 --------------------------------------------------- Solver : IPOPT (v3.12) Solution time : 1.169999998819549E-002 sec Objective : 81.0000000000000 Successful solution --------------------------------------------------- x: [[5.797458326] [1.202541674] [1.202541674] [1.6791020526e-21] [0.79745832603] [1.6503662452e-22] [3.2694282596e-27] [1.1255712085e-27] [1e-27] [2.185089969] [2.185089969] [2.185089969] [9.5605307091]] Then using GRG as optimiser: Both solution still differs. 
And interestingly, the output from a (constraint) root finder is still different, but converges at the same time: Total number of equations: 13 Number of implicit equations: 4 Number of explicit equations: 9 Solution method CONSTRAINED Convergence tolerance: 1e-07 # of iterations used: 8 CO 1.685236 H2 5.118105 H2O 1.387357 SUM 8.899115 C2H2 0.0041294 C2H4 0.0044224 C2H6 0.0092403 CH4 0.2269213 CO2 0.0522585 lamda1 1.537016 lamda2 0.2765838 lamda3 2.53957 O2 0.4114448 Here is a comparison of the solutions: Solution Vector x GEKKO EXCEL GRG Root finder GEKKO-final H2: 1 5.797458326 5.344360712 5.118105 5.219 H2O: 2 1.202541674 1.521995663 1.387357 1.681 CO: 3 1.202541674 1.388351672 1.685236 1.581 O2: 4 1.6791020526e-21 5.82E-21 0.4114448 1e-5 CO2: 5 0.79745832603 0.544826 0.0522585 0.3693 CH4: 6 1.6503662452e-22 0.0668213 0.2269213 0.05 C2H6: 7 3.2694282596e-27 1.70E-07 0.0092403 1e-5 C2H4: 8 1.1255712085e-27 9.74E-08 0.0044224 1e-5 C2H2: 9 1e-27 3.25E-10 0.0041294 1e-5 lamda1: 10 2.185089969 24.3878385 1.537016 lamda2: 11 2.185089969 0.253110984 0.2765838 lamda3: 12 2.185089969 1.558979624 2.53957 summe: 13 9.5605307091 8.866356107 8.899115 8.90
The variables lamda1 ,lamda2, lamda3, summe are defined twice and the equations associated with those variables are only used to initialize the second definition of the variables. H2,H2O,CO,O2,CO2,CH4,C2H6,C2H4,C2H2,lamda1,lamda2,lamda3,summe= x summe = m.Var(H2 + O2 + H2O + CO + CO2 + CH4 + C2H6 + C2H4 + C2H2) lamda2 = m.Var((-1)*m.log(H2 / summe) / 2) lamda1 = m.Var(46.03 / 1.9872 - m.log(H2O / summe) + 2 * lamda2) lamda3 = m.Var(47.942 / 1.9872 - m.log(CO / summe) + lamda1) Switching them to Intermediate() definitions is an easy way to fix the problem. summe = m.Intermediate(H2 + O2 + H2O + CO + CO2 + CH4 + C2H6 + C2H4 + C2H2) lamda2 = m.Intermediate((-1)*m.log(H2 / summe) / 2) lamda1 = m.Intermediate(46.03 / 1.9872 - m.log(H2O / summe) + 2 * lamda2) lamda3 = m.Intermediate(47.942 / 1.9872 - m.log(CO / summe) + lamda1) Here is the complete script: from gekko import GEKKO import numpy as np m = GEKKO(remote=True) x = m.Array(m.Var,9,value=1,lb=1e-27,ub=20.0) H2,H2O,CO,O2,CO2,CH4,C2H6,C2H4,C2H2=x H2.value = 3.0 H2O.value = 1.0 CO.value = 0.5 O2.value = 0.001 CO2.value = 0.5 CH4.value = 0.1 C2H6.value = 0.000215 C2H4.value = 0.00440125 C2H2.value = 0.0041294 eq1 = m.Param(value=14) eq2 = m.Param(value=4) eq3 = m.Param(value=2) summe = m.Intermediate(H2 + O2 + H2O + CO + CO2 + CH4 + C2H6 + C2H4 + C2H2) lamda2 = m.Intermediate((-1)*m.log(H2 / summe) / 2) lamda1 = m.Intermediate(46.03 / 1.9872 - m.log(H2O / summe) + 2 * lamda2) lamda3 = m.Intermediate(47.942 / 1.9872 - m.log(CO / summe) + lamda1) m.Equation(m.exp(-4.61 / 1.9872 - 4 * lamda2 - lamda3) * summe == CH4) m.Equation(m.exp(-28.249 / 1.9872 - 4 * lamda2 - 2 * lamda3) * summe == C2H4) m.Equation(m.exp(-40.604 / 1.9872 - 2 * lamda2 - 2 * lamda3) * summe == C2H2) m.Equation(m.exp(-26.13 / 1.9872 - 6 * lamda2 - 2 * lamda3) * summe == C2H6) m.Equation(m.exp(94.61 / 1.9872 - 2 * lamda1 - lamda3) * summe == CO2) m.Equation(m.exp(-2 * lamda1) * summe == O2) m.Equation(2 * CO2 + CO + 2 * O2 + H2O == eq2) m.Equation(4 * CH4 + 4 * C2H4 + 2 * C2H2 + 2 * H2 + 2 * H2O + 6 * C2H6 == eq1) m.Equation(CH4 + 2 * C2H4 + 2 * C2H2 + CO2 + CO + 2 * C2H6 == eq3) m.Minimize((summe-(H2 + O2 + H2O + CO + CO2 + CH4 + C2H6 + C2H4 + C2H2))**2) m.options.IMODE = 3 #IPOPT m.options.MAX_ITER = 1000 m.options.OTOL = 1e-10 m.options.RTOL = 1e-10 m.solve() print('x: ', x) print('Objective: ',m.options.OBJFCNVAL) print(f'H2: {H2.value[0]}') print(f'H2O: {H2O.value[0]}') print(f'CO: {CO.value[0]}') print(f'O2: {O2.value[0]}') print(f'CO2: {CO2.value[0]}') print(f'CH4: {CH4.value[0]}') print(f'C2H6: {C2H6.value[0]}') print(f'C2H4: {C2H4.value[0]}') print(f'C2H2: {C2H2.value[0]}') The objective value is now 0 with 0 degrees of freedom as required for root finding. The solution is: The solution was found. The final value of the objective function is 0.000000000000000E+000 --------------------------------------------------- Solver : IPOPT (v3.12) Solution time : 1.099999999860302E-002 sec Objective : 0.000000000000000E+000 Successful solution --------------------------------------------------- x: [[5.0] [2.0] [2.0] [1.0421441742e-21] [3.9704669403e-23] [2.1920286233e-23] [1e-27] [1e-27] [1e-27]] Objective: 0.0 H2: 5.0 H2O: 2.0 CO: 2.0 O2: 1.0421441742e-21 CO2: 3.9704669403e-23 CH4: 2.1920286233e-23 C2H6: 1e-27 C2H4: 1e-27 C2H2: 1e-27 This solution doesn't look quite right for the water-gas shift reaction. I had to go back and refresh my memory for the equilibrium composition of a reacting mixture containing ethane (Cβ‚‚H₆) and steam (Hβ‚‚O) at 1000 K. 1. 
Steam Reforming Reaction The primary reaction for hydrogen production: C2H6 + 2H2O β†’ 2CO + 5H2 Purpose: Converts hydrocarbons into carbon monoxide (CO) and hydrogen (Hβ‚‚) in the presence of steam. Thermodynamic Effect: Highly endothermic, favored at high temperatures. 2. Water-Gas Shift Reaction A secondary reaction between carbon monoxide and steam: CO + H2O ↔ CO2 + H2 Purpose: Converts CO to COβ‚‚ and produces additional Hβ‚‚. Thermodynamic Effect: Exothermic, favored at lower temperatures but occurs at equilibrium even at high temperatures. 3. Other Hydrocarbon Reactions The system also includes the decomposition and partial oxidation of hydrocarbons: Ethylene Formation (Cβ‚‚Hβ‚„): C2H6 β†’ C2H4 + H2 Acetylene Formation (Cβ‚‚Hβ‚‚): C2H4 β†’ C2H2 + H2 4. Carbon Deposition Carbon deposition can occur under certain conditions: CO β†’ C (solid) + CO2 or CH4 β†’ C (solid) + 2H2 Purpose: Represents undesirable side reactions where carbon deposits as a solid. Thermodynamic Effect: Can occur at lower hydrogen-to-carbon or oxygen-to-carbon ratios, depending on system conditions. Thermodynamic Modeling The equilibrium composition is calculated by minimizing the Gibbs free energy of the system: G = Ξ£ (ni * (giΒ° + RT * ln(ni / ntotal))) Where: ni: Molar amount of species i. giΒ°: Standard Gibbs free energy of formation for species i. ntotal: Total moles of all species. Constraints: Mass balances for carbon, hydrogen, and oxygen. Bounds on mole amounts to avoid non-physical solutions. Here is a complete script based on these equations: from gekko import GEKKO # Create GEKKO model m = GEKKO(remote=True) # Define variables for species molar amounts n_C2H6 = m.Var(value=1.0, lb=0, ub=1) # Ethane n_H2O = m.Var(value=4.0, lb=0, ub=1) # Water n_CO = m.Var(value=0.0, lb=0, ub=1) # Carbon monoxide n_CO2 = m.Var(value=0.0, lb=0, ub=1) # Carbon dioxide n_H2 = m.Var(value=0.0, lb=0, ub=1) # Hydrogen n_CH4 = m.Var(value=0.0, lb=0, ub=1) # Methane n_C = m.Var(value=0.0, lb=0, ub=1) # Solid carbon n_C2H4 = m.Var(value=0.0, lb=0, ub=1) # Ethylene n_C2H2 = m.Var(value=0.0, lb=0, ub=1) # Acetylene # Universal gas constant R = 1.9872 # cal/(K mol) # Gibbs free energies of formation at 1000K gibbs_energies = { "C2H6": -26.13, # Ethane "H2O": 46.03, # Water (vapor) "CO": 47.942, # Carbon monoxide "CO2": 94.61, # Carbon dioxide "H2": 0.0, # Hydrogen gas "CH4": -4.61, # Methane "C": 0.0, # Solid carbon "C2H4": -28.249, # Ethylene "C2H2": -40.604 # Acetylene } # Define total moles total_moles = ( n_C2H6 + n_H2O + n_CO + n_CO2 + n_H2 + n_CH4 + n_C + n_C2H4 + n_C2H2 ) # Define the objective function (Gibbs free energy minimization) m.Obj( n_C2H6 * (gibbs_energies["C2H6"]/R + m.log(n_C2H6 / total_moles + 1e-10)) + n_H2O * (gibbs_energies["H2O"]/R + m.log(n_H2O / total_moles + 1e-10)) + n_CO * (gibbs_energies["CO"]/R + m.log(n_CO / total_moles + 1e-10)) + n_CO2 * (gibbs_energies["CO2"]/R + m.log(n_CO2 / total_moles + 1e-10)) + n_H2 * (gibbs_energies["H2"]/R + m.log(n_H2 / total_moles + 1e-10)) + n_CH4 * (gibbs_energies["CH4"]/R + m.log(n_CH4 / total_moles + 1e-10)) + n_C * (gibbs_energies["C"]/R + m.log(n_C / total_moles + 1e-10)) + n_C2H4 * (gibbs_energies["C2H4"]/R + m.log(n_C2H4 / total_moles + 1e-10)) + n_C2H2 * (gibbs_energies["C2H2"]/R + m.log(n_C2H2 / total_moles + 1e-10)) ) # Mass balance constraints # Carbon balance: 4(C2H6) = CO + CO2 + CH4 + C + C2H4 + C2H2 m.Equation(4*n_C2H6 == n_CO + n_CO2 + n_CH4 + n_C + n_C2H4 + n_C2H2) # Oxygen balance: 3*H2O = CO + CO2 m.Equation(3*n_H2O == n_CO + n_CO2) # Hydrogen 
balance with 4:1 ratio: C2H6 + 4*H2O = 2*H2 + CH4 + C2H4 + C2H2 m.Equation(4*n_H2O + n_C2H6 == 2*n_H2 + n_CH4 + n_C2H4 + n_C2H2) # Total mole constraint for mole fraction result m.Equation(total_moles == 1) # Solve the model m.solve(disp=True) # Extract and display results results = { "C2H6 (mol)": n_C2H6.value[0], "H2O (mol)": n_H2O.value[0], "CO (mol)": n_CO.value[0], "CO2 (mol)": n_CO2.value[0], "H2 (mol)": n_H2.value[0], "CH4 (mol)": n_CH4.value[0], "C (mol)": n_C.value[0], "C2H4 (mol)": n_C2H4.value[0], "C2H2 (mol)": n_C2H2.value[0] } [print(f'{r}: {results[r]:.4}') for r in results] with results: C2H6 (mol): 0.1993 H2O (mol): 0.003599 CO (mol): 0.0108 CO2 (mol): 0.0 H2 (mol): 0.0 CH4 (mol): 2.915e-09 C (mol): 0.5726 C2H4 (mol): 0.0004254 C2H2 (mol): 0.2133
3
1
79,221,652
2024-11-25
https://stackoverflow.com/questions/79221652/polars-how-to-extract-last-non-null-value-on-a-given-column
I'd like to perform the following: Input: df = pl.DataFrame({ "a": [1,15,None,20,None] }) Output: df = pl.DataFrame({ "a": [1,15,None,20,None], "b": [0,14,None,5,None] }) That is, from: A 1 15 None 20 None to: A B 1 0 15 14 None None 20 5 None None So, what it does: If the value of "A" is null, then value of B (output column) is also Null If "A" has some value, please retrieve the last Non-Null value in "A", and then subtract the current value in "A" with the previous Non-Null value I'd like to perform this in python's polars dataframe library, but I can't seem to find a solution. I've tried the following question: How to select the last non-null value from one column and also the value from another column on the same row in Polars? But unfortunately, this does not answer the original problem, since the question performs an aggregation of an entire column, and then takes the last value of that column. What I'd like to do is not to aggregate an entire column, but simply to subtract a current value with a previous non-null value. I have also tried to use rolling: df = df.with_row_index().rolling( index_column = 'index', period = '???i').agg(pl.col("A").last()) But, of course, that does not work because the occurence of Null Values cannot be determined (i.e. it is not periodic, so I don't know how many indexes before the current entry contains a non-null value in "A"). Does anyone knows how to do so? Thanks!
You can use a combination of shift and forward_fill to get the last non-null value. So with your input, this looks like df = pl.DataFrame({ "a": [1, 15, None, 20, None] }) df.with_columns( # current row value for "a" minus the last non-null value # as the first row has no previous non-null value, # fill it with the first value (1) ( pl.col("a") - pl.col("a").shift(fill_value=pl.col("a").first()).forward_fill() ).alias("b") ) # shape: (5, 2) # β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” # β”‚ a ┆ b β”‚ # β”‚ --- ┆ --- β”‚ # β”‚ i64 ┆ i64 β”‚ # β•žβ•β•β•β•β•β•β•ͺ══════║ # β”‚ 1 ┆ 0 β”‚ # β”‚ 15 ┆ 14 β”‚ # β”‚ null ┆ null β”‚ # β”‚ 20 ┆ 5 β”‚ # β”‚ null ┆ null β”‚ # β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜
4
2
79,221,813
2024-11-25
https://stackoverflow.com/questions/79221813/snakemake-how-to-implement-function-using-wildcards
I am trying to using snakemake to output some files from a specific job. Basically I have different channels of a process, that span different mass ranges. Depending then on the {channel, mass} pair I have to run jobs for different values of "norms". I wanted to to this using: import numpy as np import pickle as pkl particle_masses = { 5: 4.18, # Bottom quark mass ~4.18 GeV however there is an issue with the charon spectra below 10 GeV 8: 80.3, # W boson mass ~80.379 GeV 11: 1.777, # Tau lepton mass ~1.777 GeV 12: 0, # Electron neutrino mass ~0 (neutrino masses are very small) 14: 0, # Muon neutrino mass ~0 16: 0 # Tau neutrino mass ~0 } mass_values = np.logspace(0, np.log10(499), num=25).tolist() # Masked mass dictionary with rounded values masked_mass_dict = {} for channel, min_mass in particle_masses.items(): # Apply mask to exclude values below the particle mass threshold masked_mass = [round(m, 2) for m in mass_values if m >= max(min_mass, 3)] masked_mass_dict[channel] = masked_mass channels = list(masked_mass_dict.keys()) rule all: input: expand( f"{DATA_LOC}/signal/channel_{{channel}}/trial_distributions/{{mass}}_trial_distrib_{{norm}}.h5", channel=channels, mass=lambda wildcards: masked_mass_dict[wildcards.channel], norm=lambda wildcards: get_norms({"mass": wildcards.mass, "channel": wildcards.channel}), ) rule compute_trial_distribution: input: signal_file=f"{DATA_LOC}/signal/channel_{{channel}}/mc_distrib/{{mass}}_mc_distrib.h5", output: norm_file=f"{DATA_LOC}/signal/channel_{{channel}}/trial_distributions/{{mass}}_trial_distrib_{{norm}}.h5" shell: """ ... """ def get_norms(dict): """ Get the norm values (sensitivity) for the given channel and mass from the pre- loaded sensitivity dictionary. """ channel = dict["channel"] mass = dict["mass"] channel = int(channel) mass = float(mass) #channel = int(wildcards.channel) #mass = float(wildcards.mass) # Load sensitivity dictionary dictionary_file = "path/" with open(dictionary_file, "rb") as f: sensitivities_dict = pkl.load(f) # Get the sensitivity data for the specified channel if channel not in sensitivities_dict: raise ValueError(f"Channel {channel} not found in sensitivity dictionary.") sensitivity_mass_array = sensitivities_dict[channel] mass_index = np.where(sensitivity_mass_array[0] == mass)[0] if len(mass_index) == 0: raise ValueError(f"Mass {mass} not found for channel {channel}") # Calculate norms azimov_norm = sensitivity_mass_array[1][mass_index[0]] norms = np.linspace(0.1 * azimov_norm, 10 * azimov_norm, 50) norms = np.insert(norms, 0, 0) # Include a norm value of 0 for background-only distribution norms = np.array([0]) # Convert to list for use in Snakemake params or shell return norms.tolist() However it seems that this does not work. I am not sure how to correctly implement this... This is the error I get: InputFunctionException in rule all in file /home/egenton/upgrade_solar_WIMP/scripts/Snakefile, line 45: Error: AttributeError: 'Wildcards' object has no attribute 'channel' Wildcards: If anybody knows how this can be solved that would be very useful ! I tried inputting the channel mass pairs to expand the norms from the get_norm function but that did not work as the getnorm function does recognize the wildcard.
Rule all cannot have wildcards (i.e. its Wildcards object doesn't have attributes representing wildcard values). This comes from the fact that it doesn't have an output "consumed" by downstream rules.

Wildcards are determined by matching patterns in output sections of rules with files required in the input of downstream rules. In your case, this will happen when executing instances of rule compute_trial_distribution, matching its output file pattern with one of the concrete files found in the input of all. The wildcard values will exist at the level of compute_trial_distribution, and can be used to compute its input (or in a params, shell, or run directive...).

For the input of all, you should define an explicit list of all the final files you want. This can often be done using expand, but providing lists of values for the parameters instead of lambda functions. However, for what you want, I think it is simpler to just use plain Python to compute the list:

# Define get_norms before using it
def get_norms(...):
    # ...

distribs = []
for channel in channels:
    for mass in masked_mass_dict[channel]:
        for norm in get_norms({"mass": mass, "channel": channel}):
            distribs.append(
                f"{DATA_LOC}/signal/channel_{channel}/trial_distributions/{mass}_trial_distrib_{norm}.h5"
            )

rule all:
    input:
        distribs

In my opinion, snakemake would be much less confusing for beginners if expand was replaced with plain Python constructs in the tutorials and examples. This would be the opportunity to learn some Python basics, which can turn out very useful to write non-completely-straightforward workflows.

Side note: Couldn't you define get_norms as a function with two parameters (channel and mass) instead of a function taking a dictionary?
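Following up on the side note, here is a sketch of how get_norms could take channel and mass directly. The sensitivity-dictionary logic is lifted from the question, and the dictionary path is still a placeholder:

import pickle as pkl
import numpy as np

def get_norms(channel: int, mass: float) -> list:
    """Return the norm values (sensitivity scan) for a given (channel, mass) pair."""
    with open("path/", "rb") as f:  # placeholder path from the question
        sensitivities_dict = pkl.load(f)
    if int(channel) not in sensitivities_dict:
        raise ValueError(f"Channel {channel} not found in sensitivity dictionary.")
    sensitivity_mass_array = sensitivities_dict[int(channel)]
    mass_index = np.where(sensitivity_mass_array[0] == float(mass))[0]
    if len(mass_index) == 0:
        raise ValueError(f"Mass {mass} not found for channel {channel}")
    azimov_norm = sensitivity_mass_array[1][mass_index[0]]
    norms = np.linspace(0.1 * azimov_norm, 10 * azimov_norm, 50)
    norms = np.insert(norms, 0, 0)  # include a norm of 0 for the background-only distribution
    return norms.tolist()

# usage inside the loop that builds `distribs`:
#     for norm in get_norms(channel, mass): ...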
2
1
79,212,337
2024-11-21
https://stackoverflow.com/questions/79212337/vscode-add-custom-autocomplete-to-known-external-classes
I'm working with Python in an environment where there are some classes which are defined externally (as in, I don't have access to the files where these classes are defined). So I can import these classes and use them, but since VSCode can't resolve the import, there's no autocomplete for them. What I would want: a way to tell VSCode which attributes and methods these classes have, so it can autocomplete for me whenever I use them (same as if I had these classes defined in my workspace and imported normally). Is this possible somehow?
Autocomplete and IntelliSense are provided for all files within the current working folder in VSCode. They're also available for Python packages that are installed in standard locations. To enable IntelliSense for packages that are installed in non-standard locations, add those locations to the python.autoComplete.extraPaths collection in your settings.json file. Ref:https://code.visualstudio.com/docs/python/settings-reference#_autocomplete-settings
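One way to describe such externally-defined classes to the editor (an addition beyond the settings reference above, so treat it as a suggestion) is to write a .pyi stub file for the external module and put its folder on python.analysis.extraPaths / python.autoComplete.extraPaths, or in the Pylance stub folder (python.analysis.stubPath, which defaults to "typings"). A hypothetical stub for a module named externalmod could look like this; the class, attributes and methods are placeholders for whatever the real external class exposes:

# typings/externalmod.pyi  -- hypothetical stub describing the external class
class ExternalClient:
    host: str
    port: int
    def connect(self, timeout: float = ...) -> bool: ...
    def send(self, payload: bytes) -> int: ...

With the stub folder on the analysis path, VSCode can offer completions for ExternalClient even though the real implementation is not importable in the workspace.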
1
2
79,220,668
2024-11-24
https://stackoverflow.com/questions/79220668/is-it-possible-to-teach-toml-kit-how-to-dump-an-object
I am generating TOML files with several tables using TOML Kit without any problem in general. So far all the values were either strings or numbers, but today I first bumped into a problem. I was trying to dump a pathlib.Path object and it fails with a ConvertError Unable to convert an object of <class 'pathlib.WindowsPath'> to a TOML item. I fixed right away, adding a str in front, but I was thinking to do something valid in general. Is there a way to teach TOML Kit how to convert a custom object to a valid TOML value? In the case of Path, would be extremely easy.
You want https://tomlkit.readthedocs.io/en/latest/api/#tomlkit.register_encoder Which you would use like this: from pathlib import Path from typing import Any import tomlkit from tomlkit.items import Item, String, ConvertError class PathItem(String): def unwrap(self) -> Path: return Path(super().unwrap()) def path_encoder(obj: Any) -> Item: if isinstance(obj, Path): return PathItem.from_raw(str(obj)) else: # we cannot convert this, but give other custom converters a # chance to run raise ConvertError tomlkit.register_encoder(path_encoder) obj = {'path': Path('/foo/bar')} document = tomlkit.dumps(obj) assert document == '''\ path = "/foo/bar" '''
1
2
79,220,482
2024-11-24
https://stackoverflow.com/questions/79220482/how-to-get-information-out-of-onetoonefield-and-use-this-information-in-admin
I making my first site, this is my first big project so I stuck at one issue and can't solve it.. Here part of my code with error: models.py ... class GField(models.Model): golf_club_name = models.CharField( primary_key=True, max_length=100, help_text='Enter a golf club (Gfield) name', ) golf_field_par = models.PositiveIntegerField() def __str__(self): return f'{self.golf_club_name}' class PlayerInstance(models.Model): ... golf_field_playing = models.OneToOneField(GField, default = 'for_nothoing',on_delete=models.RESTRICT, help_text='where is it taking part?') now_at_hole = models.IntegerField() today = models.IntegerField() R1 = models.IntegerField() R2 = models.IntegerField() R3 = models.IntegerField() R4 = models.IntegerField() R5 = models.IntegerField() all_rounds = [R1, R2, R3, R4, R5] counter = sum(1 for value in all_rounds if value != 0) par_value = gfield_playing.golf_field_par total = sum(all_rounds[:counter]) if par_value * counter <= total: to_par = f'+{total - par_value * counter}' if par_value*counter>total: to_par = f'-{par_value * counter - total}' ... When I saving this code it sending: par_value = gfield_playing.field_par ^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'OneToOneField' object has no attribute 'field_par' And about admin.py that sending error, when I tried some solves from internet, I actually made the 'to_par' well, and fixed Error: models.py ... R2 = models.IntegerField() R3 = models.IntegerField() R4 = models.IntegerField() R5 = models.IntegerField() def calculate_to_par(self): all_rounds = [self.R1, self.R2, self.R3, self.R4, self.R5] counter = sum(1 for value in all_rounds if value != 0) par_value = self.golf_field_playing.golf_field_par total = sum(all_rounds[:counter]) if par_value * counter <= total: self.to_par = f'+{total - par_value * counter}' if par_value*counter>total: self.to_par = f'-{par_value * counter - total}' return self.to_par ... but, when I tried to add 'to_par' in admin.py list_display it gave error like <cant use 'to_par' because it is not callable> or smth like this, so after week of trying to solve it I'm here to take some help from experts. I said what I tried above.
Define a property [python-doc]:

class PlayerInstance(models.Model):
    # …
    golf_field_playing = models.OneToOneField(
        GField,
        default='for_nothoing',
        on_delete=models.RESTRICT,
        help_text='where is it taking part?',
    )
    now_at_hole = models.IntegerField()
    today = models.IntegerField()
    R1 = models.IntegerField()
    R2 = models.IntegerField()
    R3 = models.IntegerField()
    R4 = models.IntegerField()
    R5 = models.IntegerField()

    @property
    def to_par(self):
        all_rounds = [self.R1, self.R2, self.R3, self.R4, self.R5]
        counter = sum(1 for value in all_rounds if value != 0)
        par_value = self.golf_field_playing.golf_field_par
        total = sum(all_rounds[:counter])
        if par_value * counter <= total:
            return f'+{total - par_value * counter}'
        else:
            return f'-{par_value * counter - total}'

In case you thus fetch the .to_par from a PlayerInstance, it will run the function behind the property and return that result.

Note: Models normally have no Instance suffix. An object from a class is always called an "instance"; by naming the class …Instance, one gets the impression that the variable is an instance of a class.
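As a follow-up to the admin part of the question: a property can be referenced by name in list_display, since list_display accepts model attributes as well as fields. A minimal sketch (the PlayerInstanceAdmin name is made up for illustration):

from django.contrib import admin
from .models import PlayerInstance

@admin.register(PlayerInstance)
class PlayerInstanceAdmin(admin.ModelAdmin):
    # 'to_par' works here because it is a property on the model
    list_display = ['golf_field_playing', 'now_at_hole', 'today', 'to_par']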
2
2
79,217,465
2024-11-23
https://stackoverflow.com/questions/79217465/odoo-api-returns-invalid-credentials-even-with-correct-username-and-password
I’m working with Odoo (version 15) and trying to implement a login API. I have created a controller that checks the username and password, but it always returns "Invalid credentials" even when I use the correct login information. Here’s my code: # -*- coding: utf-8 -*- from odoo import http class TestApi(http.Controller): @http.route("/api/check_login", methods=["POST"], type="json", auth="public", csrf=False) def check_login(self, **kwargs): username = kwargs.get('username') password = kwargs.get('password') if username == "admin" and password == "admin": # Replace with actual validation return { "message": "User is logged in.", "username": username } else: return { "error": "Invalid credentials." } Steps Taken: .I tested the API using Postman with a POST request to http:///api/check_login. .I used the following JSON body: { "username": "admin", "password": "admin" } .I confirmed that the credentials work in the Odoo web interface. Questions: What might be causing the API to not recognize valid credentials? Are there any additional configurations I should check? Is there a better method to handle user authentication in Odoo?
The issue is that the original code expected data in kwargs, which only works for query parameters or form-encoded data. Since the request sent contains a JSON payload in the request body, the username and password were not being retrieved. The modified code resolves this by explicitly parsing the JSON data from the request body using json.loads(), ensuring the credentials are correctly accessed and processed.

# -*- coding: utf-8 -*-
from odoo import http
from odoo.http import request
import json

class TestApi(http.Controller):

    @http.route("/api/check_login", methods=["POST"], type="json", auth="public", csrf=False)
    def check_login(self):
        request_data = json.loads(request.httprequest.data)
        username = request_data.get('username')
        password = request_data.get('password')

        if username == "admin" and password == "admin":  # Replace with actual validation
            return {
                "message": "User is logged in.",
                "username": username
            }
        else:
            return {
                "error": "Invalid credentials."
            }
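To exercise the endpoint outside Postman, a small client sketch could look like the following. The host and port are assumptions, and note that Odoo wraps the return value of type="json" routes in a JSON-RPC envelope, so the dictionary typically appears under a "result" key:

import requests

resp = requests.post(
    "http://localhost:8069/api/check_login",  # assumed Odoo host/port
    json={"username": "admin", "password": "admin"},
)
print(resp.json())  # e.g. {"jsonrpc": "2.0", "id": null, "result": {"message": ..., "username": ...}}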
2
2
79,219,211
2024-11-24
https://stackoverflow.com/questions/79219211/matplotlib-polar-chart-not-showing-all-xy-ticks
Issue 1: The x-ticks (pie pieces) aren't ordered from 0 to 24 (my bad, should be 1) Issue 2: all y-ticks (rings) aren't showing Issue 3: Someone seems to have eaten a part of the polar chart ... I expect to see 31 rings, and 24 "pie pieces".
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure()
ax = fig.add_subplot(111, projection='polar')
ax.set_xticks(np.arange(1, 25) * np.pi / 12)
ax.set_xticklabels([str(number) for number in range(1, 25)])
ax.set_yticks(range(1, 32))
ax.set_yticklabels([str(number) for number in range(1, 32)])
ax.grid(True)
ax.scatter(np.pi / 12 * 24, 31)
ax.set_ylim(0, 32)
plt.show()
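A short note on what each part of this addresses (my reading of the code, not the original answerer's wording): set_xticks at the 24 multiples of π/12 with labels 1–24 gives ordered, evenly spaced "pie pieces" (issue 1); set_yticks(range(1, 32)) draws all 31 rings (issue 2); and set_ylim(0, 32) extends the radial axis past the outermost data point at radius 31, so no part of the chart is clipped (issue 3).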
1
1
79,218,266
2024-11-23
https://stackoverflow.com/questions/79218266/is-there-a-way-to-return-the-highest-value-in-excel
The Problem I'm working directly on a Excel sheet with the Python extension for Excel and I'm trying to find out which is the highest number of a list of cells. This is the code for the function that I wrote: def getMax(celle_voti_lista) -> int: valori = [v for v in celle_voti_lista if isinstance(v, (int, float))] return max(valori) if valori else 0 And this is the function call: max(xl("D11:D23")) But everytime I run this code, It gives me 0 instead of the actual highest number. What should I do?
try something like this: from openpyxl import load_workbook def xl(range_str, file_path, sheet_name): wb = load_workbook(file_path) sheet = wb[sheet_name] start_cell, end_cell = range_str.split(":") start_row, start_col = int(start_cell[1:]), start_cell[0].upper() end_row, end_col = int(end_cell[1:]), end_cell[0].upper() cells = [] for row in sheet[f"{start_col}{start_row}":f"{end_col}{end_row}"]: for cell in row: cells.append(cell.value) return cells def getMax(celle_voti_lista) -> int: valori = [v for v in celle_voti_lista if isinstance(v, (int, float))] return max(valori) if valori else 0 and then: file_path = "data.xlsx" sheet_name = "Sheet1" max_value = getMax(xl("D11:D23", file_path, sheet_name)) print(max_value)
2
0
79,218,262
2024-11-23
https://stackoverflow.com/questions/79218262/filter-pandas-dataframe-by-multiple-thresholds-defined-in-a-dictionary
I want to filter a DataFrame against multiple thresholds, based on the ID's prefix. Ideally I'd configure these thresholds with a dictionary e.g. minimum_thresholds = { 'alpha': 3, 'beta' : 5, 'gamma': 7, 'default': 4 } For example: data = { 'id': [ 'alpha-164232e7-75c9-4e2e-9bb2-b6ba2449beba', 'alpha-205acbf0-64ba-40ad-a026-cc1c6fc06a6f', 'beta-76ece555-e336-42d8-9f8d-ee92dd90ef19', 'beta-6c91c1cc-1025-4714-a2b2-c30b2717e3c4', 'gamma-f650fd43-03d3-440c-8e14-da18cdeb78d4', 'gamma-a8cb84b5-e94c-46f7-b2c5-135b59dcd1e3', 'pi-8189aff9-ea1c-4e22-bcf4-584821c9dfd6' ], 'freq': [4, 2, 1, 4, 7, 9, 8] } id freq 0 alpha-164232e7-75c9-4e2e-9bb2-b6ba2449beba 4 1 alpha-205acbf0-64ba-40ad-a026-cc1c6fc06a6f 2 2 beta-76ece555-e336-42d8-9f8d-ee92dd90ef19 1 3 beta-6c91c1cc-1025-4714-a2b2-c30b2717e3c4 4 4 gamma-f650fd43-03d3-440c-8e14-da18cdeb78d4 7 5 gamma-a8cb84b5-e94c-46f7-b2c5-135b59dcd1e3 9 6 pi-8189aff9-ea1c-4e22-bcf4-584821c9dfd6 8 I would then get an output like: id freq 0 alpha-164232e7-75c9-4e2e-9bb2-b6ba2449beba 4 1 gamma-f650fd43-03d3-440c-8e14-da18cdeb78d4 7 2 gamma-a8cb84b5-e94c-46f7-b2c5-135b59dcd1e3 9 3 pi-8189aff9-ea1c-4e22-bcf4-584821c9dfd6 8 I could do this bluntly by looping through each threshold, but it feels like there must be a more Pythonic way?
Another possible solution, whose steps are: First, the id column is split at each hyphen using the str.split method, extracting the first part of each split with str[0]. Then, the resulting first parts are mapped to their corresponding threshold values using the map function, referencing the thresholds dictionary. If a value is not found in thresholds, the default threshold is used. The freq column is then compared to these threshold values using the ge method, which checks if freq is greater than or equal to the threshold. Finally, the dataframe is filtered to include only rows where this condition is met. df[df['freq'] .ge(df['id'].str.split('-').str[0] .map(lambda x: thresholds.get(x, thresholds['default'])))] Output: id freq 0 alpha-164232e7-75c9-4e2e-9bb2-b6ba2449beba 4 4 gamma-f650fd43-03d3-440c-8e14-da18cdeb78d4 7 5 gamma-a8cb84b5-e94c-46f7-b2c5-135b59dcd1e3 9 6 pi-8189aff9-ea1c-4e22-bcf4-584821c9dfd6 8
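An equivalent variant (a sketch reusing the question's minimum_thresholds name) that avoids the lambda by mapping the prefixes with the dictionary directly and filling missing prefixes with the default:

# extract the prefix before the first hyphen
prefix = df['id'].str.split('-').str[0]
# look up each prefix's threshold; unknown prefixes fall back to the default
limits = prefix.map(minimum_thresholds).fillna(minimum_thresholds['default'])
out = df[df['freq'] >= limits]
print(out)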
1
2
79,216,975
2024-11-23
https://stackoverflow.com/questions/79216975/how-to-uninstall-specific-opencv
I am getting an error on running cv2.imshow() cv2.imshow("Image", image) cv2.error: OpenCV(4.9.0) /io/opencv/modules/highgui/src/window.cpp:1272: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvShowImage' and following suggestions in this forum I did sudo apt install python3-opencv which installed opencv4.5d. the same code still threw an error and this time I saw the error was coming from opencv4.9. I do not remember installing it, but I found this usr/local/lib/python3.10/dist-packages/opencv_python_headless-4.9.0.80.dist-info /usr/local/lib/python3.10/dist-packages/opencv_python_headless.libs How do I remove 4.9 or make python import 4.5 and not 4.9?? thanks everybody.
Your problem isn't the way the package was installed. Your problem is that you installed a headless package when you didn't want that. Run pip3 list. Look for your headless OpenCV. That step is optional but you'll learn something. Remove the headless package with pip3 uninstall opencv-python-headless. Install a regular package with pip3 install opencv-python. Stick with installing OpenCV using pip. Don't use apt to get OpenCV. The package coming from apt is usually stale. Feel free to remove that old OpenCV 4.5 that you got from apt. It'll be something like sudo apt remove python3-opencv. In case you need an older version of OpenCV, you can get that from pip too. Browse the release history of the package, click on the version you need. Up top there's an install command you can copy. Then you can pip3 install opencv-python==4.5.5.64 If you want multiple versions of a pip package side by side, you'll have to start learning about virtual environments.
1
3
79,216,200
2024-11-22
https://stackoverflow.com/questions/79216200/linear-interpolation-lookup-of-a-dataframe
I have a dataframe using pandas, something like: d = {'X': [1, 2, 3], 'Y': [220, 187, 170]} df = pd.DataFrame(data=d) the dataframe ends up like X Y 1 220 2 187 3 170 I can get the y value for an x value of 1.0 using df[df['X'] == 1.0]['Y'] which returns 220 But is there a way to get a linearly interpolated value of Y for an X value between values of X? For example, if I had an X value of 1.5, I would want it to return an interpolated value of 203.5. I tried the interpolate function, but it permanently adjusts the data in the dataframe. I could also write a separate function that would calculate this, but I was wondering if there was a native function in pandas.
You can use np.interp for this: import numpy as np X_value = 1.5 np.interp(X_value, df['X'], df['Y']) # 203.5 Make sure that df['X'] is monotonically increasing. You can use the left and right parameters to customize the return value for out-of-bounds values: np.interp(0.5, df['X'], df['Y'], left=np.inf, right=-np.inf) # inf # because 0.5 < df['X'].iloc[0] By default, out-of-bounds values will correspond to the closest valid X value: np.interp(10, df['X'], df['Y']) # 170 # i.e., match for df['X'].iloc[-1]
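If you specifically want to stay within pandas (a sketch, not part of the original answer), you can interpolate on a temporary Series so the original DataFrame is left untouched:

x_value = 1.5
s = df.set_index('X')['Y']                      # temporary Series indexed by X
s = s.reindex(s.index.union([x_value]))         # add the lookup point as a NaN row
y_value = s.interpolate(method='index').loc[x_value]
print(y_value)  # 203.5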
5
2
79,215,119
2024-11-22
https://stackoverflow.com/questions/79215119/how-can-i-convert-the-datatype-of-a-numpy-array-sourced-from-an-awkward-array
I have a numpy array I converted from awkward array by to_numpy() function, and the resulting array has the datatype: dtype=[('phi', '<f8'), ('eta', '<f8')]). I want to make it a regular tuple of (float32, float32) because otherwise this does not convert into a tensorflow tensor I tried the regular asdtype functions but all I get is errors >>> array = ak.Array([{"phi": 1.1, "eta": 2.2}, {"phi": 3.3, "eta": 4.4}]) >>> ak.to_numpy(array) array([(1.1, 2.2), (3.3, 4.4)], dtype=[('phi', '<f8'), ('eta', '<f8')])
I believe your problem is equivalent to this: you have some Awkward Array with record structure, >>> array = ak.Array([{"phi": 1.1, "eta": 2.2}, {"phi": 3.3, "eta": 4.4}]) and when you convert that with ak.to_numpy, it turns the record fields into NumPy structured array fields: >>> ak.to_numpy(array) array([(1.1, 2.2), (3.3, 4.4)], dtype=[('phi', '<f8'), ('eta', '<f8')]) ML libraries like TensorFlow and PyTorch want the feature vectors to not have fields with names, but instead be 2D arrays in which the second dimension ranges over all of the features. If all of the NumPy structured array dtypes are identical, as they're all <f8 in this example, you could view it: >>> ak.to_numpy(array).view("<f8").reshape(len(array), -1) array([[1.1, 2.2], [3.3, 4.4]]) But this is unsafe. If, for example, some of your fields are 32-bit and others are 64-bit, or some are integers and others are floating-point, view will just reinterpret the memory, losing the meaning of the numbers: >>> bad = np.array([(1, 2, 3.3), (4, 5, 6.6)], dtype=[("x", "<i4"), ("y", "<i4"), ("z", "<f8")]) >>> bad.view("<f8").reshape(len(bad), -1) array([[4.24399158e-314, 3.30000000e+000], [1.06099790e-313, 6.60000000e+000]]) (z's 3.3 and 6.6 are preserved, but x and y get merged into a single field and the raw memory gets interpreted as floats.) Instead, we should make the structure appropriate in Awkward, which has the tools to do exactly this sort of thing, and afterward convert it to NumPy (and from there to TensorFlow or PyTorch). So, we're starting with an array of records with named fields: >>> array <Array [{phi: 1.1, eta: 2.2}, {...}] type='2 * {phi: float64, eta: float64}'> We want the named fields to go away and make these individual arrays. That's ak.unzip. >>> ak.unzip(array) (<Array [1.1, 3.3] type='2 * float64'>, <Array [2.2, 4.4] type='2 * float64'>) (The first in the tuple is from phi, the second is from eta.) We want to get values for each field together into the same input vector for the ML model. That is, 1.1 and 2.2 should be in a vector [1.1, 2.2] and 3.3 and 4.4 should be in a vector [3.3, 4.4]. That's a concatenation of the arrays in this tuple, but not an axis=0 concatenation that would make [1.1, 3.3, 2.2, 4.4]; it has to be a concatenation in a higher axis=1. That axis doesn't exist yet, but we can always make length-1 axes with np.newaxis. >>> ak.unzip(array[:, np.newaxis]) (<Array [[1.1], [3.3]] type='2 * 1 * float64'>, <Array [[2.2], [4.4]] type='2 * 1 * float64'>) Now ak.concatenate with axis=1 will concatenate [1.1] and [2.2] into [1.1, 2.2], etc. >>> ak.concatenate(ak.unzip(array[:, np.newaxis]), axis=1) <Array [[1.1, 2.2], [3.3, 4.4]] type='2 * 2 * float64'> So in the end, here's a one-liner that you can pass to TensorFlow that will work even if your record fields have different dtypes: >>> ak.to_numpy(ak.concatenate(ak.unzip(array[:, np.newaxis]), axis=1)) array([[1.1, 2.2], [3.3, 4.4]]) Or, actually, maybe you can skip the ak.to_numpy and go straight to ak.to_tensorflow.
2
4
79,212,852
2024-11-21
https://stackoverflow.com/questions/79212852/constraint-to-forbid-nan-in-postgres-numeric-columns-using-django-orm
Postgresql allows NaN values in numeric columns according to its documentation here. When defining Postgres tables using Django ORM, a DecimalField is translated to numeric column in Postgres. Even if you define the column as bellow: from django.db import models # You can insert NaN to this column without any issue numeric_field = models.DecimalField(max_digits=32, decimal_places=8, blank=False, null=False) Is there a way to use Python/Django syntax to forbid NaN values in this scenario? The Postgres native solution is to probably use some kind of constraint. But is that possible using Django syntax? Edit: As willeM_ Van Onsem pointed out, Django does not allow NaN to be inserted to DecimalField natively. However, the DB is manipulated from other sources as well, hence, the need to have an extra constraint at the DB level (as opposed to Django's built-in application level constraint).
I don't have a PostgreSQL database to test against but you can try creating a database constraint using a lookup based on the IsNull looukup: from decimal import Decimal from django.db.models import ( CheckConstraint, DecimalField, Field, Model, Q, ) from django.db.models.lookups import ( BuiltinLookup, ) @Field.register_lookup class IsNaN(BuiltinLookup): lookup_name = "isnan" prepare_rhs = False def as_sql(self, compiler, connection): if not isinstance(self.rhs, bool): raise ValueError( "The QuerySet value for an isnan lookup must be True or False." ) sql, params = self.process_lhs(compiler, connection) if self.rhs: return "%s = 'NaN'" % sql, params else: return "%s <> 'NaN'" % sql, params class Item(Model): numeric_field = DecimalField( max_digits=32, decimal_places=8, blank=False, null=False, ) class Meta: constraints = [ CheckConstraint( check=Q(numeric_field__isnan=False), name="numeric_field_not_isnan", ), ]
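To emulate the "other sources" writing directly to the table and confirm the check fires at the database level, a rough sketch (the table name myapp_item and the exact exception are assumptions, and like the answer itself this is untested against PostgreSQL):

from django.db import connection, IntegrityError

try:
    with connection.cursor() as cursor:
        # hypothetical table name; adjust to your app's actual db_table
        cursor.execute("INSERT INTO myapp_item (numeric_field) VALUES ('NaN')")
except IntegrityError as exc:
    print("rejected by numeric_field_not_isnan:", exc)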
1
2
79,214,742
2024-11-22
https://stackoverflow.com/questions/79214742/polars-python-filter-list-column-using-a-boolean-list-column-but-keeping-list
I would like to get elements from a list dtype column using another boolean list column and keeping the original size of the list (as oppose to this solution). Starting from this dataframe: df = pl.DataFrame({ 'identity_vector': [[True, False], [False, True]], 'string_vector': [['name1', 'name2'], ['name3', 'name4']] }) shape: (2, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ identity_vector ┆ string_vector β”‚ β”‚ --- ┆ --- β”‚ β”‚ list[bool] ┆ list[str] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ════════════════════║ β”‚ [true, false] ┆ ["name1", "name2"] β”‚ β”‚ [false, true] ┆ ["name3", "name4"] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ The objective is to get this output: shape: (2, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ identity_vector ┆ string_vector ┆ filtered_strings β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ list[bool] ┆ list[str] ┆ list[str] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ════════════════════β•ͺ══════════════════║ β”‚ [true, false] ┆ ["name1", "name2"] ┆ ["name1", null] β”‚ β”‚ [false, true] ┆ ["name3", "name4"] ┆ [null, "name4"] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Which I can get using the block of code below and map_elements, but the solution is sub-optimal for performance reasons: df.with_columns( filtered_strings=pl.struct(["string_vector", "identity_vector"]).map_elements( lambda row: [s if keep else None for s, keep in zip(row["string_vector"], row["identity_vector"])] ) ) Do you have any suggestion on how to improve the performance of this process?
Kind of standard pl.Expr.explode() / calculate / pl.Expr.implode() route: df.with_columns( pl.when( pl.col.identity_vector.explode() ).then( pl.col.string_vector.explode() ).otherwise(None) .implode() .over(pl.int_range(pl.len())) .alias("filtered_strings") ) shape: (2, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ identity_vector ┆ string_vector ┆ filtered_strings β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ list[bool] ┆ list[str] ┆ list[str] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ════════════════════β•ͺ══════════════════║ β”‚ [true, false] ┆ ["name1", "name2"] ┆ ["name1", null] β”‚ β”‚ [false, true] ┆ ["name3", "name4"] ┆ [null, "name4"] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ There're also other possible approaches, for example using pl.Expr.list.eval() and pl.Expr.list.gather() df.with_columns( pl.col.string_vector.list.gather( pl.col.identity_vector.list.eval( pl.when(pl.element()).then(pl.int_range(pl.len())) ) ).alias("filtered_strings") ) shape: (2, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ identity_vector ┆ string_vector ┆ filtered_strings β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ list[bool] ┆ list[str] ┆ list[str] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ════════════════════β•ͺ══════════════════║ β”‚ [true, false] ┆ ["name1", "name2"] ┆ ["name1", null] β”‚ β”‚ [false, true] ┆ ["name3", "name4"] ┆ [null, "name4"] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Or, if you know length of your lists or it's relatively small, you can create columns for each list index and then use pl.Expr.list.get() and pl.concat_list(). l = 2 df.with_columns( filtered_strings = pl.concat_list( pl.when( pl.col.identity_vector.list.get(i) ).then( pl.col.string_vector.list.get(i) ) for i in range(2) ) ) shape: (2, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ identity_vector ┆ string_vector ┆ filtered_strings β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ list[bool] ┆ list[str] ┆ list[str] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ════════════════════β•ͺ══════════════════║ β”‚ [true, false] ┆ ["name1", "name2"] ┆ ["name1", null] β”‚ β”‚ [false, true] ┆ ["name3", "name4"] ┆ [null, "name4"] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ All solutions use pl.when() to set value to null when condition is not met.
2
3
79,206,427
2024-11-20
https://stackoverflow.com/questions/79206427/remove-unused-node-in-python-plotly
What im trying here is to create a relationship between Tasks. Some of them are connected directly to each other while others are passing through this big box i circled instead of connecting directly(which is what i need). How can i remove this node? def generate_links_and_nodes(dataframe): cleaned_links = [] for _, row in dataframe.iterrows(): q10_tasks = set(row['q10'].split(', ')) q3_tasks = set(row['q3'].split(', ')) q11_tasks = set(row['q11'].split(', ')) # Create links between q10 and q3 for q10 in q10_tasks: for q3 in q3_tasks: if q10 != q3: cleaned_links.append((q10, q3)) # Create links between q3 and q11 for q3 in q3_tasks: for q11 in q11_tasks: if q3 != q11: cleaned_links.append((q3, q11)) # DataFrame from links links_df = pd.DataFrame(cleaned_links, columns=["source", "target"]) # Collect unique nodes unique_nodes = sorted(set(pd.concat([links_df['source'], links_df['target']]))) node_indices = {node: i for i, node in enumerate(unique_nodes)} # Map sources and targets to node indices sources = links_df['source'].map(node_indices).tolist() targets = links_df['target'].map(node_indices).tolist() values = [1] * len(links_df) # Default weight of 1 for each link return sources, targets, values, unique_nodes # Generate the Sankey diagram inputs sources, targets, values, nodes = generate_links_and_nodes(df) # Create the Sankey diagram fig = go.Figure(data=[go.Sankey( node=dict( pad=25, thickness=70, line=dict(color="black", width=0.5), label=nodes # Only sub-tasks are shown ), link=dict( source=sources, target=targets, value=values ) )]) Sample data of query results . these are the results of my database when df_q10 = pd.read_sql_query(query_q10, conn) df_q3 = pd.read_sql_query(query_q3, conn) df_q11 = pd.read_sql_query(query_q11, conn) taking place q3 0 T4.2 1 T4.2, T4.3, T4.4 2 T2.3 3 T2.2 4 T6.3 5 T6.3 6 T6.3 7 T4.1, T4.2 8 T1.3 9 T1.2 10 T1.3 11 T1.3 12 T7.3 13 T2.3 14 T2.1 q10 0 1 2 3 4 T6.2 5 T6.2 6 7 T1.1, T3.1, T3.2, T4.4, T5.1 8 9 10 11 12 T7.1 13 T2.1, T2.2, T2.4, T3.2 14 q11 0 1 T1.1, T1.3, T3.1, T3.2 2 3 4 5 6 7 T1.1, T1.3, T3.1, T3.2 8 9 10 11 12 T7.2 13 14
To eliminate the intermediary nodes you need to identify nodes acting as unnecessary passthroughs and bypassing them to create direct links between relevant tasks. So, in my example , intermediary nodes from the q3 column are identified as those that connect q10, the starting tasks, to q11, the ending tasks, and that add no context or relationships. I flag these as intermediary and the links passing through them are replaced by direct connections between the corresponding q10 and q11 nodes. I post the necessary addition to your code as well as plots for your way (with intermediary links) and without: import pandas as pd import plotly.graph_objects as go data = { "q10": ["A, B", "C, D", "E, F"], "q3": ["X", "Y, X", "Z"], "q11": ["G, H", "I, J", "K"] } df = pd.DataFrame(data) def generate_links_and_nodes(dataframe, remove_intermediates=True): cleaned_links = [] for _, row in dataframe.iterrows(): q10_tasks = set(row['q10'].split(', ')) q3_tasks = set(row['q3'].split(', ')) q11_tasks = set(row['q11'].split(', ')) for q10 in q10_tasks: for q3 in q3_tasks: cleaned_links.append((q10, q3)) for q3 in q3_tasks: for q11 in q11_tasks: cleaned_links.append((q3, q11)) if remove_intermediates: direct_links = [] intermediates = set(task for _, row in dataframe.iterrows() for task in row['q3'].split(', ')) for source, target in cleaned_links: if source in intermediates and target in intermediates: continue if source in intermediates: for q10_task in row['q10'].split(', '): for q11_task in row['q11'].split(', '): direct_links.append((q10_task.strip(), q11_task.strip())) else: direct_links.append((source, target)) cleaned_links = direct_links links_df = pd.DataFrame(cleaned_links, columns=["source", "target"]) unique_nodes = sorted(set(pd.concat([links_df['source'], links_df['target']]))) node_indices = {node: i for i, node in enumerate(unique_nodes)} sources = links_df['source'].map(node_indices).tolist() targets = links_df['target'].map(node_indices).tolist() values = [1] * len(links_df) return sources, targets, values, unique_nodes sources_with, targets_with, values_with, nodes_with = generate_links_and_nodes(df, remove_intermediates=False) sources_without, targets_without, values_without, nodes_without = generate_links_and_nodes(df, remove_intermediates=True) fig_with = go.Figure(data=[go.Sankey( node=dict( pad=25, thickness=20, line=dict(color="black", width=0.5), label=nodes_with ), link=dict( source=sources_with, target=targets_with, value=values_with ) )]) fig_with.update_layout(title_text="With Intermediate Nodes", font_size=10) fig_with.show() fig_without = go.Figure(data=[go.Sankey( node=dict( pad=25, thickness=20, line=dict(color="black", width=0.5), label=nodes_without ), link=dict( source=sources_without, target=targets_without, value=values_without ) )]) fig_without.update_layout(title_text="Without Intermediate Nodes", font_size=10) fig_without.show() Which gives and Edit: With your posted data This, I think is applicatble to your data: import pandas as pd import plotly.graph_objects as go data = { "q10": ["", "", "", "", "T6.2", "T6.2", "", "T1.1, T3.1, T3.2, T4.4, T5.1", "", "", "", "", "T7.1", "T2.1, T2.2, T2.4, T3.2", ""], "q3": ["T4.2", "T4.2, T4.3, T4.4", "T2.3", "T2.2", "T6.3", "T6.3", "T6.3", "T4.1, T4.2", "T1.3", "T1.2", "T1.3", "T1.3", "T7.3", "T2.3", "T2.1"], "q11": ["", "T1.1, T1.3, T3.1, T3.2", "", "", "", "", "", "T1.1, T1.3, T3.1, T3.2", "", "", "", "", "T7.2", "", ""] } df = pd.DataFrame(data) def generate_links_and_nodes(dataframe, remove_intermediates=True): cleaned_links = 
[] for _, row in dataframe.iterrows(): q10_tasks = set(row['q10'].split(', ')) if row['q10'] else set() q3_tasks = set(row['q3'].split(', ')) if row['q3'] else set() q11_tasks = set(row['q11'].split(', ')) if row['q11'] else set() for q10 in q10_tasks: for q3 in q3_tasks: cleaned_links.append((q10, q3)) for q3 in q3_tasks: for q11 in q11_tasks: cleaned_links.append((q3, q11)) if remove_intermediates: direct_links = [] intermediates = set(task for _, row in dataframe.iterrows() for task in row['q3'].split(', ') if row['q3']) for source, target in cleaned_links: if source in intermediates and target in intermediates: continue if source in intermediates: for q10_task in dataframe[dataframe['q3'].str.contains(source, na=False)]['q10']: for q11_task in dataframe[dataframe['q3'].str.contains(source, na=False)]['q11']: if q10_task and q11_task: for t10 in q10_task.split(', '): for t11 in q11_task.split(', '): direct_links.append((t10.strip(), t11.strip())) else: direct_links.append((source, target)) cleaned_links = direct_links links_df = pd.DataFrame(cleaned_links, columns=["source", "target"]) unique_nodes = sorted(set(pd.concat([links_df['source'], links_df['target']]))) node_indices = {node: i for i, node in enumerate(unique_nodes)} sources = links_df['source'].map(node_indices).tolist() targets = links_df['target'].map(node_indices).tolist() values = [1] * len(links_df) return sources, targets, values, unique_nodes sources_with, targets_with, values_with, nodes_with = generate_links_and_nodes(df, remove_intermediates=False) sources_without, targets_without, values_without, nodes_without = generate_links_and_nodes(df, remove_intermediates=True) fig_with = go.Figure(data=[go.Sankey( node=dict( pad=25, thickness=20, line=dict(color="black", width=0.5), label=nodes_with ), link=dict( source=sources_with, target=targets_with, value=values_with ) )]) fig_with.update_layout(title_text="With Intermediate Nodes", font_size=10) fig_with.show() fig_without = go.Figure(data=[go.Sankey( node=dict( pad=25, thickness=20, line=dict(color="black", width=0.5), label=nodes_without ), link=dict( source=sources_without, target=targets_without, value=values_without ) )]) fig_without.update_layout(title_text="Without Intermediate Nodes", font_size=10) fig_without.show() which gives:
3
2
79,210,901
2024-11-21
https://stackoverflow.com/questions/79210901/methods-to-reduce-a-tensor-embedding-to-x-y-z-coordinates
I have a model from hugging face and would like to use it for performing word comparisons. At first I thought of performing a series of similarity calculations across words of interest but quickly I found that this problem would exponentially grow as the number of words expanded as well. A solution I thought about is plotting a skip gram where all words result on a 2 dimensional plane and then can simply perform clustering on the coordinates to find similar words. The problem here is that this requires a bert model and a low embedding layer that can be mapped. As I have a pretrained model, I don't know if I can create a skip gram with from it. I was hoping to calculate the embedding and through the use of a transformation, convert the embedding into coordinates that I can plot myself. I though do not know if this is possible or reasonable I tried to do it though with the code below from sklearn.manifold import TSNE from transformers import AutoModel, AutoTokenizer # target word word = ["Slartibartfast"] # model setup model = 'Alibaba-NLP/gte-multilingual-base' tokenizer = AutoTokenizer.from_pretrained(model) auto_model = AutoModel.from_pretrained(model, trust_remote_code=True) # embbed and calculate batch_dict = self.tokenizer(text_list, max_length=8192, padding=True, truncation=True, return_tensors='pt') result = auto_model(**batch_dict) embeddings = outputs.last_hidden_state[:, 0][:768] # transform to coordinates clayer = TSNE(n_components=3, learning_rate='auto', init='random', perplexity=50) embedding_numpy = embeddings.detach().numpy() clayer.fit_transform(embedding_numpy) # crashes here saying perplexity must be less than n_samples
After more thorough reading, it was brought to my attention that it would be impossible to use TSNE in the manner I was hoping, as the dimensions generated by TSNE are only representative of the training data. Further fitting with new data, or transformation of data not within the training set, would result in outputs that are not on a similar range and thus not comparable. I found a replacement for TSNE which is called umap. umap is also for dimension reduction, but a fitted model can be reused to transform new data onto the same range. I will explore umap and see if it will work for what I need.
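For reference, a minimal sketch of the fit-once / transform-later workflow described above, assuming the umap-learn package and a (n_samples, 768) array of embeddings; the sample data and variable names are illustrative only, not from the original post.

import numpy as np
import umap  # pip install umap-learn

# embeddings for an initial reference vocabulary, shape (n_samples, 768)
train_embeddings = np.random.rand(200, 768).astype(np.float32)

# fit the reducer once on the reference embeddings
reducer = umap.UMAP(n_components=3, n_neighbors=15, random_state=42)
train_coords = reducer.fit_transform(train_embeddings)

# later, project new word embeddings into the *same* 3D space
new_embeddings = np.random.rand(5, 768).astype(np.float32)
new_coords = reducer.transform(new_embeddings)
print(train_coords.shape, new_coords.shape)  # (200, 3) (5, 3)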
2
0
79,213,461
2024-11-22
https://stackoverflow.com/questions/79213461/what-makes-printnp-half500-2-differs-from-printfnp-half500-2
everyone. I've been learning about floating-point truncation errors recently, but I found that print(np.half(500.2)) and print(f"{np.half(500.2)}") yield different results. Here are the logs I got in IPython. In [11]: np.half(500.2) Out[11]: np.float16(500.2) In [12]: print(np.half(500.2)) 500.2 In [13]: print(f"{np.half(500.2)}") 500.25 I use half.hpp in C++ to compare results with numpy. It seems that 500.2 should be truncated into 500.25 rather than staying as itself. In binary formats, 500.0 is 0b0_01000_1111010000. So the next float16 number should be 0b_01000_1111010001, which is 500.25 in decimal format. So what makes print(np.half(500.2)) differ from print(f"{np.half(500.2)}")? Hope to see your answers.
print calls __str__, while an f-string calls __format__. __format__ with an empty format spec is usually equivalent to __str__, but not all types implement it that way, and numpy.half is one of the types that implements different behavior: In [1]: import numpy In [2]: x = numpy.half(500.2) In [3]: str(x) Out[3]: '500.2' In [4]: format(x, '') Out[4]: '500.25'
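To connect this back to the truncation question: the stored half-precision value really is 500.25, and str() simply prints the shortest decimal that still round-trips to the same float16. A small sketch; the round-trip explanation is my reading of NumPy's shortest-repr printing, not something stated in the answer.

import numpy as np

x = np.half(500.2)
print(float(x))       # 500.25 -> the value actually stored in the float16
print(str(x))         # 500.2  -> shortest decimal that maps back to the same float16
print(format(x, ''))  # 500.25 -> goes through __format__, as shown in the answer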
1
4
79,208,817
2024-11-20
https://stackoverflow.com/questions/79208817/get-a-single-series-of-classes-instead-of-one-series-for-each-class-with-pandas
I have a DataFrame with 3 columns of zeroes and ones corresponding to 3 different classes. I want to get a single series of zeroes, ones, and twos depending on the class of the entry (0 for the first class, 1 for the second one and 2 for the third one): >>> results.head() HOME_WINS DRAW AWAY_WINS ID 0 0 0 1 1 0 1 0 2 0 0 1 3 1 0 0 4 0 1 0 What I want: >>> results.head() SCORE ID 0 2 1 1 2 2 3 0 4 1
Multiply by a dictionary, sum and convert to_frame: d = {'HOME_WINS': 0, 'DRAW': 1, 'AWAY_WINS': 2} out = df.mul(d).sum(axis=1).to_frame(name='SCORE') Or using a dot product: d = {'HOME_WINS': 0, 'DRAW': 1, 'AWAY_WINS': 2} out = df.dot(pd.Series(d)).to_frame(name='SCORE') Or, if there is exactly one 1 per row, with from_dummies: d = {'HOME_WINS': 0, 'DRAW': 1, 'AWAY_WINS': 2} out = pd.from_dummies(df)[''].map(d).to_frame(name='SCORE') Output: SCORE ID 0 2 1 1 2 2 3 0 4 1
3
5
79,212,904
2024-11-21
https://stackoverflow.com/questions/79212904/why-is-tz-naive-timestamp-converted-to-integer-while-tz-aware-is-kept-as-timesta
Understandable and expected (tz-aware): import datetime import numpy as np import pandas as pd aware = pd.DatetimeIndex(["2024-11-21", "2024-11-21 12:00"], tz="UTC") eod = datetime.datetime.combine(aware[-1].date(), datetime.time.max, aware.tz) aware, eod, np.concat([aware, [eod]]) returns (DatetimeIndex(['2024-11-21 00:00:00+00:00', '2024-11-21 12:00:00+00:00'], dtype='datetime64[ns, UTC]', freq=None), datetime.datetime(2024, 11, 21, 23, 59, 59, 999999, tzinfo=datetime.timezone.utc), array([Timestamp('2024-11-21 00:00:00+0000', tz='UTC'), Timestamp('2024-11-21 12:00:00+0000', tz='UTC'), datetime.datetime(2024, 11, 21, 23, 59, 59, 999999, tzinfo=datetime.timezone.utc)], dtype=object)) note Timestamps (and a datetime) in the return value of np.concat. Unexpected (tz-naive): naive = pd.DatetimeIndex(["2024-11-21", "2024-11-21 12:00"]) eod = datetime.datetime.combine(naive[-1].date(), datetime.time.max, aware.tz) naive, eod, np.concat([naive, [eod]]) returns (DatetimeIndex(['2024-11-21 00:00:00', '2024-11-21 12:00:00'], dtype='datetime64[ns]', freq=None), datetime.datetime(2024, 11, 21, 23, 59, 59, 999999), array([1732147200000000000, 1732190400000000000, datetime.datetime(2024, 11, 21, 23, 59, 59, 999999)], dtype=object)) note intergers (and a datetime) in the return value of np.concat. why do I get integers in the concatenated array for a tz-naive index? how do I avoid it? I.e., how do I append EOD to a tz-naive DatetimeIndex? PS. Interestingly enough, at the numpy level the indexes are identical: np.testing.assert_array_equal(aware.values, naive.values)
From Data type promotion in NumPy When mixing two different data types, NumPy has to determine the appropriate dtype for the result of the operation. This step is referred to as promotion or finding the common dtype. In typical cases, the user does not need to worry about the details of promotion, since the promotion step usually ensures that the result will either match or exceed the precision of the input. np.concat() accepts a casting keyword argument (casting="same_kind" default). If using casting='no' fails naive_no = np.concat([naive, [eod]], casting='no') TypeError: Cannot cast array data from dtype('<M8[ns]') to dtype('O') according to the rule 'no' See Array-protocol type strings. In both cases the type is object naive_sk = np.concat([naive, [eod]], casting='same_kind') print(naive_sk.dtype, naive_sk) Result object [1732147200000000000 1732190400000000000 datetime.datetime(2024, 11, 21, 23, 59, 59, 999999, tzinfo=<DstTzInfo 'America/New_York' LMT-1 day, 19:04:00 STD>)] python 3.9 pandas 2.2.2
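For the practical part of the question ("how do I append EOD to a tz-naive DatetimeIndex"), one option the answer does not spell out is to stay inside pandas rather than NumPy, so no object-dtype promotion happens at all. A hedged sketch using a naive eod; DatetimeIndex.append and pd.to_datetime are standard pandas API, the variable names follow the question.

import datetime
import pandas as pd

naive = pd.DatetimeIndex(["2024-11-21", "2024-11-21 12:00"])
eod = datetime.datetime.combine(naive[-1].date(), datetime.time.max)

# wrap eod in a datetime-typed index so the concatenation stays datetime64
extended = naive.append(pd.to_datetime([eod]))
print(extended.dtype)   # datetime64 dtype is preserved (no object array of integers)
print(extended[-1])     # 2024-11-21 23:59:59.999999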
3
1
79,211,584
2024-11-21
https://stackoverflow.com/questions/79211584/no-solution-found-in-gekko-when-modelling-a-tank-level
I'm trying to simulate the level of a tank that has two inlet flows and one outlet. The idea is that after this will be used in a control problem. However, I can't get it to work with gekko but it did work with scypy odeint. #set time tmax=60*6 i = 0 t = np.linspace(i,tmax,int(tmax/10)+1) # minutes #Assign values d2_h0 = 13.377 #initial height m inf1 = np.array([32.6354599 , 32.41882451, 32.08460871, 32.11487071, 32.71570587, 32.59923999, 31.66669464, 30.11240896, 29.31222725, 29.35761197, 29.62183634, 29.67505582, 29.24057325, 29.13853518, 29.48321724, 29.61703173, 29.49874306, 28.99679947, 29.24003156, 29.40070153, 29.70169004, 29.2913545 , 29.47371801, 29.91566467, 31.31636302, 31.6771698 , 31.65268326, 31.06637255, 31.39147377, 31.88083331, 32.59566625, 32.70952861, 32.78859075, 32.87391027, 32.97800064, 32.99872208, 33.02946218]) inf2 = np.array([66.91262309, 67.16797638, 67.77143351, 66.85663605, 67.43820954, 67.96041107, 68.7215627 , 68.91900635, 69.20062764, 68.29413096, 68.56461334, 67.67184957, 68.84806824, 67.61451467, 69.58069102, 71.284935 , 75.60562642, 74.83906555, 74.06419373, 71.20425161, 69.60981496, 69.45553589, 70.35860697, 71.17754873, 72.16390737, 72.0528005 , 72.49635569, 73.09021505, 72.7195816 , 71.9975001 , 70.13828532, 71.11123403, 72.16157023, 73.27675883, 71.9024353 , 71.17524719, 70.34394582]) eff = np.array(([110.97348786, 108.6726354 , 109.4272232 , 110.57080078, 114.20512136, 114.84948222, 113.96173604, 110.81165822, 110.4366506 , 111.61210887, 112.75804393, 111.23046112, 108.35852305, 108.21724955, 110.47168223, 112.10458374, 109.28511048, 107.31727092, 108.55026245, 111.30213165, 111.88119253, 110.62695313, 111.76373037, 115.09386699, 115.75547282, 113.47773488, 107.95795441, 106.46175893, 105.83562978, 109.9902064 , 110.59869131, 110.49962108, 109.35623678, 108.35690053, 107.0867513 , 104.34462484, 103.1198527 ])) from gekko import GEKKO #Create gekko model m = GEKKO(remote=False) m.time = t qin1 = m.Param(value=inf1) qin2 = m.Param(value=inf2) Ac=m.Const(value = 226.98) # m2,Cross section Area qout = m.Param(value=eff) h1 = m.Var(value=d2_h0, lb=0) m.Equation(h1.dt() == (qin1 + qin2 - qout)/Ac) m.options.IMODE = 4 m.options.SOLVER = 3 # Solve the model m.solve() What I get is: Exception: @error: Solution Not Found If I remove the lowerbound it is able to find a solution but it is not correct. Below I have a comparison with the real value and what I get with odeint. The results I get:. What am I doing wrong?
There is no solution because the lower bound on tank level height is set to 0 with: h1 = m.Var(value=d2_h0, lb=0) When this constraint is removed, it is solved successfully but with an unrealistic solution with a negative height. Overflow or complete drainage can be included in the model by adding slack variables: # slack variables s_in = m.Var(value=0,lb=0) s_out = m.Var(value=0,lb=0) # slack variable m.Minimize(1e-3*(s_in+s_out)) They are normally zero but can be used by the optimizer to maintain feasibility for overflow or complete drainage. Here is the complete code with the slack variables and an upper and lower limit on tank fluid height. import numpy as np from gekko import GEKKO import matplotlib.pyplot as plt #set time tmax=60*6 i = 0 t = np.linspace(i,tmax,int(tmax/10)+1) # minutes #Assign values d2_h0 = 13.377 #initial height m inf1 = np.array([32.6354599 , 32.41882451, 32.08460871, 32.11487071, 32.71570587, 32.59923999, 31.66669464, 30.11240896, 29.31222725, 29.35761197, 29.62183634, 29.67505582, 29.24057325, 29.13853518, 29.48321724, 29.61703173, 29.49874306, 28.99679947, 29.24003156, 29.40070153, 29.70169004, 29.2913545 , 29.47371801, 29.91566467, 31.31636302, 31.6771698 , 31.65268326, 31.06637255, 31.39147377, 31.88083331, 32.59566625, 32.70952861, 32.78859075, 32.87391027, 32.97800064, 32.99872208, 33.02946218]) inf2 = np.array([66.91262309, 67.16797638, 67.77143351, 66.85663605, 67.43820954, 67.96041107, 68.7215627 , 68.91900635, 69.20062764, 68.29413096, 68.56461334, 67.67184957, 68.84806824, 67.61451467, 69.58069102, 71.284935 , 75.60562642, 74.83906555, 74.06419373, 71.20425161, 69.60981496, 69.45553589, 70.35860697, 71.17754873, 72.16390737, 72.0528005 , 72.49635569, 73.09021505, 72.7195816 , 71.9975001 , 70.13828532, 71.11123403, 72.16157023, 73.27675883, 71.9024353 , 71.17524719, 70.34394582]) eff = np.array(([110.97348786, 108.6726354 , 109.4272232 , 110.57080078, 114.20512136, 114.84948222, 113.96173604, 110.81165822, 110.4366506 , 111.61210887, 112.75804393, 111.23046112, 108.35852305, 108.21724955, 110.47168223, 112.10458374, 109.28511048, 107.31727092, 108.55026245, 111.30213165, 111.88119253, 110.62695313, 111.76373037, 115.09386699, 115.75547282, 113.47773488, 107.95795441, 106.46175893, 105.83562978, 109.9902064 , 110.59869131, 110.49962108, 109.35623678, 108.35690053, 107.0867513 , 104.34462484, 103.1198527 ])) from gekko import GEKKO #Create gekko model m = GEKKO(remote=False) m.time = t qin1 = m.Param(value=inf1) qin2 = m.Param(value=inf2) Ac=m.Const(value = 226.98) # m2,Cross section Area qout = m.Param(value=eff) h1 = m.Var(value=d2_h0, lb=0, ub=20) # slack variables s_in = m.Var(value=0,lb=0) s_out = m.Var(value=0,lb=0) # slack variable m.Minimize(1e-3*(s_in+s_out)) m.Equation(h1.dt() == (qin1 + qin2 - qout + s_in - s_out)/Ac) m.options.IMODE = 6 m.options.SOLVER = 3 # Solve the model m.solve() plt.figure(figsize=(6,3.5)) plt.subplot(2,1,1) plt.plot(m.time,h1,'k-',label='height') plt.grid(); plt.legend(); plt.ylabel('height (m)') plt.subplot(2,1,2) plt.plot(m.time,qin1,'b-',label=r'$q_{in,1}$') plt.plot(m.time,qin2,'k:',label=r'$q_{in,2}$') plt.plot(m.time,qout,'r--',label=r'$q_{out}$') plt.grid(); plt.legend(); plt.ylabel(r'flow ($m^3$/min)') plt.xlabel('Time (min)') plt.tight_layout() plt.savefig('tank.png',dpi=300) plt.show()
2
0
79,212,853
2024-11-21
https://stackoverflow.com/questions/79212853/swig-hello-world-importerror-dynamic-module-does-not-define-module-export-func
This is supposed to be the absolute minimum Hello World using SWIG, C, and setuptools. But the following exception is raised when the module is imported: >>> import hello Traceback (most recent call last): File "<python-input-0>", line 1, in <module> import hello ImportError: dynamic module does not define module export function (PyInit_hello) This is the directory structure: README.md pyproject.toml src src/hc src/hc/hello.c src/hc/hello.i Here is the pyproject.toml [build-system] requires = ["setuptools>=75.6"] build-backend = "setuptools.build_meta" [project] name = "helloc" version = "0.0.1" authors = [ { name = "Foo" } ] description = "Hello world SWIG C" readme = "README.md" requires-python = ">=3.13" classifiers = [ "Development Status :: 1 - Planning", "License :: Public Domain", "Natural Language :: English", "Operating System :: OS Independent", "Programming Language :: Python", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.13" ] [tool.setuptools] ext-modules = [ { name = "hello", sources = ["src/hc/hello.c", "src/hc/hello.i"] } ] Here is hello.c #include <stdio.h> void say_hello() { printf("Hello, World!\n"); } And here is the interface file: %module hello %{ void say_hello(); %} void say_hello();
Your extension module should be named _hello (notice the leading "_"). pyproject.toml: # ... [tool.setuptools] ext-modules = [ { name = "_hello", sources = ["src/hc/hello.c", "src/hc/hello.i"] } ] Check: [SO]: c program SWIG to python gives 'ImportError: dynamic module does not define init function' (@CristiFati's answer) contains the SWIG related details [SO]: How to solve Python-C-API error "This is an issue with the package mentioned above, not pip."? (@CristiFati's answer) contains info about building with SetupTools
2
2
79,208,862
2024-11-20
https://stackoverflow.com/questions/79208862/in-a-jupyter-notebook-open-in-vs-code-how-can-i-quickly-navigate-to-the-current
This feels like a useful feature but haven't been able to find a setting / extension that offers this capability.
Just figured it out - you can use the "Go To" button in the top toolbar in vscode (pic below). You'll need the jupyter extension installed.
2
0
79,212,165
2024-11-21
https://stackoverflow.com/questions/79212165/how-does-pandas-series-nbytes-work-for-strings-results-dont-seem-to-match-expe
The help doc for pandas.Series.nbytes shows the following example: s = pd.Series(['Ant', 'Bear', 'Cow']) s 0 Ant 1 Bear 2 Cow dtype: object s.nbytes 24 << end example >> How is that 24 bytes? I tried looking at three different encodings, none of which seems to yield that total. print(s.str.encode('utf-8').str.len().sum()) print(s.str.encode('utf-16').str.len().sum()) print(s.str.encode('ascii').str.len().sum()) 10 26 10
Pandas nbytes does not refer to the bytes required to store the string data encoded in specific formats like UTF-8, UTF-16, or ASCII. It refers to the total number of bytes consumed by the underlying array of the Series data in memory. Pandas stores a NumPy array of pointers to these Python objects when using the object dtype. On a 64-bit system, each pointer/reference takes 8 bytes. 3 × 8 bytes = 24 bytes. Link: nbyte source code Link: ndarray documentation
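If the goal is the actual memory footprint of the strings rather than the pointer array, a small hedged sketch of the usual alternative (exact byte counts will vary by platform and Python version):

import pandas as pd

s = pd.Series(['Ant', 'Bear', 'Cow'])         # object dtype: an array of 3 pointers
print(s.nbytes)                                # 24 -> 3 pointers * 8 bytes on 64-bit
print(s.memory_usage(deep=True, index=False))  # larger: includes the Python str objects themselves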
1
3
79,206,684
2024-11-20
https://stackoverflow.com/questions/79206684/how-to-mark-repeated-entries-as-true-starting-from-the-second-occurrence-using-n
Problem I have a NumPy array and need to identify repeated elements, marking the second occurrence and beyond as True, while keeping the first occurrence as False. For example, given the following array: np.random.seed(100) a = np.random.randint(0, 5, 10) # Output: [0 0 3 0 2 4 2 2 2 2] I want to get the following output: [False True False True False False True True True True] How can I achieve this using NumPy functions only, without using any loops or extra libraries? What did you try and what were you expecting? I was able to get it working with a loop, but I wanted to solve it using only NumPy functions. I tried implementing np.cumsum with masks, but I couldn’t make much progress. Here's the solution I came up with using one loop: np.random.seed(100) a = np.random.randint(0, 5, 10) print(a) uniques, first_indices = np.unique(a, return_index=True) all_occurrences = np.zeros_like(a, dtype=bool) for i in range(len(a)): all_occurrences[i] = np.any(a[:i] == a[i]) all_occurrences[first_indices] = False print(all_occurrences)
To reveal that the problem is actually less complicated than it may seem at first glance, the question could be rephrased as follows: Mark all first occurrences of values with False. This leads to a bit of a simplified version of EuanG's answer¹: def find_repeated(a): mask = np.ones_like(a, dtype=bool) mask[np.unique(a, return_index=True)[-1]] = False return mask Steps: (1) Initialize the result mask as an all-True array of appropriate shape. (2) Find the indices of the first occurrences in the given array a. (3) Only set these indices to False in the result mask. To make the code also work with n-dimensional arrays, we need to add an extra step of unraveling the result of np.unique(), as it returns the indices into the flattened given array a: def find_repeated(a): mask = np.ones_like(a, dtype=bool) mask[np.unravel_index(np.unique(a, return_index=True)[-1], a.shape)] = False return mask In either case: We can directly use the indices (np.unique(…, return_index=True)[-1]) for indexing the mask array. No need for catching the empty-array case here, as it is handled implicitly. ¹) Yes, I find EuanG's answer perfectly acceptable as well. No, I did not downvote it.
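A quick check of the first version against the sample array and expected output from the question:

import numpy as np

def find_repeated(a):
    # mark everything as repeated, then clear the first occurrence of each value
    mask = np.ones_like(a, dtype=bool)
    mask[np.unique(a, return_index=True)[-1]] = False
    return mask

np.random.seed(100)
a = np.random.randint(0, 5, 10)
print(a)                 # [0 0 3 0 2 4 2 2 2 2]
print(find_repeated(a))  # [False  True False  True False False  True  True  True  True]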
4
3
79,211,816
2024-11-21
https://stackoverflow.com/questions/79211816/value-based-partial-slicing-with-non-existing-keys-is-now-deprecated
When running the snippet of example code below with pandas 2.2.3, I get an error saying KeyError: 'D' index = pd.MultiIndex.from_tuples( [('A', 1), ('A', 2), ('A', 3), ('B', 1), ('B', 2), ('B', 2)], names=['letter', 'number'] ) df = pd.DataFrame({'value': [10, 20, 30, 40, 50, 60]}, index=index) idx = pd.IndexSlice result = df.loc[idx[['A', 'D'], [1,2]], :] Does pandas offer any alternatives for searching a multi-index with values that don't exist? If I run the same code using pandas 1.5.3, I get the expected value: value letter number A 1 10 2 20
When you run this code with pandas 1.5.3 you should in fact receive a FutureWarning: FutureWarning: The behavior of indexing on a MultiIndex with a nested sequence of labels is deprecated and will change in a future version. series.loc[label, sequence] will raise if any members of 'sequence' or not present in the index's second level. To retain the old behavior, use series.index.isin(sequence, level=1) (Note that it should read: "are not present".) So, let's indeed use Index.isin to allow boolean indexing: m = (df.index.isin(['A', 'D'], level='letter') & df.index.isin([1, 2], level='number')) out = df.loc[m, :] Output: value letter number A 1 10 2 20 If you have many different conditions, you could consider creating a dictionary and use np.logical_and + reduce: dict_isin = { 'letter': ['A', 'D'], 'number': [1, 2] } m = np.logical_and.reduce( [df.index.isin(v, level=k) for k, v in dict_isin.items()] ) out2 = df.loc[m, :] out2.equals(out) # True
2
0
79,207,488
2024-11-20
https://stackoverflow.com/questions/79207488/how-do-i-represent-sided-boxplot-in-seaborn-when-boxplots-are-already-grouped
I'm seeking for a way to represent two sided box plot in seaborn. I have 2 indexes (index1 and index2) that I want to represent according to two information info1 (a number) and info2 (a letter) My issue is the boxplot I have are already grouped together, and I don't understand how manage the last dimension? for now I can just represent both indexes separately in two panels (top and middle) what I would like is the box plot of the two indexes being represented just aside Something like this for instance: I don't know if it is easily doable Here a short example: import numpy as np import seaborn as sns import pandas as pd import matplotlib.pyplot as plt fig = plt.figure() ax1 = plt.subplot(3, 1, 1) ax2 = plt.subplot(3, 1, 2) ax3 = plt.subplot(3, 1, 3) index1 = np.random.random((4,100,4)) intex2 = np.random.random((4,100,4))/2. info1 = np.zeros(shape=index1.shape,dtype='object') info1[0,:,:] = 'One' info1[1,:,:] = 'Two' info1[2,:,:] = 'Three' info1[3,:,:] = 'Four' info2 = np.zeros(shape=index1.shape, dtype='object') info2[:, :, 0] = 'A' info2[:, :, 1] = 'B' info2[:, :, 2] = 'C' info2[:, :, 3] = 'D' df = pd.DataFrame( columns=['Info1', 'Info2', 'Index1', 'Index2'], data=np.array( (info1.flatten(), info2.flatten(), index1.flatten(), intex2.flatten())).T) sns.boxplot(x='Info1', y='Index1', hue="Info2", data=df, ax=ax1) ax1.set_title('Index1') ax1.set_ylim([0, 1]) sns.boxplot(x='Info1', y='Index2', hue="Info2", data=df, ax=ax2) ax2.set_ylim([0, 1]) ax2.set_title('Index2') # sns.boxplot(x='Info1', y='Index1', hue="Info2", data=df, ax=ax3) ax3.set_ylim([0, 1]) ax3.set_title('Index1 + Index2') plt.show()
To create an additional grouping in Seaborn, the idea is to let Seaborn create a grid of subplots (called FacetGrid in Seaborn). The function sns.catplot(kind='box', ...) creates such a FacetGrid for boxplots. The col= parameter takes care of putting each Info1 in a separate subplot. To use Index1/Index2 as hue, both columns need to be merged (e.g. via pd.melt(...)). In total, the catplot allows 4 groupings: on x, hue, col and row. Here is how the code and plot could look like. Unfortunately, you can't force such an catplot into a previously created figure. import numpy as np import seaborn as sns import pandas as pd import matplotlib.pyplot as plt index1 = np.random.random((4, 100, 4)) intex2 = np.random.random((4, 100, 4)) / 2. info1 = np.zeros(shape=index1.shape, dtype='object') info1[0, :, :] = 'One' info1[1, :, :] = 'Two' info1[2, :, :] = 'Three' info1[3, :, :] = 'Four' info2 = np.zeros(shape=index1.shape, dtype='object') info2[:, :, 0] = 'A' info2[:, :, 1] = 'B' info2[:, :, 2] = 'C' info2[:, :, 3] = 'D' df = pd.DataFrame( columns=['Info1', 'Info2', 'Index1', 'Index2'], data=np.array( (info1.flatten(), info2.flatten(), index1.flatten(), intex2.flatten())).T) df_long = df.melt(id_vars=['Info1', 'Info2'], value_vars=['Index1', 'Index2'], var_name='Index') sns.catplot(data=df_long, kind='box', col='Info1', x='Info2', y='value', hue='Index', height=3, aspect=1) plt.show() To have more similar colors, the palette= parameter can set the colors of your choice. E.g. palette='tab20'. sns.catplot(data=df_long, kind='box', col='Info1', x='Info2', y='value', height=3, aspect=1, hue='Index', palette=['steelblue', 'lightblue']) To make things more colorful, you can loop through the boxes and color them individually. hue_order= makes sure Index1 will be at the left, and allows the legend to be omitted. The 'tab20' colormap (used as palette) contains alternating dark and light colors. g = sns.catplot(data=df_long, kind='box', col='Info1', x='Info2', y='value', height=3, aspect=1, hue='Index', hue_order=['Index1', 'Index2'], legend=False) for ax in g.axes.flat: num_hues = len(ax.containers) boxes_per_hue = len(ax.containers[0].boxes) colors = sns.color_palette('tab20', n_colors=num_hues * boxes_per_hue) for hue_id, boxes in enumerate(ax.containers): for box, color in zip(boxes.boxes, colors[hue_id::num_hues]): box.set_color(color)
2
2
79,205,654
2024-11-20
https://stackoverflow.com/questions/79205654/rounding-coordinates-to-centre-of-grid-square
I'm currently trying to collect some weather data from an API, and to reduce the number of API calls I'm trying to batch the calls into 0.5-degree longitude and latitude chunks due to its resolution. I had this code def round_to_grid_center(coordinate,grid_spacing=0.5 ): offset = grid_spacing / 2 return round(((coordinate - offset) / grid_spacing)) * grid_spacing + offset but this function rounded values at 0.5 down to 0.25 instead of up to 0.75, so I added this fix. It works for me, but I'm sure there is a better, more efficient method to round the coordinates to their closest grid square centre. Please let me know! def round_to_grid_center(coordinate,grid_spacing=0.5 ): #temp fix for round down error if ((coordinate % 0.5)== 0 and (coordinate % 1 )!= 0): offset = grid_spacing / 2 return round(((coordinate + 0.01 - offset) / grid_spacing)) * grid_spacing + offset else: offset = grid_spacing / 2 return round(((coordinate - offset) / grid_spacing)) * grid_spacing + offset
Given: round half to even The round() function uses "round half to even" rounding mode, as mentioned in the Built-in Types doc, section Numeric types (emphasis by me): Operation Result round(x[, n]) x rounded to n digits, rounding half to even. If n is omitted, it defaults to 0. "Rounding half to even" means that a floating point number with a decimal part of .5 is rounded towards the closest even integer rather than the closest greater integer. For example, both round(1.5) and round(2.5) will produce 2. In entry 5 (coordinate=38.5, grid_spacing=0.5, offset=0.25), you will consequently get round((38.5-0.25)/0.5)) = round(76.5) = 76, and thus a rounded-down result for the part of your calculation before spacing and offset correction. The Wikipedia article on rounding provides as motivation for this rounding mode: This function minimizes the expected error when summing over rounded figures, even when the inputs are mostly positive or mostly negative, provided they are neither mostly even nor mostly odd. If one needs further convincing that this rounding mode makes sense, one might want to have a look at the very detailed answer to this question ("rounding half to even" is called "banker's rounding" there). Required: round half up In any case, what you want is round half up instead. You can follow the answers to this question for potential solutions, e.g. rather than using round(x), you could use int(x + .5) or float(Decimal(x).to_integral_value(rounding=ROUND_HALF_UP)). Altogether, this could look as follows: from decimal import Decimal, ROUND_HALF_UP values = [(33.87, 151.21), (33.85, 151.22), ( 38.75, 149.85), (35.15, 150.85), (38.50, 149.87), (-38.50, 149.95)] def round_to_grid_center(coordinate, grid_spacing=0.5): offset = grid_spacing / 2 return round((coordinate - offset) / grid_spacing) * grid_spacing + offset def round_with_int(coordinate, grid_spacing=0.5): offset = grid_spacing / 2 return int(.5 + ((coordinate - offset) / grid_spacing)) * grid_spacing + offset def round_with_decimal(coordinate, grid_spacing=0.5): offset = grid_spacing / 2 return float(Decimal((coordinate - offset) / grid_spacing).to_integral_value(rounding=ROUND_HALF_UP)) * grid_spacing + offset for round_current in [round_to_grid_center, round_with_int, round_with_decimal]: print(f"\n{round_current.__name__}():") for i, (v1, v2) in enumerate(values): print(f"{i+1}: {v1}β†’{round_current(v1)}, {v2}β†’{round_current(v2)}") Which prints: round_to_grid_center(): 1: 33.87β†’33.75, 151.21β†’151.25 2: 33.85β†’33.75, 151.22β†’151.25 3: 38.75β†’38.75, 149.85β†’149.75 4: 35.15β†’35.25, 150.85β†’150.75 5: 38.5β†’38.25, 149.87β†’149.75 6: -38.5β†’-38.75, 149.95β†’149.75 round_with_int(): 1: 33.87β†’33.75, 151.21β†’151.25 2: 33.85β†’33.75, 151.22β†’151.25 3: 38.75β†’38.75, 149.85β†’149.75 4: 35.15β†’35.25, 150.85β†’150.75 5: 38.5β†’38.75, 149.87β†’149.75 6: -38.5β†’-38.25, 149.95β†’149.75 round_with_decimal(): 1: 33.87β†’33.75, 151.21β†’151.25 2: 33.85β†’33.75, 151.22β†’151.25 3: 38.75β†’38.75, 149.85β†’149.75 4: 35.15β†’35.25, 150.85β†’150.75 5: 38.5β†’38.75, 149.87β†’149.75 6: -38.5β†’-38.75, 149.95β†’149.75 Note how the values differ for negative numbers though (which I included as entry 6): with int(), "up" means "towards positive infinity"; with Decimal, "up" means "away from zero". Last but not least – be aware of numerical imprecision in floating point representation and arithmetic: depending on the size of coordinate and the value of grid_spacing, round-off error may lead to unexpected results.
1
3
79,209,784
2024-11-21
https://stackoverflow.com/questions/79209784/conversationsummarybuffermemory-is-not-fully-defined-you-should-define-basec
I am attempting to use LangChain's ConversationSummaryBufferMemory and running into this error: pydantic.errors.PydanticUserError: `ConversationSummaryBufferMemory` is not fully defined; you should define `BaseCache`, then call `ConversationSummaryBufferMemory.model_rebuild()`. This is what my code looks like: memory = ConversationSummaryBufferMemory( llm=llm, input_key="input", output_key="output", max_token_limit=args.get("max_tokens", DEFAULT_MAX_TOKENS), memory_key="chat_history", ) I am using langchain==0.3.7. Has anyone else encountered this?
pydantic library has been updated. pip install pydantic==2.9.2 should help
2
3
79,209,425
2024-11-21
https://stackoverflow.com/questions/79209425/build-a-wheel-and-install-package-version-depending-on-os
I have several python packages that need to be installed on various os/environments. These packages have dependencies and some of them like Polars needs a different package depending on the OS, for example: polars-lts-cpu on MacOS (Darwin) and polars on all the other OS. I use setuptools to create a whl file, but the dependencies installed depend on the OS where the wheel file was created. Here is my code: import platform from setuptools import find_packages, setup setup( ... install_requires=["glob2>=0.7", "numpy>=1.26.4", "polars>=1.12.0" if platform.system() != "Darwin" else "polars-lts-cpu>=1.12.0"] ...) As mentioned above, this code installs the version of Polars according to the OS where the wheel file was created, not according to where the package will be installed. How can I fix this?
Use declarative environment markers as described in PEP 496 and PEP 508: install_requires=[ "polars>=1.12.0; platform_system!='Darwin'", "polars-lts-cpu>=1.12.0; platform_system=='Darwin'", ]
3
3
79,208,029
2024-11-20
https://stackoverflow.com/questions/79208029/how-can-i-fix-filenotfounderror-in-python-when-writing-to-a-file-in-a-onedrive-d
My OS is Windows, newest version and fully updated; I think the issue lies in the path or something to do with OneDrive. Using the code: file = "dataset.csv" with open(file, "w") as f: f.write(data) I get the error: FileNotFoundError: [Errno 2] No such file or directory: 'dataset.csv', but this is an error I have never had before... I have enabled long paths in the Windows registry and rebooted, and it still does not work. However, I have tried to run this code in different paths of my PC, and it does work if I run it in that path, or in a shorter path than: C:\Users\user\OneDrive - UniversityX XXXXXXXXXX XXXXXXXX\XXXXXXXXX, where X is also part of the folder name as it is my university's name. If I run it below that path, it just stops working and outputs that error every time I run the code in VSCode. Update: This does not only fail in VSCode; if I use MATLAB or any other program to save a file, it does not let me save the file either, as it says that the file cannot be found. Does anyone know how I can fix this, please?
I found out that it was a Windows Security issue in the end... I enabled controlled folder access. To disable this follow these steps: Settings > Update & Security (Windows 10) or Privacy & Security (Windows 11) > Windows Security > Virus & threat protection. Under Virus & threat protection settings, select Manage settings. Under Controlled folder access, select Manage Controlled folder access. Switch the Controlled folder access setting to Off. This option apparently blocks any program from creating new files in certain directories.
1
3
79,208,182
2024-11-20
https://stackoverflow.com/questions/79208182/segmentation-fault-when-executing-a-python-script-in-a-c-program
I need to execute some python and C at the same time. I tried using Python.h: #include <Python.h> int python_program(char* cwd) { char* python_file_path; FILE* fd; int run; python_file_path = malloc(sizeof(char) * (strlen(cwd) + strlen("src/query.py") + 1)); strcpy(python_file_path, cwd); strcat(python_file_path, "src/query.py"); fd = fopen(python_file_path, "r"); Py_Initialize(); run = PyRun_AnyFile(fd, "query.py"); //this part is where the bug occur i think Py_Finalize(); free(python_file_path); } int main(int argc, char *argv[]) { char cwd_buffer[64]; getcwd(cwd_buffer, sizeof(cwd_buffer)); python_program(cwd_buffer); return 0; } ...but there's an error with segmentation fault. 26057 segmentation fault (core dumped) ../spotify-viewer-cli I isolated the Python.h part and it's the problem. So how can I execute the python file in my C program?
Golden rule: error handling is not an option but a hard requirement in programming (pointed out by answers and comments). Failing to include it might work for a while, but almost certainly will come back and bite in the ass at a later time, and it will do it so hard that someone (unfortunately, often not the same person who wrote the faulty code) will spend much more time (than writing it in the 1st place) fixing subtle errors (or crashes). Also, reading the documentation for the used functions, might save precious time too, avoiding all kinds of errors generated by passing to them arguments based on some false assumptions. Same case here (Undefined Behavior): [Man7]: getcwd(3) doesn't end the path with a separator (/) Computed script path doesn't exist fopen fails (returns NULL) PyRun_AnyFile SegFaults I created a MCVE ([SO]: How to create a Minimal, Reproducible Example (reprex (mcve))), and also added some printf statements useful to identify the culprit (the preferred option would be to go step by step using a debugger (e.g.: [SourceWare]: GDB: The GNU Project Debugger)). dir00/code00.py #!/usr/bin/env python import os import sys def main(*argv): print(f"From Python - file: {os.path.abspath(__file__)}") if __name__ == "__main__": print( "Python {:s} {:03d}bit on {:s}\n".format( " ".join(elem.strip() for elem in sys.version.split("\n")), 64 if sys.maxsize > 0x100000000 else 32, sys.platform, ) ) rc = main(*sys.argv[1:]) #print("\nDone.\n") #sys.exit(rc) main00.c: #include <errno.h> #include <stdio.h> #include <unistd.h> #include <Python.h> #define PY_SCRIPT "code00.py" #define FULL_PY_SCRIPT "dir00/" PY_SCRIPT int runPyFile(const char *wd) { char *script; FILE *fp; int res; script = malloc(sizeof(char) * (strlen(wd) + strlen(FULL_PY_SCRIPT) + 2)); if (!script) { printf("malloc error: %d\n", errno); return -1; } strcpy(script, wd); strcat(script, "/"); // @TODO - cfati strcat(script, FULL_PY_SCRIPT); printf("script path: %s\n", script); if (access(script, F_OK)) { // Extra check printf("Script doesn't exist\n"); return -2; } fp = fopen(script, "r"); if (!fp) { printf("fopen error: %d\n", errno); free(script); return -3; } free(script); Py_Initialize(); res = PyRun_SimpleFile(fp, PY_SCRIPT); // Call this function directly (skip PyRun_AnyFile layer) if (res) { printf("PyRun_SimpleFile error\n"); } Py_Finalize(); fclose(fp); return res; } int main(int argc, char *argv[]) { char cwd[PATH_MAX]; if (!getcwd(cwd, sizeof(cwd))) { printf("getcwd error: %d\n", errno); return -1; } printf("cwd (check its end): %s\n", cwd); int res = runPyFile(cwd); if (res) { // Some extra handling (or exit function if it's more complex) } else { printf("Script ran fine\n"); } printf("\nDone.\n\n"); return res; } Output: [cfati@cfati-5510-0:/mnt/e/Work/Dev/StackExchange/StackOverflow/q079208182]> ~/sopr.sh ### Set shorter prompt to better fit when pasted in StackOverflow (or other) pages ### [064bit prompt]> tree . 
+-- dir00 Β¦ +-- code00.py +-- main00.c 1 directory, 2 files [064bit prompt]> [064bit prompt]> PY_VER="3.11" [064bit prompt]> gcc -fPIC -I/usr/include/python${PY_VER} -o test${PY_VER} -L/usr/lib/$(uname -m)-linux-gnu main00.c -lpython${PY_VER} [064bit prompt]> ls dir00 main00.c test3.11 [064bit prompt]> [064bit prompt]> ./test${PY_VER} cwd (check its end): /mnt/e/Work/Dev/StackExchange/StackOverflow/q079208182 script path: /mnt/e/Work/Dev/StackExchange/StackOverflow/q079208182/dir00/code00.py Python 3.11.3 (main, Apr 5 2023, 14:15:06) [GCC 9.4.0] 064bit on linux From Python - file: /mnt/e/Work/Dev/StackExchange/StackOverflow/q079208182/code00.py Script ran fine Done.
2
0
79,208,254
2024-11-20
https://stackoverflow.com/questions/79208254/how-to-specify-column-name-with-the-suffix-based-on-another-column-value
#Column X contains the suffix of one of V* columns. Need to put the value from V(X) in column Y. import pandas as pd import numpy as np # sample dataframes df = pd.DataFrame({ 'EMPLID': [12, 13, 14, 15, 16, 17, 18], 'V1': [2,3,4,50,6,7,8], 'V2': [3,3,3,3,3,3,3], 'V3': [7,15,8,9,10,11,12], 'X': [2,3,1,3,3,1,2] }) # Expected output: EMPLID V1 V2 V3 X Y 0 12 2 3 7 2 3 1 13 3 3 15 3 15 2 14 4 3 8 1 4 3 15 50 3 9 3 9 4 16 6 3 10 3 10 5 17 7 3 11 1 7 6 18 8 3 12 2 3 Example code I've tried (all got syntax error): df['Y'] = df[f"V + df['X']"] Any suggestion is appreciated. Thank you.
The canonical way would be to use indexing lookup, however since the column names and X values are a bit different, you first need to convert to string and prepend V: idx, cols = pd.factorize('V'+df['X'].astype(str)) df['Y'] = df.reindex(cols, axis=1).to_numpy()[np.arange(len(df)), idx] Output: EMPLID V1 V2 V3 X Y 0 12 2 3 7 2 3 1 13 3 3 15 3 15 2 14 4 3 8 1 4 3 15 50 3 9 3 9 4 16 6 3 10 3 10 5 17 7 3 11 1 7 6 18 8 3 12 2 3
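As a readability cross-check (much slower than the vectorized lookup above, but handy for validating results on a small frame), a row-wise apply can express the same idea. This is my addition, not part of the original answer; the data is the sample frame from the question.

import pandas as pd

df = pd.DataFrame({
    'EMPLID': [12, 13, 14, 15, 16, 17, 18],
    'V1': [2, 3, 4, 50, 6, 7, 8],
    'V2': [3, 3, 3, 3, 3, 3, 3],
    'V3': [7, 15, 8, 9, 10, 11, 12],
    'X': [2, 3, 1, 3, 3, 1, 2],
})

# row-wise lookup: build the column name "V<X>" per row and read that cell
df['Y'] = df.apply(lambda row: row[f"V{row['X']}"], axis=1)
print(df['Y'].tolist())  # [3, 15, 4, 9, 10, 7, 3]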
1
2
79,207,951
2024-11-20
https://stackoverflow.com/questions/79207951/python-wheel-entry-point-not-working-as-expected-on-windows
I'm tryring to setup a python wheel for my testrunner helper script to make it easyly acessible from everywhere on my Windows machine. Therefore I configure a console entry point in my setup.py. I can see the generated entry point in the entry_points.txt but if I'm trying to invoke my script I get the error message. No module named testrunner.__main__; 'testrunner' is a package and cannot be directly executed My installer folder tree looks like this setup.py README.md LICENSE.txt testrunner/ __init__.py testrunner.py templates/ testDesc.json The testrunner.py looks like this def runTests(): print("Hello World") #More not relevant code here if __name__ == '__main__': runTests() The setup.py content from setuptools import find_packages, setup from pathlib import Path # read the contents of your README file this_directory = Path(__file__).parent long_description = (this_directory / "README.md").read_text() setup( name='testrunner', version='0.3.0', packages=find_packages(include=['testrunner']), description='C/C++ test runner', long_description=long_description, long_description_content_type='text/markdown', author='Me', license="Proprietary", license_files = ('LICENSE.txt',), entry_points={ 'console_scripts': ['testrunner = testrunner:main'] }, classifiers=[ 'Topic :: Software Development :: Build Tools', 'Topic :: Software Development :: Compilers', 'Private :: Do Not Upload', 'Operating System :: Microsoft :: Windows', 'Intended Audience :: Developers', 'Intended Audience :: Science/Research ', 'Intended Audience :: Education ', 'Natural Language :: English', 'License :: Other/Proprietary License', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.7', 'Programming Language :: Python :: 3.8', 'Programming Language :: Python :: 3.8', 'Programming Language :: Python :: 3.9', 'Programming Language :: Python :: 3.10', ], install_requires=[], package_data={'':['templates\\*.json']}, ) And finally the __init__.py from .testrunner import main, runTests I build the wheel with the command: py -m pip wheel --no-deps -w dist . After installation with pip and checking the content in the site-packages directory executing py -m testrunner -h results in No module named testrunner.__main__; 'testrunner' is a package and cannot be directly executed
No module named testrunner.__main__; 'testrunner' is a package and cannot be directly executed You don't have __main__.py so you cannot do python -m testrunner. To fix the problem: echo "from .testrunner import runTests" >testrunner/__main__.py echo "runTests()" >>testrunner/__main__.py The second problem: from .testrunner import main, runTests You don't have main in testrunner.py. Code if __name__ == '__main__': doesn't create any main. You only have runTests to import. So change the import to from .testrunner import runTests and change your setup.py: entry_points={ 'console_scripts': ['testrunner = testrunner:runTests'] },
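For completeness, here is the file produced by the two echo commands above, written out as a module; the __main__ guard is a stylistic addition on my part, not something required for python -m testrunner to work.

# testrunner/__main__.py
from .testrunner import runTests

if __name__ == "__main__":
    runTests()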
3
1
79,207,871
2024-11-20
https://stackoverflow.com/questions/79207871/replace-last-two-row-values-in-a-grouped-polars-dataframe
I need to replace the last two values in the value column of a pl.DataFrame with zeros, whereby I need to group_by the symbol column. import polars as pl df = pl.DataFrame( {"symbol": [*["A"] * 4, *["B"] * 4], "value": range(8)} ) shape: (8, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ value β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═══════║ β”‚ A ┆ 0 β”‚ β”‚ A ┆ 1 β”‚ β”‚ A ┆ 2 β”‚ β”‚ A ┆ 3 β”‚ β”‚ B ┆ 4 β”‚ β”‚ B ┆ 5 β”‚ β”‚ B ┆ 6 β”‚ β”‚ B ┆ 7 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ Here is my expected outcome: shape: (8, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ value β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═══════║ β”‚ A ┆ 0 β”‚ β”‚ A ┆ 1 β”‚ β”‚ A ┆ 0 β”‚<-- replaced β”‚ A ┆ 0 β”‚<-- replaced β”‚ B ┆ 4 β”‚ β”‚ B ┆ 5 β”‚ β”‚ B ┆ 0 β”‚<-- replaced β”‚ B ┆ 0 β”‚<-- replaced β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
You can use pl.Expr.head() with pl.len() to get data without last two rows. pl.Expr.append() and pl.repeat() to pad it with zeroes. df.with_columns( pl.col.value.head(pl.len() - 2).append(pl.repeat(0, 2)) .over("symbol") ) shape: (8, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ value β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═══════║ β”‚ A ┆ 0 β”‚ β”‚ A ┆ 1 β”‚ β”‚ A ┆ 0 β”‚ β”‚ A ┆ 0 β”‚ β”‚ B ┆ 4 β”‚ β”‚ B ┆ 5 β”‚ β”‚ B ┆ 0 β”‚ β”‚ B ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ Alternatively, you can use pl.when() to create conditional column. pl.int_range() with pl.len() to affect only first n - 2 rows. df.with_columns( pl.when(pl.int_range(pl.len()) < pl.len() - 2).then(pl.col.value) .otherwise(0) .over("symbol") ) shape: (8, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ value β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═══════║ β”‚ A ┆ 0 β”‚ β”‚ A ┆ 1 β”‚ β”‚ A ┆ 0 β”‚ β”‚ A ┆ 0 β”‚ β”‚ B ┆ 4 β”‚ β”‚ B ┆ 5 β”‚ β”‚ B ┆ 0 β”‚ β”‚ B ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
3
1
79,207,102
2024-11-20
https://stackoverflow.com/questions/79207102/python-opencv-projectpoints-neutral-position-off-center
I want to draw 3D Positions in a webcam Image using OpenCV's projectPoints function. During testing I noticed, that I always have a certain offset from the real object. This is most obvious when trying to project the origin (0,0,0) to the image center. The image shape is (2988, 5312, 3), the big red dot is the image center at (1494, 2656) and the small red dot with the lines attached is the projection of the origin (0,0,0) with NO translation and NO rotation leading to (1476, 2732). The main question: Why is the projected point not in the middle of the image? I determined the projected point like so: origin_2d, jacobian = cv2.projectPoints( np.array([(0.0, 0.0, 0.0)]), np.array([(0.0, 0.0, 0.0)]), np.array([(0.0, 0.0, 0.0)]), mtx, dist, ) So the origin (0,0,0) should be projected to 2D with no translation and no rotation and thus appear in the image center, right? I obtained the camera mtx and dist by executing a camera calibration routine as described here OpenCV docs:Camera Calibration using the chessboard you see in the image. Is there something wrong with my calibration? Or am I misunderstanding something in the projection process? Thanks and best regards, Felix
The image shape is (2988, 5312, 3), the big red dot is the image center at (1494, 2656) You calibrated that camera, right? Then that's not the optical center. The optical center is in the camera matrix, along with the focal length. (1476, 2732) That looks more like it, although I'd have expected these numbers to also have digits after the decimal point.
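To see where the calibrated optical center actually is, it can be read straight out of the camera matrix; the 3x3 intrinsics layout below is standard OpenCV, and mtx is assumed to be the matrix returned by the questioner's cv2.calibrateCamera() run, so this is a sketch rather than a standalone script.

import numpy as np

# mtx, dist are assumed to come from the chessboard calibration, e.g.:
# ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, image_size, None, None)

# intrinsics layout: [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
fx, fy = mtx[0, 0], mtx[1, 1]   # focal lengths in pixels
cx, cy = mtx[0, 2], mtx[1, 2]   # principal point = the "optical center" referred to above
print(f"focal: ({fx:.1f}, {fy:.1f})  principal point: ({cx:.1f}, {cy:.1f})")
# for the calibration in the question this will generally not coincide with the image midpoint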
3
1
79,198,230
2024-11-17
https://stackoverflow.com/questions/79198230/django-dask-integration-usage-and-progress
About performance & best practice Note, the entire code for the question below is public on Github. Feel free to check out the project! https://github.com/b-long/moose-dj-uv/pull/3 I'm trying to workout a simple Django + Dask integration, where one view starts a long-running process and another view is able to check the status of that work. Later on, I might enhance this in a way that get_task_status (or some other Django view function) is able to return the output of the work. I'm using time.sleep(2) to intentionally mimic a long-running bit of work. Also, it's important to see the overall work status as "running". To that end, I'm also using a time.sleep() in my test, which feels very silly. Here's the view code: from uuid import uuid4 from django.http import JsonResponse from dask.distributed import Client import time # Initialize Dask client client = Client(n_workers=8, threads_per_worker=2) NUM_FAKE_TASKS = 25 # Dictionary to store futures with task_id as key task_futures = {} def long_running_process(work_list): def task_function(task): time.sleep(2) return task futures = [client.submit(task_function, task) for task in work_list] return futures def start_task(request): work_list = [] for t in range(NUM_FAKE_TASKS): task_id = str(uuid4()) # Generate a unique ID for the task work_list.append( {"address": f"foo--{t}@example.com", "message": f"Mail task: {task_id}"} ) futures = long_running_process(work_list) dask_task_id = futures[0].key # Use the key of the first future as the task ID # Store the futures in the dictionary with task_id as key task_futures[dask_task_id] = futures return JsonResponse({"task_id": dask_task_id}) def get_task_status(request, task_id): futures = task_futures.get(task_id) if futures: if not all(future.done() for future in futures): progress = 0 return JsonResponse({"status": "running", "progress": progress}) else: results = client.gather(futures, asynchronous=False) # Calculate progress, based on futures that are 'done' progress = int((sum(future.done() for future in futures) / len(futures)) * 100) return JsonResponse( { "task_id": task_id, "status": "completed", "progress": progress, "results": results, } ) else: return JsonResponse({"status": "error", "message": "Task not found"}) I've written a test, which completes in about 5.5 seconds: from django.test import Client from django.urls import reverse import time def test_immediate_response_with_dask(): client = Client() response = client.post(reverse("start_task_dask"), data={"data": "foo"}) assert response.status_code == 200 assert "task_id" in response.json() task_id = response.json()["task_id"] response2 = client.get(reverse("get_task_status_dask", kwargs={"task_id": task_id})) assert response2.status_code == 200 r2_status = response2.json()["status"] assert r2_status == "running" attempts = 0 max_attempts = 8 while attempts < max_attempts: time.sleep(1) try: response3 = client.get( reverse("get_task_status_dask", kwargs={"task_id": task_id}) ) assert response3.status_code == 200 r3_status = response3.json()["status"] r3_progress = response3.json()["progress"] assert r3_progress >= 99 assert r3_status == "completed" break # Exit the loop if successful except Exception: attempts += 1 if attempts == max_attempts: raise # Raise the last exception if all attempts failed My question is, is there a more performant way to implement this same API? What if NUM_FAKE_TASKS = 10000? Am I wasting cycles? Edit: How to view progress percentage? Thanks to @GuillaumeEB for the tip. 
So, we know that the following is blocking: client.gather(futures, asynchronous=False) But it seems like this also doesn't behave the way I expect: client.gather(futures, asynchronous=True) Is there some way that I could use client.persist() or client.compute() to see incremental progress? I know that I can't persist a list of <class 'distributed.client.Future'>, and using client.compute(futures) also seems to behave incorrectly (jumping the progress from 0 to 100).
I think the solution you are looking for is as_completed: https://docs.dask.org/en/latest/futures.html#waiting-on-futures. It lets you iterate over the futures one by one as they finish, so you can update a progress counter instead of blocking on the whole batch.
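A minimal sketch of how as_completed could be used here (illustrative only, not taken from the linked project; the helper name and the progress bookkeeping are my own assumptions):
from dask.distributed import as_completed

def iter_progress(futures):
    # Yields (fraction_done, result) each time another future finishes.
    total = len(futures)
    for done, future in enumerate(as_completed(futures), start=1):
        yield done / total, future.result()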
1
1
79,201,789
2024-11-19
https://stackoverflow.com/questions/79201789/why-does-pandas-rolling-method-return-a-series-with-a-different-dtype-to-the-ori
Just curious why the Pandas Series rolling window method doesn't preserve the data-type of the original series: import numpy as np import pandas as pd x = pd.Series(np.ones(6), dtype='float32') x.dtype, x.rolling(window=3).mean().dtype Output: (dtype('float32'), dtype('float64'))
x.rolling(window=3) gives you a pandas.core.window.rolling.Rolling object. help(pandas.core.window.rolling.Rolling.mean) includes the note: Returns ------- Series or DataFrame Return type is the same as the original object with ``np.float64`` dtype. that's the little why. The big why it would do such a thing, I don't know. Perhaps it's a way to keep from losing precision since you can always choose to convert to float32 again.
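If keeping float32 matters for memory, a simple workaround (assuming the extra cast is acceptable for your data) is to convert the result back afterwards:
rolled = x.rolling(window=3).mean().astype('float32')
print(rolled.dtype)  # float32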
1
1
79,204,288
2024-11-19
https://stackoverflow.com/questions/79204288/handle-dns-timeout-with-call-to-blob-client-upload-blob
While using the Azure storage SDK in Python, I have been unable to override what appears to be a default 90-second timeout to catch a DNS exception occurring within a call to blob_client.upload_blob(). I am looking for a way to override this with a shorter time interval (i.e. 5 seconds). The following code illustrates this issue using a fictitious account name which DNS cannot resolve. I am using a timeout argument in the call to upload_blob, and I understand from reviewing documentation this enforces a server-side threshold, not a client-side threshold. I have not been successful in getting a client-side threshold to be enforced. This issue appears similar to this unanswered question: How to handle timeout for Uploading a blob in Azure Storage using Python SDK?. The one (not accepted) solution suggests using a timeout threshold within the call to upload_blob. As noted above (and shown within the code below), this is not producing the desired effect. from azure.core.exceptions import AzureError from azure.storage.blob import BlobServiceClient # Define Azure Storage Blob connection details connection_string = "DefaultEndpointsProtocol=https;AccountName=test;AccountKey=removed==;EndpointSuffix=core.windows.net" container_name = "containername" blob_name = "blobname" local_file_path = "c:/temp/test.txt" # Create the BlobServiceClient blob_service_client = BlobServiceClient.from_connection_string(connection_string) # Function to perform the blob upload def upload_blob_process(): try: with open(local_file_path, "rb") as data: blob_client = blob_service_client.get_blob_client(container_name, blob_name) blob_client.upload_blob(data, timeout=5) print("Blob uploaded successfully!") except AzureError as e: print(f"Azure error occurred: {e}") except Exception as e: print(f"An error occurred: {e}") upload_blob_process()
While using the Azure storage SDK in Python, I have been unable to override what appears to be a default 90-second timeout to catch a DNS exception occurring within a call to blob_client.upload_blob(). I am looking for a way to override this with a shorter time interval (i.e. 5 seconds). As, I mentioned in comment, The timeout parameter in upload_blob() applies to each individual HTTP request to the Azure Blob service and it does not affect DNS resolution or connection establishment timeouts but instead limits the server-side operation duration for each request. thanks, I'm aware. Do you know of any way to apply a DNS timeout on top of this? Unfortunately, there is no built-in way in the Azure Storage SDK to configure a DNS or connection establishment timeout directly. You can use a workaround by using the dns.resolver library to perform DNS resolution before initiating the upload, with a custom timeout for DNS resolution. This will help you achieve the same result. Code: import dns.resolver from azure.storage.blob import BlobServiceClient import time connection_string = "DefaultEndpointsProtocol=https;AccountName=venkat326;AccountKey=xxxx=;EndpointSuffix=core.windows.net" container_name = "result" blob_name = "test.csv" local_file_path = "path to file" # DNS timeout in seconds DNS_TIMEOUT = 5 # Function to resolve DNS with timeout def resolve_dns_with_timeout(connection_string, timeout=DNS_TIMEOUT): blob_service_client = BlobServiceClient.from_connection_string(connection_string) account_url = blob_service_client.url hostname = account_url.split("//")[1].split("/")[0] resolver = dns.resolver.Resolver() resolver.lifetime = timeout resolver.timeout = timeout try: resolver.resolve(hostname, "A") print(f"DNS resolution for {hostname} succeeded.") except dns.resolver.Timeout: raise Exception(f"DNS resolution for {hostname} timed out after {timeout} seconds.") except dns.resolver.NXDOMAIN: raise Exception(f"The domain {hostname} does not exist.") except Exception as e: raise Exception(f"DNS resolution failed for {hostname}: {e}") # Function to upload blob with DNS resolution def upload_blob_with_dns_check(connection_string, container_name, blob_name, local_file_path): start_time = time.time() try: resolve_dns_with_timeout(connection_string) blob_service_client = BlobServiceClient.from_connection_string(connection_string) blob_client = blob_service_client.get_blob_client(container_name, blob_name) with open(local_file_path, "rb") as data: blob_client.upload_blob(data) print("Blob uploaded successfully!") except Exception as e: elapsed_time = time.time() - start_time print(f"An error occurred: {e}") print(f"Elapsed time before timeout or failure: {elapsed_time:.2f} seconds") # Run the upload process upload_blob_with_dns_check(connection_string, container_name, blob_name, local_file_path) If the DNS is wrong, you'll get output like:. An error occurred: The domain venkat326.blob.core.windows.net does not exist. Elapsed time before timeout or failure: 0.13 seconds If the DNS is correct, you'll get output like: DNS resolution for venkat326123.blob.core.windows.net succeeded. Blob uploaded successfully!
1
2
79,203,282
2024-11-19
https://stackoverflow.com/questions/79203282/setting-rgb-value-for-a-numpy-array-using-boolean-indexing
I have an array with shape (100, 80, 3) which is an rgb image. I have a boolean mask with shape (100, 80). I want each pixel where the mask is True to have value of pix_val = np.array([0.1, 0.2, 0.3]). cols = 100 rows = 80 img = np.random.rand(rows, cols, 3) mask = np.random.randint(2, size=(rows, cols), dtype=np.bool_) px = np.array([0.1, 0.2, 0.3]) for ch in range(3): img[:, :, ch][mask] = px[ch] I thought broadcasting: img[mask[:, :, None]] = px would work. But it did not. I am looking for a vectorized (efficient) way to implement it.
I'll attempt to explain why your indexing attempt didn't work. Make a smaller 3d array, and 2d mask: In [1]: import numpy as np In [2]: img = np.arange(24).reshape(2,3,4) In [3]: mask = np.array([[1,0,1],[0,1,1]],bool);mask Out[3]: array([[ True, False, True], [False, True, True]]) Using @mozway's indexing, produces a (4,4) array. The first first 4 is the number of True values in the mask, the second is the trailing dimension: In [4]: img[mask] Out[4]: array([[ 0, 1, 2, 3], [ 8, 9, 10, 11], [16, 17, 18, 19], [20, 21, 22, 23]]) With your indexing attempt, we get an error. (You really should have shown the error message): In [5]: img[mask[:,:,None]] --------------------------------------------------------------------------- IndexError Traceback (most recent call last) Cell In[5], line 1 ----> 1 img[mask[:,:,None]] IndexError: boolean index did not match indexed array along dimension 2; dimension is 4 but corresponding boolean dimension is 1 With this None, mask dimension is (2,3,1). That last 1 doesn't match the 4 of img. broadcasting doesn't apply in this context. Now if we attempt to use mask in a multiplication, the (2,3,4) and (2,3) don't broadcast together: In [6]: img*mask --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[6], line 1 ----> 1 img*mask ValueError: operands could not be broadcast together with shapes (2,3,4) (2,3) But (2,3,1) does broadcast with (2,3,4), producing a select number of 0 rows: In [7]: img*mask[:,:,None] Out[7]: array([[[ 0, 1, 2, 3], [ 0, 0, 0, 0], [ 8, 9, 10, 11]], [[ 0, 0, 0, 0], [16, 17, 18, 19], [20, 21, 22, 23]]]) As I commented, using a boolean mask is equivalent to indexing with nonzero arrays: In [13]: I,J = np.nonzero(mask); I,J Out[13]: (array([0, 0, 1, 1], dtype=int64), array([0, 2, 1, 2], dtype=int64)) In [14]: img[I,J,:] Out[14]: array([[ 0, 1, 2, 3], [ 8, 9, 10, 11], [16, 17, 18, 19], [20, 21, 22, 23]]) In the assignment expresion, a size (4,) value can broadcast to the (n,4) indexed img[mask]. Now if we were attempting to mask other dimensions we might need to make a px[:,None,:] or something like that.
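For completeness (this part is an addition, not from the original answer): because img[mask] selects an (n, 3) block and px has shape (3,), the assignment from the question can be done directly, or equivalently with np.where:
img[mask] = px                             # every True pixel becomes [0.1, 0.2, 0.3]
img = np.where(mask[:, :, None], px, img)  # same result without in-place assignment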
1
2
79,204,500
2024-11-19
https://stackoverflow.com/questions/79204500/mypy-with-pydantic-field-validator
With pydantic, is there a way for mypy to be hinted so it doesn't raise an error in this scenario, where there's a field_validator modifying the type? class MyModel(BaseModel): x: int @field_validator("x", mode="before") @classmethod def to_int(cls, v: str) -> int: return len(v) MyModel(x='test')
If you're always inserting a string there, it might be better to use a computed_field. Something along these lines, maybe? class MyModel(BaseModel): input: str @computed_field def x(self) -> int: return len(self.input) I think it's very counterintuitive to see the model declare x as an int while passing an actual integer for it in a JSON payload would raise a type error.
1
2
79,201,663
2024-11-18
https://stackoverflow.com/questions/79201663/split-columns-containing-lists-from-csv-into-separate-csv-files-with-pandas
I have CSV files with multiple columns of data retrieved from APIs, where each cell may contain either a single value or a list/array. The size of these lists is consistent across each column (e.g., a column named ALPHANUMS having a row containing a list like "['A', 'B', '4']" has the same list size of a column named COLOR having a row containing a list "['red', 'blue', 'green']", but the list sizes can vary per CSV file depending on the API response. I would like to use pandas to create separate CSV files for each element in a list column, while retaining the rest of the data in each file. Here's an example of what the data might look like from this mockup function: import random import csv # Predefined lists for NAME, CARS, and PHONE OS NAMES = ["John Doe", "Jane Smith", "Alice Johnson", "Bob Brown", "Charlie Davis", "Eve White", "David Wilson", "Emma Taylor", "Frank Harris", "Grace Clark"] CAR_BRANDS = ["Toyota", "Ford", "BMW", "Tesla", "Honda", "Chevrolet", "Nissan", "Audi"] PHONE_OS = ["Android", "iOS"] def create_csv(file_name, num_records): cur_random_list_size = random.randint(1, min(len(NAMES), len(CAR_BRANDS))) with open(file_name, mode='w', newline='') as file: writer = csv.writer(file) writer.writerow(["ID", "NAME", "MONTH", "CARS", "PHONE OS"]) for i in range(num_records): record = { "id" : i + 1, "name": [NAMES[n] for n in range(cur_random_list_size)], "month": random.randint(1,12), "cars": [random.choice(CAR_BRANDS) for _ in range(cur_random_list_size)], "phone": random.choice(PHONE_OS) } writer.writerow(record.values()) print(f"CSV file '{file_name}' created with {num_records} records.") create_csv("people_data.csv", 5) ID NAME MONTH CARS PHONE OS 1 "['John Doe', 'Jane Smith', 'Alice Johnson', 'Bob Brown', 'Charlie Davis', 'Eve White']" 2 "['Toyota', 'Nissan', 'Nissan', 'Nissan', 'Audi', 'Honda']" iOS 2 "['John Doe', 'Jane Smith', 'Alice Johnson', 'Bob Brown', 'Charlie Davis', 'Eve White']" 4 "['Nissan', 'Ford', 'Honda', 'Toyota', 'Ford', 'Honda']" iOS 3 "['John Doe', 'Jane Smith', 'Alice Johnson', 'Bob Brown', 'Charlie Davis', 'Eve White']" 8 "['BMW', 'Honda', 'Tesla', 'Tesla', 'Tesla', 'Nissan']" Android 4 "['John Doe', 'Jane Smith', 'Alice Johnson', 'Bob Brown', 'Charlie Davis', 'Eve White']" 3 "['Tesla', 'Audi', 'Chevrolet', 'Audi', 'Chevrolet', 'BMW']" iOS 5 "['John Doe', 'Jane Smith', 'Alice Johnson', 'Bob Brown', 'Charlie Davis', 'Eve White']" 8 "['Ford', 'Tesla', 'BMW', 'Toyota', 'Nissan', 'Ford']" Android And ideally, I'd like to separate this into five individual csv files, as an example for john_doe_people_data.csv: ID NAME MONTH CARS PHONE OS 1 John Doe 2 Toyota iOS 2 John Doe 4 Nissan iOS 3 John Doe 8 BMW Android 4 John Doe 3 Tesla iOS 5 John Doe 8 Ford Android All in all, how can I use pandas to create separate CSV files for each element in a list column, while keeping the rest of the data in each file?
I ended up using a combination of explode,map, and ast.literal_eval to break out the columns with string lists into different CSV files. Instead of hard-coding column names like NAME or CARS, the program now dynamically checks which columns contain string representations of lists. This is done by iterating over all columns and using the map_check_if_list_literal function to identify list-like columns and later convert to literals with map_convert_list applied element-wise : import random import csv import pandas as pd import ast # Predefined lists for NAME, CARS, and PHONE OS NAMES = ["John Doe", "Jane Smith", "Alice Johnson", "Bob Brown", "Charlie Davis", "Eve White", "David Wilson", "Emma Taylor", "Frank Harris", "Grace Clark"] CAR_BRANDS = ["Toyota", "Ford", "BMW", "Tesla", "Honda", "Chevrolet", "Nissan", "Audi"] PHONE_OS = ["Android", "iOS"] def create_csv(file_name, num_records): cur_random_list_size = random.randint(1, min(len(NAMES), len(CAR_BRANDS))) with open(file_name, mode='w', newline='') as file: writer = csv.writer(file) writer.writerow(["ID", "NAME", "MONTH", "CARS", "PHONE OS"]) for i in range(num_records): record = { "id" : i + 1, "name": [NAMES[n] for n in range(cur_random_list_size)], "month": random.randint(1,12), "cars": [random.choice(CAR_BRANDS) for _ in range(cur_random_list_size)], "phone": random.choice(PHONE_OS) } writer.writerow(record.values()) print(f"CSV file '{file_name}' created with {num_records} records.") def map_check_if_list_literal(element): if isinstance(element,str): try: data = ast.literal_eval(element) if isinstance(data, list): return True else: return False except Exception as e: return False else: return False def map_convert_list_literal(element): if isinstance(element,str): try: data = ast.literal_eval(element) if isinstance(data, list): return data else: return element except Exception as e: return element else: return element if __name__ == "__main__": create_csv("people_data.csv", 5) file_name = "people_data.csv" df = pd.read_csv(file_name) temp_df = df.map(map_check_if_list_literal) columns_w_list = [] for c in temp_df.columns: if temp_df[c].any(): columns_w_list.append(c) new_df = df.map(map_convert_list_literal) new_df = new_df.explode(columns_w_list) #this is column of interest reference_column = ast.literal_eval(df["NAME"].mode()[0]) for name in reference_column: mask = new_df["NAME"] == name unique_df = new_df[mask] unique_df.to_csv(f"{name}_{file_name}.csv", index=False)
1
0
79,204,623
2024-11-19
https://stackoverflow.com/questions/79204623/correct-way-to-parallelize-request-processing-in-flask
I have a Flask service that receives GET requests, and I want to scale the QPS on that endpoint (on a single machine/container). Should I use a python ThreadPoolExecutor or ProcessPoolExecutor, or something else? The GET request just retrieves small pieces of data from a cache backed by a DB. Is there anything specific to Flask that should be taken into account?
Neither. Flask will serve one request per worker (or more, depending on the worker type) - the way you set it up, either with gunicorn, WSGI or ASGI, is what determines the number of parallel requests your app can process. Inside your app, you don't change anything - your views will be called as independent processes, independent threads or independent async tasks, depending on how you set up your server - that is where you have to tinker with the configuration. Another concurrency strategy would only make sense if each individual request had calculations and data fetching that could themselves be parallelized inside the same request. Check how your deployment is configured, and pick the best option for you (all things being equal, pick the easiest one): https://flask.palletsprojects.com/en/stable/deploying/ (also, I would not recommend "mod_wsgi" among those options - it is super complex and old tech)
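As a concrete illustration (assuming a gunicorn deployment; the file name and the numbers are placeholders), the concurrency lives entirely in the server configuration, for example a gunicorn.conf.py, while the Flask view code stays unchanged:
# gunicorn.conf.py - minimal sketch
workers = 4            # independent worker processes
threads = 2            # threads per worker -> up to workers * threads concurrent requests
bind = "0.0.0.0:8000"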
1
2
79,204,622
2024-11-19
https://stackoverflow.com/questions/79204622/how-to-create-dataframe-that-is-the-minimal-values-based-on-2-other-dataframes
Let's say I have DataFrames df1 and df2: >>> df1 = pd.DataFrame({'A': [0, 2, 4], 'B': [2, 17, 7], 'C': [4, 9, 11]}) >>> df1 A B C 0 0 2 4 1 2 17 9 2 4 7 11 >>> df2 = pd.DataFrame({'A': [9, 2, 32], 'B': [1, 3, 8], 'C': [6, 2, 41]}) >>> df2 A B C 0 9 1 6 1 2 3 2 2 32 8 41 What I want is the 3rd DataFrame that will have minimal rows (min is calculated based on column B), that is: >>> df3 A B C 0 9 1 6 1 2 3 2 2 4 7 11 I really don't want to do this by iterating over all rows and comparing them one by one, is there a faster and compact way to do this?
You can mask df1 with df2 when df2['B'] < df1['B']: out = df1.mask(df2['B'].lt(df1['B']), df2) Output: A B C 0 9 1 6 1 2 3 2 2 4 7 11
1
2
79,198,298
2024-11-17
https://stackoverflow.com/questions/79198298/improving-safety-when-a-sqlalchemy-relationship-adds-conditions-that-refer-to-ta
I have a situation where I want to set up relationships between tables, mapped with the SQLAlchemy ORM layer, where these relationships have an extra join key. As far as I know, setting this up by hand requires embedding strings that are eval'd; I'm trying to figure out to what extent that can be avoided, or at least validated early (ideally by pyright or mypy, before runtime). Take the following schema, which doesn't yet have the extra join key added: from typing import List from uuid import UUID from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship from sqlalchemy.schema import ForeignKey from sqlalchemy.types import Uuid import sqlalchemy as sa class Base(DeclarativeBase): pass class User(Base): __tablename__ = 'user' id: Mapped[UUID] = mapped_column(Uuid(), primary_key=True) tenant_id: Mapped[UUID] = mapped_column(Uuid()) actions: Mapped[List["Action"]] = relationship("Action", back_populates="user", foreign_keys="Action.user_id") class Action(Base): __tablename__ = 'action' id: Mapped[UUID] = mapped_column(Uuid(), primary_key=True) tenant_id: Mapped[UUID] = mapped_column(Uuid()) user_id: Mapped[UUID] = mapped_column(Uuid(), ForeignKey("user.id")) user: Mapped["User"] = relationship("User", back_populates="actions", foreign_keys=[user_id]) Base.metadata.create_all(sa.create_engine('sqlite://', echo=True)) That's simple enough as long as we aren't trying to add belt-and-suspenders protection against relationship evaluations linking across tenants. As soon as we do, the relation declarations need to be strings any time there's need to refer to an as-yet-undeclared class: class User(Base): __tablename__ = "user" id: Mapped[UUID] = mapped_column(Uuid(), primary_key=True) tenant_id: Mapped[UUID] = mapped_column(Uuid()) actions: Mapped["Action"] = relationship("Action", back_populates="user", foreign_keys="Action.user_id", primaryjoin="and_(tenant_id == Action.tenant_id, id == Action.user_id)", ) class Action(Base): __tablename__ = "action" id: Mapped[UUID] = mapped_column(Uuid(), primary_key=True) tenant_id: Mapped[UUID] = mapped_column(Uuid()) user_id: Mapped[UUID] = mapped_column(Uuid(), ForeignKey("user.id")) user: Mapped["User"] = relationship("User", back_populates="actions", foreign_keys=[user_id], primaryjoin=sa.and_(tenant_id == User.tenant_id, user_id == User.id), ) The above works, but having that primaryjoin="and_(tenant_id == Action.tenant_id, id == Action.user_id)" line where the heavy lifting is done in a context opaque to static analysis is unfortunate. If we could provide code that's evaluated after all types are defined, but before SQLAlchemy begins its introspection, that would allow a helper function to be used to generate the relationships. This still isn't static-checking friendly, but it's considerably better than nothing. However, I don't know if when the relevant introspection happens (if relationships need to exist when __init_subclass__ is called, anything trying to add them later would be too late). I'd also be happy with any kind of situation where I'm using strings that static analysis can validate to be legitimate forward references -- if instantiating typing.ForwardRef("Action.tenant_id") were treated by pyright as an indication that a warning should be thrown if Action.tenant_id doesn't eventually exist, that would be perfect. SQLAlchemy has quite a bit by way of facilities I'm immediately unfamiliar with, so I'm hoping there's an option I'm not thinking of here.
Pretty sure you can use lambdas or functions like this (untested). Note that inside the lambdas the columns have to be referenced through the mapped classes (e.g. User.tenant_id), because the lambdas are only evaluated later, at mapper configuration time: class User(Base): __tablename__ = "user" id: Mapped[UUID] = mapped_column(Uuid(), primary_key=True) tenant_id: Mapped[UUID] = mapped_column(Uuid()) actions: Mapped["Action"] = relationship(lambda: Action, back_populates="user", foreign_keys=lambda: Action.user_id, primaryjoin=lambda: sa.and_(User.tenant_id == Action.tenant_id, User.id == Action.user_id), )
1
2
79,204,156
2024-11-19
https://stackoverflow.com/questions/79204156/pandas-dataframe-ffill-with-one-greater-the-previous-nonzero-value
I have a pandas DataFrame with a column like: 0 1 1 2 2 3 4 5 5 0 0 0 I would like to leave any leading zeros, but ffill to replace the trailing zeros with one greater than the previous, nonzero value. In this case, I'd like the output to be: 0 1 1 2 2 3 4 5 5 6 6 6 How can I go about doing this?
You could mask, increment and ffill: m = df['col'].eq(0) s = df['col'].mask(m) df['out'] = s.fillna(s.add(1).ffill().fillna(0)).convert_dtypes() Or, if you really want to only target the trailing zeros: df['out'] = df['col'].mask(df['col'].eq(0)[::-1].cummin(), df['col'].max()+1) Output: col out 0 0 0 1 1 1 2 1 1 3 2 2 4 2 2 5 3 3 6 4 4 7 5 5 8 5 5 9 0 6 10 0 6 11 0 6 Intermediates (first approach): col out m s s.add(1) .ffill() .fillna(0) 0 0 0 True NaN NaN NaN 0.0 1 1 1 False 1.0 2.0 2.0 2.0 2 1 1 False 1.0 2.0 2.0 2.0 3 2 2 False 2.0 3.0 3.0 3.0 4 2 2 False 2.0 3.0 3.0 3.0 5 3 3 False 3.0 4.0 4.0 4.0 6 4 4 False 4.0 5.0 5.0 5.0 7 5 5 False 5.0 6.0 6.0 6.0 8 5 5 False 5.0 6.0 6.0 6.0 9 0 6 True NaN NaN 6.0 6.0 10 0 6 True NaN NaN 6.0 6.0 11 0 6 True NaN NaN 6.0 6.0 Intermediates (second approach): col out m s df['col'].eq(0) [::-1].cummin() 0 0 0 True NaN True False 1 1 1 False 1.0 False False 2 1 1 False 1.0 False False 3 2 2 False 2.0 False False 4 2 2 False 2.0 False False 5 3 3 False 3.0 False False 6 4 4 False 4.0 False False 7 5 5 False 5.0 False False 8 5 5 False 5.0 False False 9 0 6 True NaN True True 10 0 6 True NaN True True 11 0 6 True NaN True True applying per group: Assuming a group LOT_ID and the target column STEP_NUMBER: df['out'] = (df.groupby('LOT_ID')['STEP_NUMBER'] .transform(lambda x: x.mask(x.eq(0)[::-1].cummin(), x.max()+1)) )
1
2
79,203,876
2024-11-19
https://stackoverflow.com/questions/79203876/why-does-my-google-service-account-not-see-files-shared-with-it-but-can-see-fil
I am using the Python Client to amend a Google Sheet shared to a Google Service Account. Everything works as expected when using a Sheet that is created by the Service Account, but returns a Requested entity was not found error when accessing a Google Sheet that is owned by someone else but shared with the service account. I have tried with all combinations of these definitions: Personal Google account (i.e. @gmail.com) Workspace Google account Service Account with "shared outside domain set" with Drive and Sheets APIs enabled "Stock" Service account with Drive and Sheets APIs enabled Shared the containing folder of the Sheet Shared just the Sheet itself Can anyone point to what I could be doing wrong that would mean that the client can find a sheet that it creates, but not one that is shared with it?
Can anyone point to what I could be doing wrong that would mean that the client can find a sheet that it creates, but not one that is shared with it? There is a Google Drive scope whose description matches your symptoms exactly: https://www.googleapis.com/auth/drive.file ("See, edit, create, and delete only the specific Google Drive files you use with this app"). If you're using that scope, change to a more permissive scope like https://www.googleapis.com/auth/drive
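A minimal sketch of requesting the broader scope with a service account (the key file name is a placeholder and this is only one possible client setup):
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/drive",
          "https://www.googleapis.com/auth/spreadsheets"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES)
sheets = build("sheets", "v4", credentials=creds)
# Sheets shared with the service account's client_email should now be reachable.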
1
2
79,200,874
2024-11-18
https://stackoverflow.com/questions/79200874/column-assignment-with-alias-or
What is the preferred way to assign/add a new column to a polars dataframe in .select() or .with_columns()? Are there any differences between the below column assignments using .alias() or the = sign? import polars as pl df = pl.DataFrame({"A": [1, 2, 3], "B": [1, 1, 7]}) df = df.with_columns(pl.col("A").sum().alias("a_sum"), another_sum=pl.col("A").sum() ) I am not sure which one to use.
The advantage of alias is that it allows you to specify a column name that wouldn't be a valid Python identifier. For example, you could use "a sum!". This can also be achieved by creating a dictionary and using ** to unpack it, passing the items as keyword arguments. Assignment with = cannot work in this way, as it requires a valid identifier (e.g., another_sum).
df = df.with_columns(pl.col("A").sum().alias("a sum!"),
    another_sum=pl.col("A").sum(),
    **{":) \u2014 also a sum": pl.col("A").sum()}
)
Output:
shape: (3, 5)
β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ A   ┆ B   ┆ a sum! ┆ another_sum ┆ :) — also a sum β”‚
β”‚ --- ┆ --- ┆ ---    ┆ ---         ┆ ---             β”‚
β”‚ i64 ┆ i64 ┆ i64    ┆ i64         ┆ i64             β”‚
β•žβ•β•β•β•β•β•ͺ═════β•ͺ════════β•ͺ═════════════β•ͺ═════════════════║
β”‚ 1   ┆ 1   ┆ 6      ┆ 6           ┆ 6               β”‚
β”‚ 2   ┆ 1   ┆ 6      ┆ 6           ┆ 6               β”‚
β”‚ 3   ┆ 7   ┆ 6      ┆ 6           ┆ 6               β”‚
β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
2
5
79,203,100
2024-11-19
https://stackoverflow.com/questions/79203100/plotting-intersecting-planes-in-3d-space-plotly
I'm trying to plot 3 planes in 3D space using plotly, I can only define a surface along the XY plane, whilst ZY and XZ do not appear. I'm including a simple example below, I would expect the code to produce three planes intersecting at the point (1, 1, 1), instead there is only one surface at (x, y) = 1. Any help would be greatly appreciated. import plotly.graph_objects as go import numpy as np zsurf = go.Surface(y=[0, 1, 2], x=[0, 1, 2], z=np.ones((3, 3))) ysurf = go.Surface(x=[0, 1, 2], z=[0, 1, 2], y=np.ones((3, 3))) xsurf = go.Surface(z=[0, 1, 2], y=[0, 1, 2], x=np.ones((3, 3))) fig = go.Figure() fig.add_trace(zsurf) fig.add_trace(ysurf) fig.add_trace(xsurf) fig.show()
All of your x, y, z should have the same shape. I don't know through which default/undefined behaviour one of your planes even shows up, but none of them are correctly specified. What you meant is: import plotly.graph_objects as go import numpy as np xx,yy=np.meshgrid([0,1,2], [0,1,2]) zsurf = go.Surface(y=xx, x=yy, z=np.ones((3, 3))) ysurf = go.Surface(x=xx, z=yy, y=np.ones((3, 3))) xsurf = go.Surface(z=xx, y=yy, x=np.ones((3, 3))) fig = go.Figure() fig.add_trace(zsurf) fig.add_trace(ysurf) fig.add_trace(xsurf) fig.show() A surface (from plotly's Surface point of view) is a 2D mesh of points in space, i.e. a 2D array of points: three 2D arrays, one for x, one for y, one for z, with each point located in space. Or, to be more explicit (it is exactly the same as the first snippet; the mesh just looks less magic when written explicitly rather than with meshgrid): import plotly.graph_objects as go import numpy as np zsurf = go.Surface(y=[[0,1,2],[0,1,2],[0,1,2]], x=[[0,0,0],[1,1,1],[2,2,2]], z=np.ones((3, 3))) ysurf = go.Surface(x=[[0,1,2],[0,1,2],[0,1,2]], z=[[0,0,0],[1,1,1],[2,2,2]], y=np.ones((3, 3))) xsurf = go.Surface(z=[[0,1,2],[0,1,2],[0,1,2]], y=[[0,0,0],[1,1,1],[2,2,2]], x=np.ones((3, 3))) fig = go.Figure() fig.add_trace(zsurf) fig.add_trace(ysurf) fig.add_trace(xsurf) fig.show()
1
1
79,199,490
2024-11-18
https://stackoverflow.com/questions/79199490/what-is-the-number-at-the-end-of-the-django-get-log
I am using Django to serve an application, and I noticed some slowing down recently. So I went and checked the console that serves the server, that usually logs lines of this format : <date_time> "GET <path> HTTP/1.1" <HTTP_STATUS> <response_time> What I thought was the response time in milliseconds is apparently not, as I get values that would be ludicrous (example 3923437 for a query that when timed in python directly takes 0.936 seconds). I'm pretty sure it's a response time though, as it's always scaled with the time I wait. Can someone explain to me what that number is ? I couldn't find where this default log is documented.
It's the response size, and it's printed by Python's built-in HTTP server, which Django's development server inherits from. That explains why it's not documented by Django: it isn't Django code that prints it. You can verify that by looking at this Django module - this is the line that starts the HTTP server (which inherits from Python's HTTP server), and this is the line that prints the response size.
1
3
79,202,327
2024-11-19
https://stackoverflow.com/questions/79202327/importing-from-another-directory-located-in-parent-directory-in-python
Suppose we have a project structure like: project/ public_app/ __init__.py dir/ __init__.py config.py subdir/ __init__.py functions.py utils.py my_app/ main.py In my_app/main.py, I would like to import some functions from public_app/dir/subdir/functions.py. A solution I found was to add the following: # main.py import sys import os path = os.path.abspath(os.path.join(os.path.dirname(__file__), '../')) sys.path.append(path) from public_app.dir.subdir.functions import * This seems to work, except now I would also like to import from public_app/dir/subdir/utils.py. However inside this file, it contains other relative imports: # utils.py from dir.config import * If I then try doing # main.py from public_app.dir.subdir.utils import * this gives me a ModuleNotFoundError: No module named 'dir'. Any suggestion on how to do this? Note that I would ideally like to not mess with public_app at all. This is because it is a frequently updated directory pulled from a public repository, and would require constantly changing the imports. I would also like to also keep my_app in a separate directory for cleanliness/easier maintenance if possible. Edit: Figured it out actually by sheer chance. See below for answer.
Run main.py as a module, from the directory that contains project. That resolves the relative and import errors, because the interpreter then knows how the packages relate to each other. So instead of doing: .../my_app$ python3 main.py do: <parentfolderofproject>$ python3 -m project.my_app.main
2
1
79,191,769
2024-11-15
https://stackoverflow.com/questions/79191769/how-to-resolve-latency-issue-with-django-m2m-and-filter-horizontal-in-modeladmin
I have used django ModelAdmin with M2M relationship and formfield filtering code as follows: But for superuser or any other login where the number of mailboxes is more than 100k. I have sliced the available after filtering. But loading the m2m field takes time and times out for superuser login: def formfield_for_manytomany(self, db_field, request, **kwargs): if db_field.name == "mailboxes": if request.user.is_superuser: queryset = Mailbox.objects.prefetch_related('domain').only('id','email') kwargs["queryset"] = queryset field = super().formfield_for_manytomany(db_field, request, **kwargs) field.widget.choices.queryset = queryset # Limit visible options return field if request.user.groups.filter(name__in=['customers']).exists(): queryset = Mailbox.objects.filter( domain__customer__email=request.user.email).prefetch_related('domain').only('id','email') kwargs["queryset"] = queryset field = super().formfield_for_manytomany(db_field, request, **kwargs) field.widget.choices.queryset = queryset return field return super().formfield_for_manytomany(db_field, request, **kwargs) I want to use filter_horizontal only and not django auto_complete_light or any javascript. how can the latency be resolved. As you can see the queryset filtering is already done to get valid options. Slicing removed the mailbox model is simple: class Mailbox(AbstractPerson): username = models.EmailField(verbose_name='email', blank=True) email = models.EmailField(verbose_name='email', null=True,blank=True, unique=True) local_part = models.CharField(max_length=100,verbose_name='user part',help_text=hlocal_part) domain = models.ForeignKey(Domain, on_delete=models.CASCADE) which has M2M relation with GroupMailIds model: class GroupMailIds(models.Model): local_part = models.CharField(max_length=100,verbose_name='local part',help_text=hlocal_part) address = models.EmailField(unique=True,verbose_name='Email id of the distribution list') domain = models.ForeignKey(Domain, on_delete=models.CASCADE,related_name='domains') mailboxes = models.ManyToManyField(Mailbox,related_name='my_mailboxes')
Limited the number of objects loading during form init as: class GroupMailIdsForm(forms.ModelForm): class Meta: model = GroupMailIds fields = "__all__" def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) request = getattr(self, 'request', None) if not request: return email = request.user.email qs = Mailbox.objects.all() if request.user.is_superuser: filtered_qs = qs else: filtered_qs = qs.none() # No access if none of the conditions match # Set the filtered queryset to the form field self.fields['mailboxes'].queryset = filtered_qs In ModelAdmin: def get_form(self, request, obj=None, **kwargs): # Pass the request to the form form = super().get_form(request, obj, **kwargs) form.request = request # Attach the request to the form return form The init for non-superuser initializes the queryset with None and during the admin, populates the dropdown with required filtered values during admin based on login value
2
1
79,198,397
2024-11-18
https://stackoverflow.com/questions/79198397/how-to-redraw-figure-on-event-in-matplotlib
I'm trying to pre-generate and store matplotlib figures in python, and then display them on a keyboard event (left-right cursor keys). It partially seems working, but fails after the first keypress. Any idea, what am I doing wrong? import matplotlib.pyplot as plt import numpy as np def new_figure(title, data): fig,ax = plt.subplots() plt.plot(data, label=title) ax.set_xlabel('x-axis') ax.set_ylabel('value') plt.legend() plt.title(title) plt.close(fig) return fig def show_figure(fig): dummy = plt.figure() new_manager = dummy.canvas.manager new_manager.canvas.figure = fig fig.set_canvas(new_manager.canvas) def redraw(event, cnt): event.canvas.figure.clear() dummy = event.canvas.figure new_manager = dummy.canvas.manager new_manager.canvas.figure = figs[cnt] figs[cnt].set_canvas(new_manager.canvas) event.canvas.draw() def keypress(event): global cnt if event.key == 'right': cnt += 1 cnt %= mx elif event.key == 'left': cnt -= 1 if cnt < 0: cnt = mx-1 redraw(event, cnt) d = range(0, 360) data = [] data.append(np.sin(np.radians(d))) data.append(np.cos(np.radians(d))) data.append(np.tan(np.radians(d))) titles = ['sin','cos','tan'] mx = len(data) figs = [] for i in range(mx): fig = new_figure(titles[i], data[i]) figs.append(fig) cnt = 0 show_figure(figs[0]) figs[0].canvas.mpl_connect('key_press_event', keypress) plt.show() The error I get eventually is: File "C:\Program Files\Python39\lib\tkinter\__init__.py", line 1636, in _configure self.tk.call(_flatten((self._w, cmd)) + self._options(cnf)) _tkinter.TclError: invalid command name ".!navigationtoolbar2tk.!button2"
Not sure about the root cause of the error, but one way to avoid this is to fully replace the figure and reconnect the key_press_event: def redraw(event, cnt): event.canvas.figure = figs[cnt] event.canvas.mpl_connect('key_press_event', keypress) event.canvas.draw()
3
1
79,202,065
2024-11-19
https://stackoverflow.com/questions/79202065/how-to-sum-data-by-input-date-month-and-previous-month
I'm trying to sum up data of selected date, month of selected date and previous month of selected date but don't know how to do. Below is my sample data and my expected Output: Sample data: import pandas as pd import numpy as np df = pd.read_excel('https://github.com/hoatranobita/hoatranobita/raw/refs/heads/main/Check%20data%20(1).xlsx', sheet_name='Data') df COA Code USDConversion Amount Base Date 2 0 19010000000 26924582.44 2024-10-01 1 19010000000 38835600.44 2024-10-02 2 19010000000 46794586.57 2024-10-03 3 19010000000 57117346.49 2024-10-06 4 19010000000 69256132.98 2024-10-07 ... ... ... ... 65 58000000000 38082130.88 2024-11-12 66 58000000000 38140016.13 2024-11-13 67 58000000000 38160089.27 2024-11-14 68 58000000000 38233974.54 2024-11-17 69 58000000000 38323598.99 2024-11-18 So if I select date of November (for example 2024-11-18, I want to group by selected date, month of selected date and previous month of selected date. Output: COA Code 2024-11-18 October November 0 19010000000 42625047.24 1354513618.61 584813860.97 1 58000000000 38323598.99 820927014.08 456265522.64
The exact generalization of your question is not fully clear, but assuming that you want to group by COA Code, you could ensure everything is a datetime/periods, then select the appropriate rows with boolean indexing and between, finally perform a groupby.sum of those rows and concat to the original date rows. Here as a function for clarity: def get_previous(df, date, date_col='Base Date 2'): # ensure working with datetime/period objects date = pd.Timestamp(date) period = date.to_period('M') dt = pd.to_datetime(df[date_col]) p = dt.dt.to_period('M') # select rows to keep m = p.between(period-1, period, inclusive='both') # produce rows with original date # aggregate previous and current month # combine and rename the columns return pd.concat([ df[df[date_col].eq(date)] .set_index('COA Code')['USDConversion Amount'] .rename(date.strftime('%Y-%m-%d')), df[m].groupby(['COA Code', p])['USDConversion Amount'] .sum().unstack(date_col) .rename(columns=lambda x: x.strftime('%B')) ], axis=1).reset_index() out = get_previous(df, '2024-11-18') Output: COA Code 2024-11-18 October November 0 58000000000 38323598.99 8.209270e+08 4.562655e+08 NB. you can replace groupby.sum+unstack by pivot_table (df[m].assign(col=p).pivot_table(index='COA Code', columns='col', values='USDConversion Amount', aggfunc='sum')).
1
1
79,201,815
2024-11-19
https://stackoverflow.com/questions/79201815/in-python-polars-how-to-search-string-across-multiple-columns-and-create-a-new
To search over multiple columns and create a new flag column indicating whether the string was found, the following code works, but is there any more compact way to achieve the same inside with_columns()?
df = pl.DataFrame({
    "col1": ["hello", "world", "polars"],
    "col2": ["data", "science", "hello"],
    "col3": ["test", "string", "match"],
    "col4": ["hello", "example", "test"]
})
search_string = "hello"
condition = pl.lit(False)
for col in df.columns:
    condition |= pl.col(col).str.contains(search_string)
df = df.with_columns(
    condition.alias("string_found") + 0
)
print(df)
shape: (3, 5)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ col1   ┆ col2    ┆ col3   ┆ col4    ┆ string_found β”‚
β”‚ ---    ┆ ---     ┆ ---    ┆ ---     ┆ ---          β”‚
β”‚ str    ┆ str     ┆ str    ┆ str     ┆ i32          β”‚
β•žβ•β•β•β•β•β•β•β•β•ͺ═════════β•ͺ════════β•ͺ═════════β•ͺ══════════════║
β”‚ hello  ┆ data    ┆ test   ┆ hello   ┆ 1            β”‚
β”‚ world  ┆ science ┆ string ┆ example ┆ 0            β”‚
β”‚ polars ┆ hello   ┆ match  ┆ test    ┆ 1            β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
You can use .any_horizontal()
df.with_columns(
    pl.any_horizontal(pl.all().str.contains(search_string))
      .alias("string_found")
)
shape: (3, 5)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ col1   ┆ col2    ┆ col3   ┆ col4    ┆ string_found β”‚
β”‚ ---    ┆ ---     ┆ ---    ┆ ---     ┆ ---          β”‚
β”‚ str    ┆ str     ┆ str    ┆ str     ┆ bool         β”‚
β•žβ•β•β•β•β•β•β•β•β•ͺ═════════β•ͺ════════β•ͺ═════════β•ͺ══════════════║
β”‚ hello  ┆ data    ┆ test   ┆ hello   ┆ true         β”‚
β”‚ world  ┆ science ┆ string ┆ example ┆ false        β”‚
β”‚ polars ┆ hello   ┆ match  ┆ test    ┆ true         β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
You can replace pl.all() with pl.col(pl.String) to limit the expression to String columns only. In this example you only have String columns so it doesn't come into play.
3
6
79,201,256
2024-11-18
https://stackoverflow.com/questions/79201256/python-and-json-files
I am fairly new to python but i know the basics, I am not well versed in .json files and the commands used in python for them. I would like to create a personal app for creating character sheets for D&D and storing the data in json files so you can access it later. I watched the "python tutorial: working with JSON data using the json module" video by corey schafer which did help but I am still getting confused. this is my code: #import needed packages import json #inputs character_name = input("name your character: ") #character class is not currently being used. character_class = input("enter character class: ") #data storage character_data = { 'name': character_name, 'class': character_class } with open(character_name, 'w') as f: json.dump(character_data, f) if input("would you like to see your name?") is "yes": print() I would like the name of the character to be the name of a new json file and if the inputed name is a file that already exists it opens that character json file. I assume that would change by just using buttons in frontend dev(html) if thats even how that works. If its a new name a new json file will be created and it will ask for the input data. at the end (very unfinished) I was trying to test to make sure that i could access the json data but got stuck. anyway if anyone could help I would love any feedback (like if i should even use json) or help with the code. if you think i need to rewrite it let me know and I am up for learning a new language if it would make it that much easier. edit: I see some of the comments saying it was a broad question which I understand. the response from @mason ritchason was perfect and helped me with everything I needed. if further clarification is needed for some reason let me know :)
Welcome to Stack Overflow! You should definitely use and be comfortable with JSON. It is super-duper powerful in programming, computer science, math, etc. JSON is used all over the programming and software world, so a solid understanding will serve you well :) Stack Overflow has a ton of information on JSON, and you can also search the web for it to read more. It stands for JavaScript Object Notation; that may help you find more helpful results. The JSON standard is lengthy and boring, but will help you a lot in your learning. After you get the character name, you can use os.listdir() to iterate over whatever file directory you are storing your character files in, like so: from os import listdir # returns a list of the filenames (character sheet names) in the folder existing_characters = os.listdir(character_sheets_directory) Then you can check if the input character name is in the folder: # if the name is in the list of filenames if character_name in existing_characters: # open the character sheet, etc. ... # otherwise the character is new else: # do new character things ... # create the .json file and store the info character_sheet = open(str(character_name) + ".json", 'x') text = json.dumps(character_dict, indent = 4) character_sheet.write(text) character_sheet.close() The .JSON file extension will help you organize your files and preserve formatting. Using open with the x operation will create a new file under that name. You can use the load, dump, loads and dumps methods in the json library to interact with Python dicts. See the json documentation to learn more about each of these methods. The main difference between the normal and the s methods are that the s methods create a string from the data that can then be written into a file as plaintext. I tend to prefer the loads and dumps methods, and I like to use indentation in my dumps calls: File = open(file, 'w') text = json.dumps(dict, indent = 4) File.write(text) File.close() This will give you a file that's a bit more readable, in this kind of format: key:{ key:value, key:{ key:value } } I hope this information helps you achieve your goals!
1
2
79,200,847
2024-11-18
https://stackoverflow.com/questions/79200847/proper-way-to-extract-xml-elements-from-a-namespace
In a Python script I make a call to a SOAP service which returns an XML reply where the elements have a namespace prefix, let's say <ns0:foo xmlns:ns0="SOME-URI"> <ns0:bar>abc</ns0:bar> </ns0:foo> I can extract the content of ns0:bar with the method call doc.getElementsByTagName('ns0:bar') However, the name ns0 is only a local variable so to speak (it's not mentioned in the schema) and might as well have been named flubber or you_should_not_care. What is the proper way to extract the content of a namespaced element without relying on it having a specific name? In my case the prefix was indeed changed in the SOAP service which resulted in a parse failure.
Namespace-aware methods are needed if you search by element name: doc.getElementsByTagNameNS('SOME-URI', 'bar') If you use a package with namespace support like lxml, you can search by the fully qualified {namespace-URI}name tree.findall('{http://schemas.xmlsoap.org/soap/envelope/}Body') or by local name tree.xpath('//*[local-name()="bar"]') lxml example: from lxml import etree tree = etree.parse("/home/lmc/tmp/soap.xml") tree.xpath('//*[local-name()="Company"]') Result: [<Element {http://example.com}Company at 0x7f0959fb3fc0>]
1
2
79,200,092
2024-11-18
https://stackoverflow.com/questions/79200092/mock-patch-function-random-random-with-return-value-depending-on-the-module
Intention: I am trying to create a unit test for a complex class, where a lot of values are randomly generated by using random.random(). To create a unit test, I want to use mock.patch to set fixed values for random.random(), to receive always same values (same configuration) and then I can run my test which must have always same result. Problem: I need to patch function random() from random library with values depending on the module. In my understanding, mock.patch('modul1.random.random', return_value=1) should only influence the modul1 and no other random() functions in other modules. The same for the modul2: modul1.py: import random def function(): return random.random() modul2.py: import random def function(): return random.random() The unit_test: def test_function(): with mock.patch('modul1.random.random', return_value=1), \ mock.patch('modul2.random.random', return_value=0): val1 = modul1.function() val2 = modul2.function() assert not val1 == val2 Expectation: val1 = 1 and val2 = 0, therefore passed Reality: assert not 0 == 0 PythonCodebase/tests/test_patient.py:55: AssertionError
There is only one function to patch, random.random, and it's shared by both modules. The best you can do is use side_effect to provide two values for it to return, one per call, but that requires you to know the order in which modul1.function and modul2.function will be called, and that may not be predictable. Better would be to modify the two modules to use their own names to refer to random.random; then you can patch those two names separately. modul1.py: from random import random def function(): return random() modul2.py: from random import random def function(): return random() The unit_test: def test_function(): with mock.patch('modul1.random', return_value=1), \ mock.patch('modul2.random', return_value=0): val1 = modul1.function() val2 = modul2.function() assert not val1 == val2
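For reference, the order-dependent side_effect variant mentioned above would look like this against the original (unmodified) modules, reusing the same imports as the test above; it only works if the call order is known:
def test_function_side_effect():
    with mock.patch('random.random', side_effect=[1, 0]):
        val1 = modul1.function()  # first call -> 1
        val2 = modul2.function()  # second call -> 0
        assert not val1 == val2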
2
1
79,199,863
2024-11-18
https://stackoverflow.com/questions/79199863/reorder-numpy-array-by-given-index-list
I have an array of indexes: test_idxs = np.array([4, 2, 7, 5]) I also have an array of values (which is longer): test_vals = np.array([13, 19, 31, 6, 21, 45, 98, 131, 11]) So I want to get an array with the length of the array of indexes, but with values from the array of values in the order of the array of indexes. In other words, I want to get something like this: array([21, 31, 131, 45]) I know how to do this in a loop, but I don't know how to achieve this using Numpy tools.
This is actually extremely simple with numpy: just index your test_vals array with test_idxs (integer array indexing): out = test_vals[test_idxs] Output: array([ 21, 31, 131, 45]) Note that this requires the indices to be valid. If you have indices that could be too high you would need to handle them explicitly. Example: test_idxs = np.array([4, 2, 9, 5]) test_vals = np.array([13, 19, 31, 6, 21, 45, 98, 131, 11]) out = np.where(test_idxs < len(test_vals), test_vals[np.clip(test_idxs, 0, len(test_vals)-1)], np.nan) Output: array([21., 31., nan, 45.])
1
1
79,189,688
2024-11-14
https://stackoverflow.com/questions/79189688/plotly-python-how-to-properly-add-shapes-to-subplots
How does plotly add shapes to figures with multiple subplots and what best practices are around that? Let's take the following example: from plotly.subplots import make_subplots fig = make_subplots(rows=2, cols=1, shared_xaxes=True) fig.add_vrect(x0=1, x1=2, row=1, col=1, opacity=0.5, fillcolor="grey") fig.add_scatter(x=[1,3], y=[3,4], row=1, col=1) fig.add_scatter(x=[2,2], y=[3,4], row=2, col=1) If we add_vrect at the end, the rectangle is visualized as I would expect. fig = make_subplots(rows=2, cols=1, shared_xaxes=True) fig.add_scatter(x=[1,3], y=[3,4], row=1, col=1) fig.add_scatter(x=[2,2], y=[3,4], row=2, col=1) fig.add_vrect(x0=1, x1=2, row=1, col=1, opacity=0.5, fillcolor="grey") Now, when I move away from the dummy example to a more complex plot (3 subplots, multiple y axes, logarithmic scaling, datetime x axis), adding the rectangles last does not help either. I don't manage to visualize them for two of the three subplots. Thus, I'm trying to better understand how plotly handles this under the hood. From what I have gathered so far, the rectangles are shapes and thus, not part of figure.data, but figure.layout. In the above dummy examples, the shapes are only added to the layout in the second take. Why? Is it more advisable to use fig.add_shape(type="rect") when working with more complex plots? Or should I give up and just manually wrangle with fig.layout.shapes instead of using the function calls? Examples are made with plotly 5.15.0.
The problem with shapes in subplots arises from how plotly assigns axis references (xref and yref). add_vrect automatically maps shapes to subplots using row and col which can fail in complex layouts. For more control, use add_shape. For a rectangle that always stretches vertically across the entire plot, similar to vrect, define it with reference to a secondary y-axis. This ensures it remains fixed from top to bottom even when zooming. Here is an example: import plotly.graph_objects as go from plotly.subplots import make_subplots import pandas as pd data = { "time": pd.date_range("2023-01-01", periods=10, freq="D"), "value1": [10, 15, 20, 25, 30, 35, 40, 45, 50, 55], "value2": [5, 10, 15, 10, 5, 10, 15, 10, 5, 10], } df = pd.DataFrame(data) fig = make_subplots(rows=2, cols=1, shared_xaxes=True, specs=[[{"secondary_y": True}], [{}]]) fig.add_shape( type="rect", x0="2023-01-03", x1="2023-01-06", y0=0, y1=1, xref="x1", yref="y2", secondary_y=True, fillcolor="blue", opacity=0.3, line_width=0, ) fig.add_scatter(x=df["time"], y=df["value1"], row=1, col=1, name="Value 1") fig.add_scatter(x=df["time"], y=df["value2"], row=2, col=1, name="Value 2") fig.update_yaxes(secondary_y=True, range=[0, 1], fixedrange=True, showgrid=False, visible=False, row=1, col=1) fig.update_layout(title="Shapes with Multiple Subplots") fig.show()
1
2
79,199,298
2024-11-18
https://stackoverflow.com/questions/79199298/python-pandas-str-extract-with-one-capture-group-only-works-in-some-cases
I have a single column in a big datasheet which I want to change, by extracting substrings from the string in that column. I do this by using str.extract on that column like so: Groups Group (A) Group (B) Group (CA) Group (CB) Group (G) Group (XP) What I want to get is the following: Groups (A) (B) (CA) (CB) (G) (XP) I did try doing this with str.extract as mentioned above, because the datasheet is transformed into a Dataframe for more data transformation stuff beforehand. Usually this works just fine, but in this case it doesn't. The relevant code is rule = "(\(A\)|\(B\)|\(G\)|\(CA\)|\(CB\)|\(XP\))" df["Groups"] = df["Groups"].str.extract(rule, expand=True) For some ungodly reason, it only extracts (A) and (B), not (G), nor any other characters. What am I doing wrong? Edit: The code around the whole application is super convoluted, badly maintained and overall not all that stable, so the issue may be somewhere else. But since this is the first and only case where this happened in any transformation, because it usually works fine for other cases, and since I managed to isolate it pretty well, the mistake should be within the issue described above.
This works fine in my hands, you have to use a raw string (r'...') to avoid the DeprecationWarning: rule = r'(\(A\)|\(B\)|\(G\)|\(CA\)|\(CB\)|\(XP\))' df['out'] = df['Groups'].str.extract(rule, expand=True) Another, more generic, option could be to allow anything between the parentheses, except parentheses themselves: rule = r'(\([^()]+\))' df['out2'] = df['Groups'].str.extract(rule, expand=True) Or, using a non-greedy quantifier: rule = r'(\(.+?\))' df['out3'] = df['Groups'].str.extract(rule, expand=True) Output: Groups out out2 out3 0 Group (A) (A) (A) (A) 1 Group (B) (B) (B) (B) 2 Group (CA) (CA) (CA) (CA) 3 Group (CB) (CB) (CB) (CB) 4 Group (G) (G) (G) (G) 5 Group (XP) (XP) (XP) (XP) Regex demos: original regex (\([^()]+\)) (\(.+?\)) If, really, the first approach doesn't work for you, you might have hidden characters that cause the regex to fail.
1
1
79,199,034
2024-11-18
https://stackoverflow.com/questions/79199034/how-to-read-a-part-of-parquet-dataset-into-pandas
I have a huge dataframe and want to split it into small files for better performance. Here is the example code to write. BUT I can not just read a small pieces from it without loading whole dataframe into memory. import pandas as pd import os # Create a sample DataFrame with daily frequency data = { "timestamp": pd.date_range(start="2023-01-01", periods=1000, freq="D"), "value": range(100) } df = pd.DataFrame(data) # Add a column for year (to use as a partition key) df["year"] = df["timestamp"].dt.year df["month"] = df["timestamp"].dt.month # Use the join method to expand the DataFrame (Cartesian product with a multiplier) multiplier = pd.DataFrame({"replica": range(100)}) # Create a multiplier DataFrame expanded_df = df.join(multiplier, how="cross") # Cartesian product using cross join # Define the output directory output_dir = "output_parquet" # Save the expanded DataFrame to Parquet with year-based partitioning expanded_df.to_parquet( output_dir, partition_cols=["year", "month"], # Specify the partition column ) Which is the best way to read from the dataset if I only need data from 2023-12-01 to 2024-01-31?
You can load your dataset selectively if your parquet output is properly partitioned. Libraries like PyArrow let you filter the data at the file or partition level to make sure only the relevant data is loaded into memory. Here's how you can do it using pyarrow.dataset: import pyarrow.dataset as ds dataset = ds.dataset("output_parquet", format="parquet", partitioning="hive") filtered_table = dataset.to_table(filter=(ds.field("year") == 2023) & (ds.field("month") == 12)) # Or any other desired condition filtered_df = filtered_table.to_pandas() You may also use pyarrow.parquet.ParquetDataset to achieve the same goal, but it's less optimized and a bit outdated: import pyarrow.parquet as pq dataset = pq.ParquetDataset("output_parquet", filters=[("year", "=", 2023), ("month", "=", 12)]) table = dataset.read() df = table.to_pandas()
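For the concrete 2023-12-01 to 2024-01-31 range from the question, the partition filter can simply combine both months (a sketch reusing the same pyarrow.dataset API as above):
import pyarrow.dataset as ds

dataset = ds.dataset("output_parquet", format="parquet", partitioning="hive")
dec_2023 = (ds.field("year") == 2023) & (ds.field("month") == 12)
jan_2024 = (ds.field("year") == 2024) & (ds.field("month") == 1)
df = dataset.to_table(filter=dec_2023 | jan_2024).to_pandas()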
1
3
79,192,572
2024-11-15
https://stackoverflow.com/questions/79192572/how-can-i-abbreviate-phrases-using-polars-built-in-methods
I need to abbreviate a series or expression of phrases by extracting the capitalized words and then creating an abbreviation based on their proportional lengths. Here's what I'm trying to achieve: Extract capitalized words from each phrase. Calculate proportional lengths based on the total length of the capitalized words in each phrase. Adjust the lengths to ensure the abbreviation meets a target length (e.g., 4 characters). TODO: If the abbreviation results in duplicates, I need to either: Automatically resolve them (e.g., by adding numbers or modifying characters) Flag them with a warning. Currently, I’m using a Python function and mapping it across a Polars Series. Is there a more efficient way to do it using Polars built-in methods. Here's my current approach: import polars as pl def _abbreviate_phrase(phrase: str, length: int) -> str: """Abbreviate a single phrase by a constant length. The function aims to abbreviate phrases into a constant length by focusing on capitalized words and adjusting them according to their proportional lengths. Example: phrase = 'Commercial & Professional' length = 4 res = _abbreviate_phrase(phrase, length) print(res) # CoPr """ # determine size of slices capitalized_words = [word for word in phrase.split(' ') if word[0].isupper()] word_lengths = [len(word) for word in capitalized_words] total_word_length = sum(word_lengths) if total_word_length == 0: return '' # Return empty if no capitalized words proportional_lengths = [round(wl / total_word_length * length) for wl in word_lengths] total_proportional_length = sum(proportional_lengths) # Adjust slices if their total length doesn't match target length if total_proportional_length < length: for i in range(length - total_proportional_length): proportional_lengths[i] += 1 elif total_proportional_length > length: for i in range(total_proportional_length - length): proportional_lengths[i] -= 1 # Combine the abbreviated words and return the result abbreviated_phrase = ''.join([word[:plength] for word, plength in zip(capitalized_words, proportional_lengths)]) return abbreviated_phrase def abbreviate_phrases(phrases: pl.Series, length: int) -> pl.Series: """Abbreviate phrases by a constant length. 
Example: phrases = pl.Series([ 'Sunshine', 'Sunset', 'Climate Change and Environmental Impact', 'Health and Wellness', 'Quantum Computing and Physics', 'Global Warming and Renewable Resources' ]) length = 4 res = abbreviate_phrases(phrases, length) print(res) # Series: '' [str] # [ # "Suns" # "Suns" # "CEnI" # "HeWe" # "QCoP" # "GWRR" # ] """ abbreviates = phrases.map_elements(lambda x: _abbreviate_phrase(x, length), return_dtype=pl.String) # if not abbreviates.is_unique().all(): # print('WARNING: There are duplicated abbreviations.') return abbreviates Edit: performance comparison setup for proposed solutions import sys import timeit def generate_phrases(n: int) -> pl.DataFrame: # Repeat the original phrases 'n' times phrases = pl.DataFrame({ "p": ['Climate Change and Environmental Impact', 'Health and Wellness', 'Quantum Computing and Physics', 'Global Warming and Renewable Resources', 'no capital letters'] }) return pl.concat([phrases.with_columns(pl.col('p')+'_'+str(i)) for i in range(n)]) def compare_performance(n: int, length: int = 4): phrases = generate_phrases(n) abbreviate_phrases_time = timeit.timeit(lambda: abbreviate_phrases(phrases['p'], length), number=10) abbreviate_phrases_harbeck_time = timeit.timeit(lambda: abbreviate_phrases_harbeck(phrases, phrase_column="p", length=length), number=10) abbreviate_phrases_jqurious_time = timeit.timeit(lambda: abbreviate_phrases_jqurious(phrases['p'], length=length), number=10) abbreviate_phrases_rle_time = timeit.timeit(lambda: abbreviate_phrases_rle(phrases['p'], length=length), number=10) ratio_harbeck = abbreviate_phrases_time / abbreviate_phrases_harbeck_time ratio_jqurious = abbreviate_phrases_time / abbreviate_phrases_jqurious_time ratio_rle = abbreviate_phrases_time / abbreviate_phrases_rle_time return ratio_harbeck, ratio_jqurious, ratio_rle Edit: performance comparison results for proposed solutions n = 200_000 ratio_harbeck, ratio_jqurious, ratio_rle = compare_performance(n=n, length=4) print(f"Performance ratio ({n*5} rows):") print(f" original/harbeck {ratio_harbeck:.2f}x") print(f" original/jqurious {ratio_jqurious:.2f}x") print(f" original/rle {ratio_rle:.2f}x") print() print(f'python {sys.version.split(' ')[0]}') print(f'polars {pl.__version__}') # Performance ratio (1000000 rows): # original/harbeck 0.70x # original/jqurious 1.30x # original/rle 1.79x # python 3.12.0 # polars 1.12.0
It is possible, but it's probably not an ideal task to perform this way. This is the fastest approach I've found so far. It runs ~2.5x faster for me with 1_000_000 phrases. (1.5s vs 3.9s) def abbreviate_phrases(s: pl.Series, length: int = 4): phrases = pl.col(s.name) is_upper = pl.element().filter(pl.element().str.contains(r"^\p{Upper}")) words = phrases.list.eval(is_upper) len_chars = phrases.list.eval(is_upper.str.len_chars()) total_len = len_chars.list.sum() proportion_len = ( (len_chars / total_len * length) .list.eval(pl.element().round().cast(int)) .alias("proportion_len") ) total_proportion_len = ( (pl.col("proportion_len").list.sum() - length) .alias("total_proportion_len") ) adjustments = ( pl.concat_list( (-pl.col("total_proportion_len")) .repeat_by(pl.col("total_proportion_len").abs()), pl.lit(0) .repeat_by( pl.col("proportion_len").list.len() - (pl.col("total_proportion_len").abs()) ) ) ) return ( s .to_frame() .with_row_index() .with_columns(phrases.str.split(" ")) .with_columns(words, proportion_len) .with_columns(total_proportion_len) .with_columns(pl.col("proportion_len") + adjustments) .explode(pl.exclude("index", "total_proportion_len")) .with_columns(phrases.str.slice(0, pl.col("proportion_len"))) .group_by("index", maintain_order=True) .agg(phrases) .select(phrases.list.join("")) .to_series() ) Some observations: adjustments seems to be the most expensive operation and takes ~50% of the runtime. .agg().select(phrases.list.join("")) was considerably faster than .agg(phrases.str.join()) which was surprising. Update: Slightly faster version of adjustments using .rle() (this goes from 1.5s to 1.0s for me) (inspired by the when/then approach from @HenryHarbeck's answer) def abbreviate_phrases_rle(s: pl.Series, length = 4): phrases = pl.col(s.name) is_upper = pl.element().filter(pl.element().str.contains(r"^\p{Upper}")) words = phrases.list.eval(is_upper) word_lengths = phrases.list.eval(is_upper.str.len_chars()) total_length = word_lengths.list.sum() proportional_length = ( (word_lengths / total_length * length) .list.eval(pl.element().round().cast(int)) .alias("proportional_length") ) length_diff = ( (pl.col("proportional_length").list.sum() - length) .alias("length_diff") ) rle = ( pl.int_ranges(pl.col("index").rle().struct.field("len")) .flatten() ) adjustments = ( pl.when( pl.col("length_diff") != 0, pl.col("length_diff").abs() > rle ) .then(pl.when(pl.col("length_diff") < 0).then(1).otherwise(-1)) .otherwise(0) ) return ( s .to_frame() .with_row_index() .with_columns(phrases.str.split(" ")) .with_columns(words, proportional_length) .with_columns(length_diff) .explode(pl.exclude("index", "length_diff")) .with_columns(phrases.str.slice(0, pl.col("proportional_length") + adjustments)) .group_by("index", maintain_order=True) .agg(phrases) .select(phrases.list.join("")) .to_series() ) Polars plugins To do this "really" efficiently, you'd likely need to implement it in Rust as a Polars Plugin. If I take the parallel pig latinify example as a template. (I'm a Rust beginner, so may have missed some things.) 
use polars_utils::cache::FastFixedCache; use pyo3_polars::export::polars_core::export::regex::Regex; fn abbreviate_str(value: &str, reg: &mut Regex, output: &mut String) { let max_len = 4; let mut total_len = 0; let mut word_lengths = vec![]; let mut words = vec![]; if value.len() > 0 { for word in value.split_whitespace() { if reg.is_match(word) { let word_len = word.chars().count(); total_len += word_len; word_lengths.push(word_len); words.push(word); } } let mut total_proportional_len = 0; let mut proportional_lengths: Vec<_> = word_lengths .iter() .map(|wl| { let prop_len = (*wl as f64 / total_len as f64 * max_len as f64).round() as usize; total_proportional_len += prop_len; prop_len }) .collect(); if total_proportional_len > max_len { let diff = total_proportional_len - max_len; for idx in 0..diff { proportional_lengths[idx] -= 1 } } else if total_proportional_len < max_len { let diff = max_len - total_proportional_len; for idx in 0..diff { proportional_lengths[idx] += 1 } } words .iter() .zip(proportional_lengths.iter()) .for_each(|(word, prop_len)| { let end = match word.char_indices().map(|(idx, _)| idx).nth(*prop_len) { Some(end) => end, _ => word.len(), }; write!(output, "{}", &word[..end]) .unwrap() }) } } #[polars_expr(output_type=String)] fn abbreviate_with_parallelism( inputs: &[Series], context: CallerContext, //kwargs: PigLatinKwargs, ) -> PolarsResult<Series> { use rayon::prelude::*; let ca = inputs[0].str()?; let pat = r"^\p{Upper}"; if context.parallel() { let mut reg_cache = FastFixedCache::new(1); let out: StringChunked = ca.apply_into_string_amortized(|value, output| { let reg = reg_cache .try_get_or_insert_with(pat, |p| Regex::new(p)) .unwrap(); abbreviate_str(value, reg, output) }); Ok(out.into_series()) } else { POOL.install(|| { let n_threads = POOL.current_num_threads(); let splits = split_offsets(ca.len(), n_threads); let chunks: Vec<_> = splits .into_par_iter() .map(|(offset, len)| { let mut reg_cache = FastFixedCache::new(1); let sliced = ca.slice(offset as i64, len); let out = sliced.apply_into_string_amortized(|value, output| { let reg = reg_cache .try_get_or_insert_with(pat, |p| Regex::new(p)) .unwrap(); abbreviate_str(value, reg, output) }); out.downcast_iter().cloned().collect::<Vec<_>>() }) .collect(); Ok( StringChunked::from_chunk_iter(ca.name().clone(), chunks.into_iter().flatten()) .into_series(), ) }) } } For comparison, this runs in 0.1s (for 10 million rows I get 1.1s vs 16.1s) https://docs.pola.rs/user-guide/plugins/your-first-polars-plugin/ https://marcogorelli.github.io/polars-plugins-tutorial/stringify/
3
2
79,198,324
2024-11-17
https://stackoverflow.com/questions/79198324/why-does-pythons-structural-pattern-matching-not-support-multiple-assignment
I was experimenting with Python Structural Pattern Matching and wanted to write a match statement capable of matching repeated occurrences in a sequence. Suppose that I have the following list of tuples and was trying to match every pair in which the first element of the tuple is the same. from itertools import combinations objs = (('a', 'b'), ('c', 'd', 'e'), ('f',), ('a', 'g'), ('b', 'a')) pairs = list(combinations(objs, 2)) The goal was my code to match the pair: (('a', 'b'), ('a', 'g')) as both tuples starts with 'a'. My first attempt was: from itertools import combinations objs = (('a', 'b'), ('c', 'd', 'e'), ('f',), ('a', 'g'), ('b', 'a')) pairs = list(combinations(objs, 2)) for pair in pairs: match pair: # This raises Syntax Error! case ((x, *y), (x, *z)): print('Found!') case _: pass However this raises: SyntaxError: multiple assignments to name 'x' in pattern. A possible work around I guess is to use guards in which one gives different names such as x1 and x2 and then matches only if x1 == x2, however this can quickly become quite messy if I want to enforce more than one equality. In my opinion using the same binding name is both elegant and practical with the obvious implication that the two should be the same. And so I ask, is there any reason why Structural Pattern Matching enforces the variable names to be different? Is there a nicer way to accomplish this that I'm missing?
From PEP 635 – Structural Pattern Matching: Motivation and Rationale, section Capture Patterns: A name used for a capture pattern must not coincide with another capture pattern in the same pattern. This, again, is similar to parameters, which equally require each parameter name to be unique within the list of parameters. It differs, however, from iterable unpacking assignment, where the repeated use of a variable name as target is permissible (e.g., x, x = 1, 2). The rationale for not supporting (x, x) in patterns is its ambiguous reading: it could be seen as in iterable unpacking where only the second binding to x survives. But it could be equally seen as expressing a tuple with two equal elements (which comes with its own issues). Should the need arise, then it is still possible to introduce support for repeated use of names later on.
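Until such support lands, the guard mentioned in the question is the idiomatic workaround; a minimal sketch using the data from the question:

for pair in pairs:
    match pair:
        case ((x, *_), (y, *_)) if x == y:
            print('Found!', pair)
        case _:
            pass

This prints Found! (('a', 'b'), ('a', 'g')), and each additional equality you want to enforce only adds one more comparison to the guard.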
2
5
79,198,199
2024-11-17
https://stackoverflow.com/questions/79198199/how-do-i-stop-legends-from-being-merged-when-vertically-concatenating-two-plots
Consider the following small example (based on this gallery example): import altair as alt from vega_datasets import data import polars as pl # add a column indicating the year associated with each date source = pl.from_pandas(data.stocks()).with_columns(year=pl.col.date.dt.year()) # an MSFT specific plot msft_plot = ( alt.Chart(source.filter(pl.col.symbol.eq("MSFT"))) .mark_line() .encode(x="date:T", y="price:Q", color="year:O") ) # the original plot: https://altair-viz.github.io/gallery/line_chart_with_points.html all_plot = ( alt.Chart(source) .mark_line() .encode(x="date:T", y="price:Q", color="symbol:N") ) msft_plot & all_plot This produces the following output: On the other hand, if I only plot all_plot: How do I stop the legends from being merged when I concatenate msft_plot & all_plot?
You can use (msft_plot & all_plot).resolve_scale(color='independent'): More info about the resolve_ methods can be found in this section of the docs.
1
2
79,197,104
2024-11-17
https://stackoverflow.com/questions/79197104/difference-between-single-and-table-methods-in-pandas-dataframe-quantile
I hope someone can help me understand the difference between the "single" and "table" methods in pandas.DataFrame.quantile? Whether to compute quantiles per-column (‘single’) or over all columns (‘table’). https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.quantile.html For example, the following code yields the same results. import numpy as np import pandas as pd df = pd.DataFrame(np.array([[1, 1], [2, 10], [3, 100], [4, 100]]), columns=["a", "b"]) print(df.quantile(method="single", interpolation="nearest")) print(df.quantile(method="table", interpolation="nearest")) a 3 b 100 Name: 0.5, dtype: int64 a 3 b 100 Name: 0.5, dtype: int64
If you invert the order of the values in b the result will be different: df['b'] = df['b'].values[::-1] print(df.quantile(method='single', interpolation='nearest')) print(df.quantile(method='table', interpolation='nearest')) Output: a 3 b 100 Name: 0.5, dtype: int64 a 3 b 10 Name: 0.5, dtype: int64 The exact difference in behavior is a bit tricky to understand. This happens in core/frame.py: if method == "single": res = data._mgr.quantile(qs=q, interpolation=interpolation) elif method == "table": valid_interpolation = {"nearest", "lower", "higher"} if interpolation not in valid_interpolation: raise ValueError( f"Invalid interpolation: {interpolation}. " f"Interpolation must be in {valid_interpolation}" ) # handle degenerate case if len(data) == 0: if data.ndim == 2: dtype = find_common_type(list(self.dtypes)) else: dtype = self.dtype return self._constructor([], index=q, columns=data.columns, dtype=dtype) q_idx = np.quantile(np.arange(len(data)), q, method=interpolation) by = data.columns if len(by) > 1: keys = [data._get_label_or_level_values(x) for x in by] indexer = lexsort_indexer(keys) else: k = data._get_label_or_level_values(by[0]) indexer = nargsort(k) res = data._mgr.take(indexer[q_idx], verify=False) res.axes[1] = q result = self._constructor_from_mgr(res, axes=res.axes) return result.__finalize__(self, method="quantile") In short, if you use method='single', this computes: np.percentile(a, 50, axis=0, method='nearest') # array([ 3, 100]) With method='table', this computes the quantile on the indexer (np.arange(len(df))). The result with be an integer (q_idx) between 0 and len(df). The, this sorts the rows with lexsort_indexer, giving priority to the first column(s), and finally takes the (q_idx)th row: data = df q = 0.5 interpolation = 'nearest' q_idx = np.quantile(np.arange(len(data)), q, method=interpolation) # 2 indexer = lexsort_indexer([data._get_label_or_level_values(x) for x in data.columns]) # array([0, 1, 2, 3]) df.iloc[indexer] # a b # 0 1 100 # 1 2 100 # 2 3 10 # this row will be picked # 3 4 1 data.iloc[indexer[q_idx]] # a 3 # b 10 # Name: 2, dtype: int64 This means that the result is dependent on the order of the columns: print(df.quantile(method='table', interpolation='nearest')) a 3 b 10 Name: 0.5, dtype: int64 # now let's give b priority over a print(df.iloc[:, ::-1].quantile(method='table', interpolation='nearest')) b 100 a 1 Name: 0.5, dtype: int64
1
1
79,186,201
2024-11-13
https://stackoverflow.com/questions/79186201/converting-pl-duration-to-human-string
When printing a polars data frame, pl.Duration values are printed in a "human format" by default. What function is used to do this conversion? Is it possible to use it? Trying "{}".format() returns something readable but not as good. import polars as pl data = {"end": ["2024/11/13 10:28:00", "2024/10/10 10:10:10", "2024/09/13 09:12:29", "2024/08/31 14:57:02", ], "start": ["2024/11/13 10:27:33", "2024/10/10 10:01:01", "2024/09/13 07:07:07", "2024/08/25 13:48:28", ] } df = pl.DataFrame(data) df = df.with_columns( pl.col("end").str.to_datetime(), pl.col("start").str.to_datetime(), ) df = df.with_columns( duration = pl.col("end") - pl.col("start"), ) df = df.with_columns( pl.col("duration").map_elements(lambda t: "{}".format(t), return_dtype=pl.String()).alias("duration_str") ) print(df) shape: (4, 4) ┌─────────────────────┬─────────────────────┬──────────────┬─────────────────┐ │ end ┆ start ┆ duration ┆ duration_str │ │ --- ┆ --- ┆ --- ┆ --- │ │ datetime[μs] ┆ datetime[μs] ┆ duration[μs] ┆ str │ ╞═════════════════════╪═════════════════════╪══════════════╪═════════════════╡ │ 2024-11-13 10:28:00 ┆ 2024-11-13 10:27:33 ┆ 27s ┆ 0:00:27 │ │ 2024-10-10 10:10:10 ┆ 2024-10-10 10:01:01 ┆ 9m 9s ┆ 0:09:09 │ │ 2024-09-13 09:12:29 ┆ 2024-09-13 07:07:07 ┆ 2h 5m 22s ┆ 2:05:22 │ │ 2024-08-31 14:57:02 ┆ 2024-08-25 13:48:28 ┆ 6d 1h 8m 34s ┆ 6 days, 1:08:34 │ └─────────────────────┴─────────────────────┴──────────────┴─────────────────┘
Polars 1.14.0 added Duration type support to .dt.to_string() It can produce iso and polars formatted strings. pl.select( pl.duration(hours=1, minutes=2).dt.to_string() # format="iso" ).item() # 'PT1H2M' pl.select( pl.duration(hours=1, minutes=2).dt.to_string(format="polars") ).item() # '1h 2m'
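Applied to the DataFrame from the question (assuming Polars >= 1.14), the map_elements call can be replaced with a native expression:

df = df.with_columns(
    duration_str=pl.col("duration").dt.to_string(format="polars"),
)

This produces the same "human format" strings (27s, 9m 9s, ...) that the printed table shows.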
3
2
79,186,983
2024-11-13
https://stackoverflow.com/questions/79186983/how-to-render-latex-in-shiny-for-python
I'm trying to find if there is a way to render LaTeX formulas in Shiny for Python or any low-hanging fruit workaround for that. Documentation doesn't have any LaTeX mentions, so looks like there's no dedicated functionality to support it. Also double-checked different variations of Latex in their playground. Tried this but didn't work: from shiny.express import input, render, ui @render.text def txt(): equation = r"$$\[3 \times 3+3-3 \]$$".strip() return equation
You can import Katex. I got here via https://stackoverflow.com/a/65540803/5599595. Running in shinylive from shiny.express import ui from shiny import render with ui.tags.head(): # Link KaTeX CSS ui.tags.link( rel="stylesheet", href="https://cdn.jsdelivr.net/npm/[email protected]/dist/katex.min.css" ), ui.tags.script(src="https://cdn.jsdelivr.net/npm/[email protected]/dist/katex.min.js"), ui.tags.script(src="https://cdn.jsdelivr.net/npm/[email protected]/dist/contrib/auto-render.min.js"), ui.tags.script(""" document.addEventListener('DOMContentLoaded', function() { renderMathInElement(document.body); }); """) with ui.card(): ui.p("Here's a quadratic formula: \\[x = \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}\\]") ui.p("And an inline equation: \\(E = mc^2\\)") ui.p("\\[3 \\times 3+3-3 \\]") Here's an alternative which can render the array running in shiny live: from shiny.express import ui from shiny import render with ui.tags.head(): ui.tags.link( rel="stylesheet", href="https://cdn.jsdelivr.net/npm/[email protected]/dist/katex.min.css" ), ui.tags.script(src="https://cdn.jsdelivr.net/npm/[email protected]/dist/katex.min.js"), ui.tags.script(src="https://cdn.jsdelivr.net/npm/[email protected]/dist/contrib/auto-render.min.js"), ui.tags.script(""" document.addEventListener('DOMContentLoaded', function() { renderMathInElement(document.body, { delimiters: [ {left: "$$", right: "$$", display: true}, {left: "\\[", right: "\\]", display: true}, {left: "$", right: "$", display: false}, {left: "\\(", right: "\\)", display: false} ] }); }); """) with ui.card(): ui.p("Here's a quadratic formula: $$x = \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}$$") ui.p("And an inline equation: $$E = mc^2$$") ui.p("And simple math $$3 \\times 3+3-3$$") # KaTeX tables https://www.redgregory.com/notion/2020/12/23/a-katex-table-cheatsheet-for-notion ui.p("Table 1 $$ \\begin{array}{cc} a & b \\\\ c & d \\end{array} $$") ui.p("""Table 2 $$ \\begin{array} {|c|c|} \\hline A & B \\\\ \\hline 1 & 2 \\\\ \\hline 3 & 4 \\\\ \\hline \\end{array} $$ """.strip())
3
2
79,196,656
2024-11-17
https://stackoverflow.com/questions/79196656/networkx-graph-get-groups-of-linked-connected-values-with-multiple-values
If I use data such as this import networkx as nx G = nx.Graph() G.add_nodes_from([1, 2, 3, 4, 5, 6, 7]) G.add_edges_from([(1, 2), (1, 3), (2, 4), (5, 6)]) print(list(nx.connected_components(G))) everything works fine. But what if I need to get connected values from tuples with more than two elements, such as the following? import networkx as nx G = nx.Graph() G.add_nodes_from([1, 2, 3, 4, 5, 6, 7]) G.add_edges_from([(1, 2), (1, 3, 7), (2, 4, 1, 6), (5, 6)]) print(list(nx.connected_components(G))) As you can see, this is not the classic (source, target) form and it does not work. What methods can I implement in order to pass such data, so that I get the connected values? I expect to get groups of values that are connected to each other.
The issue is that add_edges_from only takes a list of (source, target) tuples. You could use itertools: import networkx as nx from itertools import chain, pairwise G = nx.Graph() G.add_nodes_from([1, 2, 3, 4, 5, 6, 7]) edges = [(1, 2), (1, 3, 7), (2, 4, 1, 6), (5, 6)] G.add_edges_from(chain.from_iterable(map(pairwise, edges))) print(list(nx.connected_components(G))) Variant without import: G.add_edges_from(e for t in edges for e in zip(t, t[1:])) Output: [{1, 2, 3, 4, 5, 6, 7}] Graph: Intermediates: # list(chain.from_iterable(map(pairwise, edges))) # [e for t in edges for e in zip(t, t[1:])] [(1, 2), (1, 3), (3, 7), (2, 4), (4, 1), (1, 6), (5, 6)]
2
2
79,195,581
2024-11-16
https://stackoverflow.com/questions/79195581/json-normalization-record-path-key-not-found
This post was edited to grab the actual JSON file (large) instead of the sample snippet that I extracted,(which works in this post). I was wondering why I get a key error when i use record_path on this data set. under the results key there are 2 nested keys named 'active_ingredients' and 'packaging' when i normalize i get result = pd.json_normalize(data['results'], record_path=["packaging"],meta=['product_ndc']) the expected columns package_ndc description marketing_start_date sample marketing_end_date product_ndcs but when i add active_ingredients to the record_path list i get a key error. The same goes for meta as well. When i add the other columns like 'brand_name' and 'generic_name' to the meta list, I get a key error. to see the keys this doesnt work result = pd.json_normalize(data['results'], record_path=["packaging","active_ingredients"],meta=['product_ndc','brand_name','generic_name']) Thanks for any help Here is the actual code I use to grab the data which produces the key error. import pandas as pd import json import requests, zipfile, io, os cwd = os.getcwd() zip_url = 'https://download.open.fda.gov/drug/ndc/drug-ndc-0001-of-0001.json.zip' r = requests.get(zip_url) z = zipfile.ZipFile(io.BytesIO(r.content)) z.extractall(cwd) with open('drug-ndc-0001-of-0001.json', 'r') as file: data = json.load(file) packaging_data = pd.json_normalize( data['results'], record_path=["packaging"], meta=['product_ndc', 'brand_name', 'generic_name'] ) active_ingredients_data = pd.json_normalize( data['results'], record_path=["active_ingredients"], meta=['product_ndc', 'brand_name', 'generic_name'] ) i paired it with your answer and getting the same issues I had before I posted the question.
When you specify multiple record_path entries (like "packaging" and "active_ingredients"), pandas expects that the second record_path ("active_ingredients") exists within every element of the first record_path ("packaging"), but, in your data, active_ingredients is not a nested property of packaging Do this to solve this import pandas as pd data = { "meta": { "disclaimer": "Do not rely on openFDA to make decisions regarding medical care. While we make every effort to ensure that data is accurate, you should assume all results are unvalidated. We may limit or otherwise restrict your access to the API in line with our Terms of Service.", "terms": "https://open.fda.gov/terms/", "license": "https://open.fda.gov/license/", "last_updated": "2024-11-15", "results": { "skip": 0, "limit": 2, "total": 118943 } }, "results": [ { "product_ndc": "73647-062", "generic_name": "MENTHOL, CAMPHOR", "labeler_name": "Just Brands LLC", "brand_name": "JUST CBD - CBD AND THC ULTRA RELIEF", "active_ingredients": [ { "name": "CAMPHOR (SYNTHETIC)", "strength": "2 g/100g" }, { "name": "MENTHOL", "strength": "6 g/100g" } ], "finished": True, "packaging": [ { "package_ndc": "73647-062-04", "description": "113 g in 1 BOTTLE, PUMP (73647-062-04)", "marketing_start_date": "20230314", "sample": False } ], "listing_expiration_date": "20251231", "openfda": { "manufacturer_name": ["Just Brands LLC"], "spl_set_id": ["f664eb79-8897-3a49-e053-2995a90a37b4"], "is_original_packager": [True], "unii": ["5TJD82A1ET", "L7T10EIP3A"] }, "marketing_category": "OTC MONOGRAPH DRUG", "dosage_form": "GEL", "spl_id": "16c906dd-6989-9a79-e063-6394a90afa71", "product_type": "HUMAN OTC DRUG", "route": ["TOPICAL"], "marketing_start_date": "20230314", "product_id": "73647-062_16c906dd-6989-9a79-e063-6394a90afa71", "application_number": "M017", "brand_name_base": "JUST CBD - CBD AND THC ULTRA RELIEF" }, { "product_ndc": "0591-4039", "marketing_end_date": "20250930", "generic_name": "CLOBETASOL PROPIONATE", "labeler_name": "Actavis Pharma, Inc.", "brand_name": "CLOBETASOL PROPIONATE", "active_ingredients": [ { "name": "CLOBETASOL PROPIONATE", "strength": ".05 g/mL" } ], "finished": True, "packaging": [ { "package_ndc": "0591-4039-46", "description": "1 BOTTLE in 1 CARTON (0591-4039-46) / 59 mL in 1 BOTTLE", "marketing_start_date": "20150828", "marketing_end_date": "20250930", "sample": False }, { "package_ndc": "0591-4039-74", "description": "1 BOTTLE in 1 CARTON (0591-4039-74) / 125 mL in 1 BOTTLE", "marketing_start_date": "20150828", "marketing_end_date": "20250930", "sample": False } ], "openfda": { "manufacturer_name": ["Actavis Pharma, Inc."], "rxcui": ["861512"], "spl_set_id": ["907e425a-720a-4180-b97c-9e25008a3658"], "is_original_packager": [True], "unii": ["779619577M"] }, "marketing_category": "NDA AUTHORIZED GENERIC", "dosage_form": "SPRAY", "spl_id": "33a56b8b-a9a6-4287-bbf4-d68ad0c59e07", "product_type": "HUMAN PRESCRIPTION DRUG", "route": ["TOPICAL"], "marketing_start_date": "20150828", "product_id": "0591-4039_33a56b8b-a9a6-4287-bbf4-d68ad0c59e07", "application_number": "NDA021835", "brand_name_base": "CLOBETASOL PROPIONATE", "pharm_class": [ "Corticosteroid Hormone Receptor Agonists [MoA]", "Corticosteroid [EPC]" ] } ] } packaging_data = pd.json_normalize( data['results'], record_path=["packaging"], meta=['product_ndc', 'brand_name', 'generic_name'] ) active_ingredients_data = pd.json_normalize( data['results'], record_path=["active_ingredients"], meta=['product_ndc', 'brand_name', 'generic_name'] ) combined_data = pd.merge( 
packaging_data, active_ingredients_data, on=['product_ndc', 'brand_name', 'generic_name'], how='outer' ) print(packaging_data) print(active_ingredients_data) print(combined_data) which gives package_ndc description \ 0 73647-062-04 113 g in 1 BOTTLE, PUMP (73647-062-04) 1 0591-4039-46 1 BOTTLE in 1 CARTON (0591-4039-46) / 59 mL i... 2 0591-4039-74 1 BOTTLE in 1 CARTON (0591-4039-74) / 125 mL ... marketing_start_date sample marketing_end_date product_ndc \ 0 20230314 False NaN 73647-062 1 20150828 False 20250930 0591-4039 2 20150828 False 20250930 0591-4039 brand_name generic_name 0 JUST CBD - CBD AND THC ULTRA RELIEF MENTHOL, CAMPHOR 1 CLOBETASOL PROPIONATE CLOBETASOL PROPIONATE 2 CLOBETASOL PROPIONATE CLOBETASOL PROPIONATE name strength product_ndc \ 0 CAMPHOR (SYNTHETIC) 2 g/100g 73647-062 1 MENTHOL 6 g/100g 73647-062 2 CLOBETASOL PROPIONATE .05 g/mL 0591-4039 brand_name generic_name 0 JUST CBD - CBD AND THC ULTRA RELIEF MENTHOL, CAMPHOR 1 JUST CBD - CBD AND THC ULTRA RELIEF MENTHOL, CAMPHOR 2 CLOBETASOL PROPIONATE CLOBETASOL PROPIONATE package_ndc description \ 0 0591-4039-46 1 BOTTLE in 1 CARTON (0591-4039-46) / 59 mL i... ... 0 CLOBETASOL PROPIONATE .05 g/mL 1 CLOBETASOL PROPIONATE .05 g/mL 2 CAMPHOR (SYNTHETIC) 2 g/100g 3 MENTHOL 6 g/100g EDIT I changed the naming to mirror your changes in your edit: The first script uses the variable names packaging_df, active_ingredients_df, and combined_df for the DataFrames related to packaging, active_ingredients, and their merged result, respectively, whereas the second script uses packaging_data, active_ingredients_data, and combined_data for the same purposes. The difference lies solely in the naming conventions, with no impact on functionality or logic. The output is the same, so if you still experience issues, it must come from something else, probably in something you do before. import pandas as pd data = { "meta": { "disclaimer": "Do not rely on openFDA to make decisions regarding medical care. While we make every effort to ensure that data is accurate, you should assume all results are unvalidated. 
We may limit or otherwise restrict your access to the API in line with our Terms of Service.", "terms": "https://open.fda.gov/terms/", "license": "https://open.fda.gov/license/", "last_updated": "2024-11-15", "results": { "skip": 0, "limit": 2, "total": 118943 } }, "results": [ { "product_ndc": "73647-062", "generic_name": "MENTHOL, CAMPHOR", "labeler_name": "Just Brands LLC", "brand_name": "JUST CBD - CBD AND THC ULTRA RELIEF", "active_ingredients": [ { "name": "CAMPHOR (SYNTHETIC)", "strength": "2 g/100g" }, { "name": "MENTHOL", "strength": "6 g/100g" } ], "finished": True, "packaging": [ { "package_ndc": "73647-062-04", "description": "113 g in 1 BOTTLE, PUMP (73647-062-04)", "marketing_start_date": "20230314", "sample": False } ], "listing_expiration_date": "20251231", "openfda": { "manufacturer_name": ["Just Brands LLC"], "spl_set_id": ["f664eb79-8897-3a49-e053-2995a90a37b4"], "is_original_packager": [True], "unii": ["5TJD82A1ET", "L7T10EIP3A"] }, "marketing_category": "OTC MONOGRAPH DRUG", "dosage_form": "GEL", "spl_id": "16c906dd-6989-9a79-e063-6394a90afa71", "product_type": "HUMAN OTC DRUG", "route": ["TOPICAL"], "marketing_start_date": "20230314", "product_id": "73647-062_16c906dd-6989-9a79-e063-6394a90afa71", "application_number": "M017", "brand_name_base": "JUST CBD - CBD AND THC ULTRA RELIEF" }, { "product_ndc": "0591-4039", "marketing_end_date": "20250930", "generic_name": "CLOBETASOL PROPIONATE", "labeler_name": "Actavis Pharma, Inc.", "brand_name": "CLOBETASOL PROPIONATE", "active_ingredients": [ { "name": "CLOBETASOL PROPIONATE", "strength": ".05 g/mL" } ], "finished": True, "packaging": [ { "package_ndc": "0591-4039-46", "description": "1 BOTTLE in 1 CARTON (0591-4039-46) / 59 mL in 1 BOTTLE", "marketing_start_date": "20150828", "marketing_end_date": "20250930", "sample": False }, { "package_ndc": "0591-4039-74", "description": "1 BOTTLE in 1 CARTON (0591-4039-74) / 125 mL in 1 BOTTLE", "marketing_start_date": "20150828", "marketing_end_date": "20250930", "sample": False } ], "openfda": { "manufacturer_name": ["Actavis Pharma, Inc."], "rxcui": ["861512"], "spl_set_id": ["907e425a-720a-4180-b97c-9e25008a3658"], "is_original_packager": [True], "unii": ["779619577M"] }, "marketing_category": "NDA AUTHORIZED GENERIC", "dosage_form": "SPRAY", "spl_id": "33a56b8b-a9a6-4287-bbf4-d68ad0c59e07", "product_type": "HUMAN PRESCRIPTION DRUG", "route": ["TOPICAL"], "marketing_start_date": "20150828", "product_id": "0591-4039_33a56b8b-a9a6-4287-bbf4-d68ad0c59e07", "application_number": "NDA021835", "brand_name_base": "CLOBETASOL PROPIONATE", "pharm_class": [ "Corticosteroid Hormone Receptor Agonists [MoA]", "Corticosteroid [EPC]" ] } ] } packaging_data = pd.json_normalize( data['results'], record_path=["packaging"], meta=['product_ndc', 'brand_name', 'generic_name'] ) active_ingredients_data = pd.json_normalize( data['results'], record_path=["active_ingredients"], meta=['product_ndc', 'brand_name', 'generic_name'] ) combined_data = pd.merge( packaging_data, active_ingredients_data, on=['product_ndc', 'brand_name', 'generic_name'], how='outer' ) print(packaging_data) print(active_ingredients_data) print(combined_data)
3
0
79,196,626
2024-11-17
https://stackoverflow.com/questions/79196626/python-polars-recursion
I've used Polars for some time now, but this is something that often makes me go from Polars DataFrames to native Python calculations. I've spent reasonable time looking for solutions that (try) to use shift(), rolling(), group_by_dynamic() and so on, but none is successful. Task Do a calculation that depends on the previous calculation's result, which is in the same column. Example in Excel In Excel this is like the most straightforward formula ever... if the "index" is zero I want to return "A", otherwise I want to return the result from the cell above. A B C 1 Index Result Formula for the "Result" column 2 0 A =IF(A2=0;"A";B1) 3 1 A =IF(A3=0;"A";B2) Where is the recursion In column "B" the formula refers to the previously calculated values in the same column "B". Copy & Paste Excel's solution to Polars # Import Polars module. import polars as pl # Create the data. data = {'Index': [0, 1]} # Create the DataFrame. df = pl.from_dict(data) # Add a column to the DataFrame. df = df.with_columns( # Tries to reproduce the Excel formula. Result = pl.when( pl.col('Index') == 0 ).then( pl.lit('A') ).otherwise( pl.col('Result') ) ) The issue Within the "with_columns()" method the "Result" column cannot be referred to because it doesn't exist in the DataFrame yet. If we try to do so, we get a ColumnNotFoundError: Question Any idea on how I can accomplish such a simple task in Polars? Thank you,
How can I do a calculation that depends on previous (row's) calculation result that is in the same column? The short answer is that you can't without falling back into Python. To do this, any library would need to essentially need to iterate over the rows, only calculating a single row at a time. This means any sort of vectorisation is not possible. Polars offers map_elements for this use-case, but it is discouraged. From the docs: This method is much slower than the native expressions API. Only use it if you cannot implement your logic otherwise. df = pl.DataFrame({'Index': [1, 0, 1, 1, 0]}) previous_result = "Result" # hardcode the header as the intial "previous result" def f(index): global previous_result out = "A" if index == 0 else previous_result previous_result = out return out print(df.with_columns(Result=pl.col("Index").map_elements(f, return_dtype=pl.String))) # shape: (5, 2) # β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” # β”‚ Index ┆ Result β”‚ # β”‚ --- ┆ --- β”‚ # β”‚ i64 ┆ str β”‚ # β•žβ•β•β•β•β•β•β•β•ͺ════════║ # β”‚ 1 ┆ Result β”‚ # β”‚ 0 ┆ A β”‚ # β”‚ 1 ┆ A β”‚ # β”‚ 1 ┆ A β”‚ # β”‚ 0 ┆ A β”‚ # β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜ The better solution is to attempt to recognise any pattern that allows the computation to be done in a vectorised way. In this (likely contrived) example, it is that once Index == 0 has been seen once, the result of the remainder of the column is "A" df.with_columns( # If the row number is >= to the first row an index of 0 was seen pl.when(pl.int_range(pl.len()) >= pl.arg_where(pl.col("Index") == 0).min()) .then(pl.lit("A")) .otherwise(pl.lit("Result")) .alias("Result") ) # Returns the same output as above
5
4
79,196,393
2024-11-17
https://stackoverflow.com/questions/79196393/pip-requirements-syntax-highlighting-in-github-markdown
According to GitHub syntax highlighting, keyword for pip requirements syntax highlighting can be found on languages.yml. According to the link, the keyword is Pip Requirements, but the following markdown snippet isn't highlighted on GitHub: ```Pip Requirements pandas==2.2.3 ``` How to syntax highlight pip requirements?
Use "pip-requirements" instead of "Pip Requirements": ```pip-requirements --editable . foo==1.2 bar==3.4 baz[quux]>=1.0.1 ``` It does not seem to render on Stack Overflow, but here is a screencap of markdown rendered from Github: Source: https://gist.github.com/wimglenn/204dd744240848d8cbf2b9beb6eb4a83
1
1
79,196,228
2024-11-16
https://stackoverflow.com/questions/79196228/pydirectinput-entering-incorrect-key
Given the below code I am expecting the keyboard to press the page down key. However, instead it is pressing the number 3. All other keys I have used appear to be working correctly. How do I get this mapped correctly? import time import pydirectinput time.sleep(2) pydirectinput.keyDown('pagedown') time.sleep(0.01) pydirectinput.keyUp('pagedown')
You want the sequence of keystrokes to include NumLock.
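In other words, the numpad Page Down key doubles as '3' while Num Lock is on, so toggle Num Lock off before sending the key. A rough sketch (assuming your pydirectinput version maps a 'numlock' key name):

import time
import pydirectinput

time.sleep(2)

pydirectinput.press('numlock')   # turn Num Lock off so the key acts as Page Down

pydirectinput.keyDown('pagedown')
time.sleep(0.01)
pydirectinput.keyUp('pagedown')

pydirectinput.press('numlock')   # restore Num Lock if you need it back on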
1
1
79,195,973
2024-11-16
https://stackoverflow.com/questions/79195973/how-to-access-unknown-fields-in-python-protobuf-version-5-38-3-with-upb-backend
I'm using Python protobuf package version 5.38.3 for deserializing some packets and I need to check if the messages I deserialize are conformant or not to a specific protobuf message structure. For some checks I want to obtain the list of unknown fields. This post points to an API UnknownFields() supported by messages, but when I call it in a deserialized message it raises NotImplementedError. How can I get access to the list of unknown fields from a deserialized message in protobuf 5.28.3?
How can I get access to the list of unknown fields Here, let me google that for you. https://protobuf.dev/news/2023-08-15 Python Breaking Change In v25 message.UnknownFields() will be deprecated in pure Python and C++ extensions. It will be removed in v26. Use the new UnknownFieldSet(message) support in unknown_fields.py as a replacement. You will want to update your code to use the new public API.
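A minimal sketch of the replacement API (msg stands in for whatever message instance you already deserialized):

from google.protobuf.unknown_fields import UnknownFieldSet

for field in UnknownFieldSet(msg):
    print(field.field_number, field.wire_type, field.data)

An empty UnknownFieldSet means the serialized bytes contained no fields outside your message definition.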
1
1
79,195,042
2024-11-16
https://stackoverflow.com/questions/79195042/handling-complex-parentheses-structures-to-get-the-expected-data
We have data from a REST API call stored in an output file that looks as follows: Sample Input File: test test123 - test (bla bla1 (On chutti)) test test123 bla12 teeee (Rinku Singh) balle balle (testagain) (Rohit Sharma) test test123 test1111 test45345 (Surya) (Virat kohli (Lagaan)) testagain blae kaun hai ye banda (Ranbir kapoor (Lagaan), Milkha Singh (On chutti) (Lagaan)) Expected Output: bla bla1 Rinku Singh Rohit Sharma Virat kohli Ranbir kapoor, Milkha Singh Conditions to Derive the Expected Output: Always consider the last occurrence of parentheses () in each line. We need to extract the values within this last, outermost pair of parentheses. Inside the last occurrence of (), extract all values that appear before each occurrence of nested parentheses (). Eg: test test123 - test (bla bla1 (On chutti)) last parenthesis starts from (bla to till chutti)) so I need bla bla1 since its before inner (On chutti). So look for the last parenthesis and then inside how many pair of parenthesis comes we need to get data before them, eg: in line testagain blae kaun hai ye banda (Ranbir kapoor (Lagaan), Milkha Singh (On chutti) (Lagaan)) needed is Ranbir kapoor and Milkha Singh. Attempted Regex: I tried using the following regular expression on Working Demo of regex: Regex: ^(?:^[^(]+\([^)]+\) \(([^(]+)\([^)]+\)\))|[^(]+\(([^(]+)\([^)]+\),\s([^\(]+)\([^)]+\)\s\([^\)]+\)\)|(?:(?:.*?)\((.*?)\(.*?\)\))|(?:[^(]+\(([^)]+)\))$ The Regex that I have tried is working fine but I want to improve it with the advice of experts here. Preferred Languages: Looking to improve this regex OR a Python, or an awk answer is also ok. I myself will also try to add an awk answer.
Purely based on your shown input and your comments reflecting that you need to capture 1 or 2 values per line, here is an optimized regex solution: ^(?:\([^)(]*\)|[^()])*\(([^)(]+)(?:\([^)(]*\)[, ]*(?:([^)(]+))?)? RegEx Demo RegEx Details: This regex solution does the following: match everythng before last (...) then match ( then 1st group: match name that must not have ( and ) then optional match of (...) or comma/space then 2nd group: match name that must not have ( and ) Further Details: ^: Start (?:: Start non-capture group \([^\n)(]*\): Match any pair of (...) text |: OR [^()\n]: Match any character that are not (, ) and \n )*: End non-capture group. Repeat this 0 or more times \(: Match last ( ([^)(\n]+): 1st capture group that matches text with 1+ characters that are not (, ) and \n (?:: Start non-capture group 1 \([^\n)(]*\): Match any pair of (...) text [, ]*: Match 0 or more of space or comma characters (?:: Start non-capture group 2 ([^)(\n]+): 2nd capture group that matches text with 1+ characters that are not (, ) and \n )?: End non-capture group 2. ? makes this an optional match )?: End non-capture group 1. ? makes this an optional match
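For completeness, a sketch of applying this pattern in Python and joining the captured names to reproduce the expected output (input.txt is a placeholder for your REST output file):

import re

pattern = re.compile(r"^(?:\([^)(]*\)|[^()])*\(([^)(]+)(?:\([^)(]*\)[, ]*(?:([^)(]+))?)?")

with open("input.txt") as fh:
    for line in fh:
        m = pattern.search(line)
        if m:
            # strip trailing spaces from each captured name and join with ", "
            print(", ".join(g.strip() for g in m.groups() if g))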
7
6
79,195,659
2024-11-16
https://stackoverflow.com/questions/79195659/issue-with-toggling-sign-of-the-last-entered-number-in-calculator-using-%e2%81%ba%e2%88%95%e2%82%8b-in-p
I am developing a calculator using Python. The problem I'm facing is that when I try to toggle the sign of the last number entered by the user using the βΊβˆ•β‚‹ button, all similar numbers in the text get toggled as well. I believe the reason for this is Python's memory optimization, which causes similar strings to be stored only once in memory and their addresses to be used multiple times in the list. code: import re, math from decimal import Decimal from fractions import Fraction from customtkinter import * ... def on_button_click(self, char:str): if char == "βœ”": # self.buttons_dict[char].configure(text="") ... elif char == 'C': self.entry.delete(0, END) elif char == 'CE': self.entry.delete(0, END) elif char == 'Del': current_text = self.entry.get() self.entry.delete(0, END) self.entry.insert(END, current_text[:-1]) elif char == 'βΊβˆ•β‚‹': current_text = self.entry.get() current_text_list = [(item+' ')[:-1] for item in re.split("[Γ·Γ—+–]",current_text)] for i in current_text_list: print(id(i)) if current_text: print(current_text_list) if current_text_list[-1][0] == '(': self.entry.delete(0, END) if len(current_text_list) > 1: last_txt = current_text_list[-1] current_text_list[-1] = current_text_list[-1][2:].replace(")", "") self.entry.insert(END, current_text.replace(last_txt,current_text_list[-1])) else: self.entry.insert(END, current_text.replace(current_text_list[-1], current_text_list[-1].replace("-", ""))) else: self.entry.delete(0, END) if len(current_text_list) > 1: self.entry.insert(END, current_text.replace(current_text_list[-1], f"(-{current_text_list[-1]})")) else: self.entry.insert(END, current_text.replace(current_text_list[-1], f"-{current_text_list[-1]}")) elif char == '=': self.buttons_dict[char].configure(text="βœ”") try: expression = self.entry.get().replace('x', '*') result = eval(expression.replace('Γ—', '*').replace("Γ·", '/')) if isinstance(result, float): result_decimal = Decimal(result).quantize(Decimal('0.01')) if math.isclose(result, float(Fraction(result_decimal))): display_result = result_decimal else: display_result = f"{result_decimal}..." else: display_result = result self.entry.delete(0, END) self.entry.insert(END, str(display_result)) except Exception as e: self.entry.delete(0, END) self.entry.insert(END, 'Error') else: current_text = self.entry.get() self.entry.delete(0, END) self.entry.insert(END, current_text + char) Solutions I tried but didn't work: current_text_list = [(item+'.')[:-1] for item in re.split("[Γ·Γ—+–]", current_text)] current_text_list = re.split("[Γ·Γ—+–]", current_text) current_text_list[-1] = (current_text_list[-1]+'.')[:-1] new_current_text_list = [str(i) for i in current_text_list] copied_list = copy.deepcopy(current_text_list) The result of all in the test: Input: 2 Γ— 2 After pressing the βΊβˆ•β‚‹ button: Result: (-2) Γ— (-2)
The replace() method replaces all of the occurrences of a substring, not just the last one. So when the following line executes: current_text.replace(current_text_list[-1], ...) it replaces all instances of current_text_list[-1] in current_text. That's why 2 x 2 is becoming (-2) x (-2): both 2s are replaced. A possible solution could be to split the expression into tokens (numbers and operators), then only modify the last number, and then reconstruct the expression. tokens = re.split('([÷×+–])', current_text) tokens = [t for t in tokens if t] # Removing the empty strings last_number = tokens[-1] if last_number.startswith('(-'): last_number = last_number[2:-1] else: last_number = f"(-{last_number})" tokens[-1] = last_number # Finally reconstruct the whole expression new_text = ''.join(tokens) self.entry.delete(0, END) self.entry.insert(END, new_text)
1
1
79,190,072
2024-11-14
https://stackoverflow.com/questions/79190072/collecting-joining-waiting-for-parallel-depth-first-ops-in-dagster
After a much-appreciated assist from @zyd in this answer to parallel, deep-first execution in Dagster, I am now looking for a way to run an @op on the collected results of the graph run, or at least one that waits until they have all finished, since they don't have hard dependencies per se. My working code is as follows: @op def get_csv_filenames(context) -> List[str]: @op(out=DynamicOut()) def generate_subtasks(context, csv_list:List[str]): for csv_filename in csv_list: yield DynamicOutput(csv_filename, mapping_key=csv_filename) @op # no dep since 1st task def load_csv_into_duckdb(context, csv_filename) @op(ins={"start":In(Nothing)} def transform_dates(context, csv_filename) @op(ins={"start":In(Nothing)} def from_code_2_categories(context, csv_filename) @op(ins={"start":In(Nothing)} def export_2_parquet(context, csv_filename) @op(ins={"start":In(Nothing)} def profile_dataset(context, csv_filename) @graph def process(context, csv_filename:str): task1 = load_csv_into_duckdb(context=context, csv_filename=csv_filename) task2 = transform_dates(start=task1, context=context, csv_filename=csv_filename) task3 = from_code_2_categories(start=task2, context=context, csv_filename=csv_filename) task4 = export_2_parquet(start=task3, context=context, csv_filename=csv_filename) profile_dataset(start=task4, context=context, csv_filename=csv_filename) @job def pipeline(): csv_filename_list = get_csv_filenames() generate_subtasks(csv_filename_list).map(process) I have tried the .map(process).collect() approach, but Dagster complains that Nonetype has no attribute collect. However, I've seen several examples online of this same approach and apparently it should work. I have also tried for the @graph to return a list of the individual task return values, but DagsterUI complains that a graph-decorated function should return a dict with mapping keys. I could build that, but I feel I should instead pick that up from Dagster's execution context, which I don't know how to access from within the graph funct. Does anyone have some pointers?
Here's an example that worked for me: from dagster import Definitions, op, DynamicOutput, graph, GraphOut, DynamicOut @op def a_op(path): return path @op def op2(path): return path @op def op3(path): return path @op(out=DynamicOut(str)) def mapper(): for i in range(10): yield DynamicOutput(str(i), mapping_key=str(i)) # I think what you were missing is returning the output from the graph here @graph(out={"out": GraphOut()}) def nested(path: str): return op2(a_op(path)) @op def consumer(context, paths: list[str]): context.log.info(paths) @graph def the_graph(): consumer(mapper().map(nested).collect()) the_job = the_graph.to_job() defs = Definitions( jobs=[the_job], ) A graph is really just an organizational concept for grouping ops. At runtime nested graphs are flattened into a single graph for the whole job. This implies that if you want to use the output of an op in the nested graph, you need to return the output of the op from the nested graph. Then everything else works the same as any other op->op dependency.
2
2
79,195,523
2024-11-16
https://stackoverflow.com/questions/79195523/error-select-one-object-and-all-float-int-in-pandas-groupby
I have this dataframe. import pandas as pd x = { "year": ["2012", "2012", "2013", "2014", "2012", "2014", "2013", "2013", "2012", "2013", "2012", "2014", "2014", "2013", "2012", "2014"], "class": ["A", "B", "C", "A", "C", "B", "B", "C", "A", "C", "B", "C", "A", "C", "B", "A"], "gender": ["M", "F", "F", "M", "F", "M", "M", "F", "F", "F", "M", "M", "F", "M", "F", "F"], "score1": ["6", "6", "8", "10", "6", "7", "6", "7", "8", "7", "10", "9", "9", "9", "8", "9"], "score2": ["5", "9", "10", "5", "10", "9", "5", "7", "8", "9", "8", "8", "5", "5", "8", "5"], "score3": ["5", "9", "9", "7", "8", "5", "9", "5", "7", "6", "5", "10", "8", "8", "6", "8"], "score4": ["10", "8", "8", "10", "9", "8", "10", "9", "7", "8", "10", "9", "7", "7", "10", "7"] } data = pd.DataFrame(x) I want to find the median on every column with dtypes = 'int64'. Then I do groupby class columns on my df. data.groupby('class').median() But it shows an error on it. --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) File c:\ProgramData\anaconda3\Lib\site-packages\pandas\core\groupby\groupby.py:1490, in GroupBy._cython_agg_general..array_func(values) 1489 try: -> 1490 result = self.grouper._cython_operation( 1491 "aggregate", 1492 values, 1493 how, 1494 axis=data.ndim - 1, 1495 min_count=min_count, 1496 **kwargs, 1497 ) 1498 except NotImplementedError: 1499 # generally if we have numeric_only=False 1500 # and non-applicable functions 1501 # try to python agg 1502 # TODO: shouldn't min_count matter? File c:\ProgramData\anaconda3\Lib\site-packages\pandas\core\groupby\ops.py:959, in BaseGrouper._cython_operation(self, kind, values, how, axis, min_count, **kwargs) 958 ngroups = self.ngroups --> 959 return cy_op.cython_operation( 960 values=values, 961 axis=axis, 962 min_count=min_count, 963 comp_ids=ids, 964 ngroups=ngroups, 965 **kwargs, 966 ) File c:\ProgramData\anaconda3\Lib\site-packages\pandas\core\groupby\ops.py:657, in WrappedCythonOp.cython_operation(self, values, axis, min_count, comp_ids, ngroups, **kwargs) 649 return self._ea_wrap_cython_operation( 650 values, 651 min_count=min_count, (...) 
654 **kwargs, 655 ) --> 657 return self._cython_op_ndim_compat( 658 values, 659 min_count=min_count, 660 ngroups=ngroups, 661 comp_ids=comp_ids, 662 mask=None, 663 **kwargs, 664 ) File c:\ProgramData\anaconda3\Lib\site-packages\pandas\core\groupby\ops.py:497, in WrappedCythonOp._cython_op_ndim_compat(self, values, min_count, ngroups, comp_ids, mask, result_mask, **kwargs) 495 return res.T --> 497 return self._call_cython_op( 498 values, 499 min_count=min_count, 500 ngroups=ngroups, 501 comp_ids=comp_ids, 502 mask=mask, 503 result_mask=result_mask, 504 **kwargs, 505 ) File c:\ProgramData\anaconda3\Lib\site-packages\pandas\core\groupby\ops.py:541, in WrappedCythonOp._call_cython_op(self, values, min_count, ngroups, comp_ids, mask, result_mask, **kwargs) 540 out_shape = self._get_output_shape(ngroups, values) --> 541 func = self._get_cython_function(self.kind, self.how, values.dtype, is_numeric) 542 values = self._get_cython_vals(values) File c:\ProgramData\anaconda3\Lib\site-packages\pandas\core\groupby\ops.py:167, in WrappedCythonOp._get_cython_function(cls, kind, how, dtype, is_numeric) 165 if how in ["median", "cumprod"]: 166 # no fused types -> no __signatures__ --> 167 raise NotImplementedError( 168 f"function is not implemented for this dtype: " 169 f"[how->{how},dtype->{dtype_str}]" 170 ) 171 if "object" not in f.__signatures__: 172 # raise NotImplementedError here rather than TypeError later NotImplementedError: function is not implemented for this dtype: [how->median,dtype->object] During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) File c:\ProgramData\anaconda3\Lib\site-packages\pandas\core\nanops.py:786, in nanmedian(values, axis, skipna, mask) 785 try: --> 786 values = values.astype("f8") 787 except ValueError as err: 788 # e.g. "could not convert string to float: 'a'" ValueError: could not convert string to float: 'M' The above exception was the direct cause of the following exception: TypeError Traceback (most recent call last) Cell In[135], line 1 ----> 1 data.groupby('class').median() File c:\ProgramData\anaconda3\Lib\site-packages\pandas\core\groupby\groupby.py:1883, in GroupBy.median(self, numeric_only) 1862 @final 1863 def median(self, numeric_only: bool = False): 1864 """ 1865 Compute median of groups, excluding missing values. 1866 (...) 1881 Median of values within each group. 1882 """ -> 1883 result = self._cython_agg_general( 1884 "median", 1885 alt=lambda x: Series(x).median(numeric_only=numeric_only), 1886 numeric_only=numeric_only, 1887 ) 1888 return result.__finalize__(self.obj, method="groupby") File c:\ProgramData\anaconda3\Lib\site-packages\pandas\core\groupby\groupby.py:1507, in GroupBy._cython_agg_general(self, how, alt, numeric_only, min_count, **kwargs) 1503 result = self._agg_py_fallback(values, ndim=data.ndim, alt=alt) 1505 return result -> 1507 new_mgr = data.grouped_reduce(array_func) 1508 res = self._wrap_agged_manager(new_mgr) 1509 out = self._wrap_aggregated_output(res) File c:\ProgramData\anaconda3\Lib\site-packages\pandas\core\internals\managers.py:1503, in BlockManager.grouped_reduce(self, func) 1499 if blk.is_object: 1500 # split on object-dtype blocks bc some columns may raise 1501 # while others do not. 
1502 for sb in blk._split(): -> 1503 applied = sb.apply(func) 1504 result_blocks = extend_blocks(applied, result_blocks) 1505 else: File c:\ProgramData\anaconda3\Lib\site-packages\pandas\core\internals\blocks.py:329, in Block.apply(self, func, **kwargs) 323 @final 324 def apply(self, func, **kwargs) -> list[Block]: 325 """ 326 apply the function to my values; return a block if we are not 327 one 328 """ --> 329 result = func(self.values, **kwargs) 331 return self._split_op_result(result) File c:\ProgramData\anaconda3\Lib\site-packages\pandas\core\groupby\groupby.py:1503, in GroupBy._cython_agg_general..array_func(values) 1490 result = self.grouper._cython_operation( 1491 "aggregate", 1492 values, (...) 1496 **kwargs, 1497 ) 1498 except NotImplementedError: 1499 # generally if we have numeric_only=False 1500 # and non-applicable functions 1501 # try to python agg 1502 # TODO: shouldn't min_count matter? -> 1503 result = self._agg_py_fallback(values, ndim=data.ndim, alt=alt) 1505 return result File c:\ProgramData\anaconda3\Lib\site-packages\pandas\core\groupby\groupby.py:1457, in GroupBy._agg_py_fallback(self, values, ndim, alt) 1452 ser = df.iloc[:, 0] 1454 # We do not get here with UDFs, so we know that our dtype 1455 # should always be preserved by the implemented aggregations 1456 # TODO: Is this exactly right; see WrappedCythonOp get_result_dtype? -> 1457 res_values = self.grouper.agg_series(ser, alt, preserve_dtype=True) 1459 if isinstance(values, Categorical): 1460 # Because we only get here with known dtype-preserving 1461 # reductions, we cast back to Categorical. 1462 # TODO: if we ever get "rank" working, exclude it here. 1463 res_values = type(values)._from_sequence(res_values, dtype=values.dtype) File c:\ProgramData\anaconda3\Lib\site-packages\pandas\core\groupby\ops.py:994, in BaseGrouper.agg_series(self, obj, func, preserve_dtype) 987 if len(obj) > 0 and not isinstance(obj._values, np.ndarray): 988 # we can preserve a little bit more aggressively with EA dtype 989 # because maybe_cast_pointwise_result will do a try/except 990 # with _from_sequence. NB we are assuming here that _from_sequence 991 # is sufficiently strict that it casts appropriately. 992 preserve_dtype = True --> 994 result = self._aggregate_series_pure_python(obj, func) 996 npvalues = lib.maybe_convert_objects(result, try_float=False) 997 if preserve_dtype: File c:\ProgramData\anaconda3\Lib\site-packages\pandas\core\groupby\ops.py:1015, in BaseGrouper._aggregate_series_pure_python(self, obj, func) 1012 splitter = self._get_splitter(obj, axis=0) 1014 for i, group in enumerate(splitter): -> 1015 res = func(group) 1016 res = libreduction.extract_result(res) 1018 if not initialized: 1019 # We only do this validation on the first iteration File c:\ProgramData\anaconda3\Lib\site-packages\pandas\core\groupby\groupby.py:1885, in GroupBy.median..(x) 1862 @final 1863 def median(self, numeric_only: bool = False): 1864 """ 1865 Compute median of groups, excluding missing values. 1866 (...) 1881 Median of values within each group. 1882 """ 1883 result = self._cython_agg_general( 1884 "median", -> 1885 alt=lambda x: Series(x).median(numeric_only=numeric_only), 1886 numeric_only=numeric_only, 1887 ) 1888 return result.__finalize__(self.obj, method="groupby") File c:\ProgramData\anaconda3\Lib\site-packages\pandas\core\generic.py:11623, in NDFrame._add_numeric_operations..median(self, axis, skipna, numeric_only, **kwargs) 11606 @doc( 11607 _num_doc, 11608 desc="Return the median of the values over the requested axis.", (...) 
11621 **kwargs, 11622 ): > 11623 return NDFrame.median(self, axis, skipna, numeric_only, **kwargs) File c:\ProgramData\anaconda3\Lib\site-packages\pandas\core\generic.py:11212, in NDFrame.median(self, axis, skipna, numeric_only, **kwargs) 11205 def median( 11206 self, 11207 axis: Axis | None = 0, (...) 11210 **kwargs, 11211 ) -> Series | float: > 11212 return self._stat_function( 11213 "median", nanops.nanmedian, axis, skipna, numeric_only, **kwargs 11214 ) File c:\ProgramData\anaconda3\Lib\site-packages\pandas\core\generic.py:11158, in NDFrame._stat_function(self, name, func, axis, skipna, numeric_only, **kwargs) 11154 nv.validate_stat_func((), kwargs, fname=name) 11156 validate_bool_kwarg(skipna, "skipna", none_allowed=False) > 11158 return self._reduce( 11159 func, name=name, axis=axis, skipna=skipna, numeric_only=numeric_only 11160 ) File c:\ProgramData\anaconda3\Lib\site-packages\pandas\core\series.py:4670, in Series._reduce(self, op, name, axis, skipna, numeric_only, filter_type, **kwds) 4665 raise TypeError( 4666 f"Series.{name} does not allow {kwd_name}={numeric_only} " 4667 "with non-numeric dtypes." 4668 ) 4669 with np.errstate(all="ignore"): -> 4670 return op(delegate, skipna=skipna, **kwds) File c:\ProgramData\anaconda3\Lib\site-packages\pandas\core\nanops.py:158, in bottleneck_switch.__call__..f(values, axis, skipna, **kwds) 156 result = alt(values, axis=axis, skipna=skipna, **kwds) 157 else: --> 158 result = alt(values, axis=axis, skipna=skipna, **kwds) 160 return result File c:\ProgramData\anaconda3\Lib\site-packages\pandas\core\nanops.py:789, in nanmedian(values, axis, skipna, mask) 786 values = values.astype("f8") 787 except ValueError as err: 788 # e.g. "could not convert string to float: 'a'" --> 789 raise TypeError(str(err)) from err 790 if mask is not None: 791 values[mask] = np.nan TypeError: could not convert string to float: 'M' From the error box above, it shows that groupby do aggregation gender columns. But when I watch someone on YouTube do this with the same dataframe and the same code, it's all fine and shows no error. So the question is: Why is this happening? Is it because I ran at the newest Python/Pandas version? (I run on Python 3.11.5 and Pandas 2.0.3. While I watched that YouTube video, it was posted 2 years ago). Am I missing something on the groupby?
The issue is that the columns score1, score2, score3, and score4 in your DataFrame are stored as strings, not as numeric types. Do this:

import pandas as pd

x = {
    "year": ["2012", "2012", "2013", "2014", "2012", "2014", "2013", "2013", "2012", "2013", "2012", "2014", "2014", "2013", "2012", "2014"],
    "class": ["A", "B", "C", "A", "C", "B", "B", "C", "A", "C", "B", "C", "A", "C", "B", "A"],
    "gender": ["M", "F", "F", "M", "F", "M", "M", "F", "F", "F", "M", "M", "F", "M", "F", "F"],
    "score1": ["6", "6", "8", "10", "6", "7", "6", "7", "8", "7", "10", "9", "9", "9", "8", "9"],
    "score2": ["5", "9", "10", "5", "10", "9", "5", "7", "8", "9", "8", "8", "5", "5", "8", "5"],
    "score3": ["5", "9", "9", "7", "8", "5", "9", "5", "7", "6", "5", "10", "8", "8", "6", "8"],
    "score4": ["10", "8", "8", "10", "9", "8", "10", "9", "7", "8", "10", "9", "7", "7", "10", "7"]
}

data = pd.DataFrame(x)

# Convert the score columns from strings to numbers before aggregating
data[["score1", "score2", "score3", "score4"]] = data[["score1", "score2", "score3", "score4"]].apply(pd.to_numeric)

numeric_cols = data.select_dtypes(include='number')
result = numeric_cols.join(data[['class']]).groupby('class').median()
print(result)

which gives

       score1  score2  score3  score4
class
A         9.0     5.0     7.0     7.0
B         7.0     8.0     6.0    10.0
C         7.5     8.5     8.0     8.5
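As a side note, once the score columns are numeric you can also let the groupby itself skip the remaining non-numeric columns (year, gender) with numeric_only=True; older pandas versions silently dropped such "nuisance" columns, which is likely why the tutorial you followed did not raise:

data[["score1", "score2", "score3", "score4"]] = data[["score1", "score2", "score3", "score4"]].apply(pd.to_numeric)
# numeric_only=True makes the aggregation skip the non-numeric year/gender columns
result = data.groupby('class').median(numeric_only=True)
print(result)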
1
1
79,194,023
2024-11-15
https://stackoverflow.com/questions/79194023/how-to-insert-string-data-as-enum-into-postgres-database-in-python
I'm trying to add String data into a PostgreSQL database in Python using Prisma. The column in the database is defined as a specific enum type in the Prisma schema. I tried to insert via a String mapping, but the insert failed because the String value was not accepted. What should I do?

import os
import logging
from enum import Enum
from prisma import Prisma

db = Prisma()
os.environ['DATABASE_URL'] = ''
db.connect()

class DifficultyLevel(Enum):
    EASY = 'EASY'
    MEDIUM = 'MEDIUM'
    HARD = 'HARD'
    LUCK = 'LUCK'

def map_question_difficulty(generated_difficulty):
    difficulty_mapping = {
        'Easy': DifficultyLevel.EASY,
        'Medium': DifficultyLevel.MEDIUM,
        'Hard': DifficultyLevel.HARD,
        'LUCK': DifficultyLevel.LUCK
    }
    return difficulty_mapping.get(generated_difficulty, DifficultyLevel.EASY).value  # Default to EASY if difficulty not found

async def insert_questions_to_db(questions):
    try:
        for question in questions:
            db.elementgenerated.create(
                data={
                    "difficulty": map_question_difficulty(question['difficulty'])
                }
            )
        logging.info(f"Successfully inserted {len(questions)} questions into the database.")
    except Exception as e:
        logging.error(f"Error inserting questions into the database: {e}")
        raise

Definition in Prisma schema:

enum DifficultyLevel {
  EASY
  MEDIUM
  HARD
}

Here the program gave this feedback: 'prisma.errors.DataError: Error converting field "difficulty" of expected non-nullable type "String", found incompatible value of "EASY".' The question['difficulty'] is a String object like 'Easy', 'Medium' or 'Hard'.
missing enumerated value Your prisma schema should also include LUCK. A discrepancy between python and prisma seems like a Bad Thing. read the diagnostic prisma.errors.DataError: Error converting field "difficulty" of expected non-nullable type "String", found incompatible value of "EASY".' The question['difficulty'] is a String object like 'Easy', 'Medium' or 'Hard'. This suggests revising your Enum class to be: class DifficultyLevel(Enum): EASY = 'Easy' MEDIUM = 'Medium' HARD = 'Hard' LUCK = 'Luck' auto() Side note: consider DRYing up the class by defining it this way: from enum import Enum, auto class DifficultyLevel(Enum): EASY = auto() MEDIUM = auto() HARD = auto() LUCK = auto() def __str__(self): return self.name.title() And then use f-string or str() rather than .value.
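For illustration, the mapping function could then become something like this (the .upper() lookup and the EASY fallback are assumptions that mirror your original mapping's default):

def map_question_difficulty(generated_difficulty):
    # Name-based lookup: 'Medium' -> DifficultyLevel.MEDIUM; fall back to EASY if unknown
    try:
        level = DifficultyLevel[generated_difficulty.upper()]
    except KeyError:
        level = DifficultyLevel.EASY
    return str(level)  # __str__ above yields 'Easy', 'Medium', ...

print(map_question_difficulty('Medium'))  # Medium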
2
0
79,192,371
2024-11-15
https://stackoverflow.com/questions/79192371/how-to-combine-slice-assignment-mask-assignment-and-broadcasting-in-pytorch
To be more specific, I'm wondering how to assign a tensor by slice and by mask at different dimension(s) simultaneously in PyTorch. Here's a small example about what I want to do: With the tensors and masks below: x = torch.zeros(2, 3, 4, 6) mask = torch.tensor([[ True, True, False], [True, False, True]]) y = torch.rand(2, 3, 1, 3) I want to achieve something like x[mask, :, :3] = y[mask] In dimension 0 and 1, only the 4x6/1x3 slices in x/y that whose corresponding element in mask is True are allowed to be assigned. In dimension 2, I hope the 1-row tensor in y can be broadcast to all the 8 rows of x, In dimension 3, only the first 3 elements in x are assigned with the 3-element tensor from y. However, with code above, following error was caught: RuntimeError: shape mismatch: value tensor of shape [4, 1, 3] cannot be broadcast to indexing result of shape [4, 3, 6] It seems that PyTorch did [mask] indexing instead, and ignored the :3 indexing. I've also tried x[mask][:, :, :3] = y[mask] No error occurred but the assignment still failed. I know I can assign by slice and by mask step by step, but I hope to avoid any intermediate tensors if possible. Tensors in neural networks may be extremely big, so may be an all-in-one assignment may take less time and less memory.
You can do the following: x = torch.zeros(2, 3, 4, 6) mask = torch.tensor([[ True, True, False], [True, False, True]]) y = torch.rand(2, 3, 1, 3) x[..., :3][mask] = y[mask] This produces the same result as i, j = mask.nonzero(as_tuple = True) x[i, j, :, :3] = y[i, j] For the 2D mask scenario. This method also works for additional dims: x = torch.zeros(2, 3, 3, 4, 6) y = torch.rand(2, 3, 3, 1, 3) mask = torch.rand(2,3,3)>0.5 x[..., :3][mask] = y[mask] For additional dims, the only constraint is that the first n=mask.ndim dims of x and y must match the shape of mask and the final dimension of y is 3 to match the :3.
1
1
79,192,549
2024-11-15
https://stackoverflow.com/questions/79192549/why-is-jaxs-jit-compilation-slower-on-the-second-run-in-my-example
I am new to using JAX, and I’m still getting familiar with how it works. From what I understand, when using Just-In-Time (JIT) compilation (jax.jit), the first execution of a function might be slower due to the compilation overhead, but subsequent executions should be faster. However, I am seeing the opposite behavior. In the following code snippet: from icecream import ic import jax from time import time import numpy as np @jax.jit def my_function(x, y): return x @ y vectorized_function = jax.vmap(my_function, in_axes=(0, None)) shape = (1_000_000, 1_000) x = np.ones(shape) y = np.ones(shape[1]) start = time() vectorized_function(x, y) t_1 = time() - start start = time() vectorized_function(x, y) t_2 = time() - start print(f'{t_1 = }\n{t_2 = }') I get the following results: t_1 = 13.106784582138062 t_2 = 15.664098024368286 As you can see, the second run (t_2) is actually slower than the first one (t_1), which seems counterintuitive to me. I expected the second run to be faster due to JAX’s JIT caching. Has anyone encountered a similar situation or have any insights into why this might be happening? PS: I know I could have done x @ y directly without invoking vmap, but this is an easy example just to test its behaviour. My actual code is more complex, and the difference in runtime is even bigger (around 8x slower). I hope this simple example works similar.
For general tips on running JAX microbenchmarks effectively, see FAQ: Benchmarking JAX code. I cannot reproduce the timings from your snippet, but in your more complicated case, I suspect you are getting fooled by JAX's Asynchronous dispatch, which means that the timing method you're using will not actually reflect the time taken by the underlying computation. To address this, you can wrap your results in jax.block_until_ready: start = time() vectorized_function(x, y).block_until_ready() t_1 = time() - start
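Applied to your snippet, a minimal sketch timing both calls with blocking (reusing the names from your code) looks like:

start = time()
vectorized_function(x, y).block_until_ready()  # first call: traces/compiles, then runs
t_1 = time() - start

start = time()
vectorized_function(x, y).block_until_ready()  # second call: reuses the cached compilation
t_2 = time() - start

print(f'{t_1 = }\n{t_2 = }')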
1
2
79,191,165
2024-11-15
https://stackoverflow.com/questions/79191165/django-is-not-importing-models-sublime-text-cant-find-django-windows-11
I've been working through Python Crash Course e2, and got stuck on Ch.18. Having opened models.py, and entered the code, the error message is: ModuleNotFoundError: No module named 'django' I have spent some time working on this, without a solution. Could it be that PCCe2 is out of date, or is there a workaround solution? ChatGPT has said to make sure to activate the virtual environment, make sure Django is installed; verify the installation; check my Python environment and ensure it is activated; check for multiple Python installations; check the installation path; verify the installation path; check that "models.py" is in the app directory, and the app is listed in the installed apps, etc. I have tried all of these, and the text editor cannot seem to find the module django. But it is among the files. Here is the settings, with INSTALLED APPS list: from pathlib import Path INSTALLED_APPS = [ # My apps 'learning_logs', # Default django apps 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', ] This is the line where I get the error: from django.db import models Unfortunately, I'm not so sure what is meant by "Related model", hence it's omission. I will say that in terminal, and error message I got was: ModuleNotFoundError: No module named 'django.utils'. I hope that makes sense.
You can create a Build System for your project where you can specify a python environment to use: { "cmd": ["/full/path/to/your/specific/python", "$file"], "selector": "source.python", "file_regex": "^\\s*File \"(...*?)\", line ([0-9]*)" } Generally though, you don't need to run any files when working with Django. Just run python manage.py runserver (in a terminal), edit the source files and the app will be reloaded automatically; refresh the browser to see the results. If you do need to run a standalone script that uses Django, make sure to add django.setup(): import django django.setup() # now you can use models and other Django features
3
1
79,192,393
2024-11-15
https://stackoverflow.com/questions/79192393/torch-randn-vector-differs
I am trying to generate a torch vector with a specific length. I want the vector to have the same beginning elements when increasing its length using the same seed. This works when the vector's length ranges from 1 to 15, for example:

For length 14

torch.manual_seed(1)
torch.randn(14)
tensor([ 0.6614,  0.2669,  0.0617,  0.6213, -0.4519, -0.1661, -1.5228,  0.3817,
        -1.0276, -0.5631, -0.8923, -0.0583, -0.1955, -0.9656])

For length 15

torch.manual_seed(1)
torch.randn(15)
tensor([ 0.6614,  0.2669,  0.0617,  0.6213, -0.4519, -0.1661, -1.5228,  0.3817,
        -1.0276, -0.5631, -0.8923, -0.0583, -0.1955, -0.9656])

But for length 16 I get a totally different vector

torch.manual_seed(1)
torch.randn(16)
tensor([-1.5256, -0.7502, -0.6540, -1.6095, -0.1002, -0.6092, -0.9798, -1.6091,
        -0.7121,  0.3037, -0.7773, -0.2515, -0.2223,  1.6871,  0.2284,  0.4676])

Can someone explain what's happening here, and is there a solution where the beginning of the vector does not change?
This is due to the behaviour of the PRNG: different code paths might be used depending on the requested size. There is no guarantee that different sequence lengths will produce exactly the same output samples from the PRNG. The outputs for lengths 1-15 match, while starting from 16 another (probably vectorized) code path is used.

Changes in the sequence length could dispatch to faster code paths

Source: Odd result using multinomial num_samples...

It seems that this may not be an issue in older torch versions.
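If you need the beginning of the vector to stay the same regardless of the requested length, one possible workaround is to always sample a fixed maximum length under the seed and slice off the prefix you need (MAX_LEN here is an assumed upper bound on the lengths you will ever request):

import torch

MAX_LEN = 1024  # assumed upper bound

def stable_randn(n, seed=1, max_len=MAX_LEN):
    # Always draw the same fixed-size sample, then slice:
    # shorter requests are guaranteed to be prefixes of longer ones.
    torch.manual_seed(seed)
    return torch.randn(max_len)[:n]

assert torch.equal(stable_randn(14), stable_randn(16)[:14])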
1
2
79,191,501
2024-11-15
https://stackoverflow.com/questions/79191501/polars-rolling-window-on-time-series-with-custom-filter-based-on-the-current-row
How do I use polars' native API to do a rolling window on a datetime column, but filter out rows in the window based on the value of a column of the "current" row? My polars dataframe of financial transactions has the following schema: For each transaction and a duration d, I want to: grab the source_acct and its timestamp look back timestamp - d hours and get only rows whose source_acct or dest_acct matches the current source_acct sum up all txn as amount_in when the current source_acct is equal to a row's dest_acct do the same for amount_out but where the current src acct is the row's source_account including itself. I tried this using map_rows but it's way too slow for a dataframe with 20M rows. I sort my df on the timestamp column, then run: def windowing(df: pl.DataFrame, window_in_hours: int): d = timedelta(hours=window_in_hours) def calculate_amt(row): acc_no, window_end = row[0], row[1] window_start = window_end - d acct_window_mask = ( (pl.col('timestamp') >= window_start) & (pl.col('timestamp') <= window_end) & (pl.col('dest_acct').eq(acc_no) | pl.col('source_acct').eq(acc_no)) ) window_txns = df.filter(acct_window_mask) amount_in = window_txns.filter(pl.col('dest_acct').eq(acc_no))['amount'].sum() amount_out = window_txns.filter(pl.col('source_acct').eq(acc_no))['amount'].sum() return (amount_in, amount_out) calculated_amounts = df.select(["source_acct", "timestamp", 'dest_acct', 'amount']).map_rows(calculate_amt) return df.with_columns( calculated_amounts['column_0'].alias('amount_in'), calculated_amounts['column_1'].alias('amount_out'), ) I've been trying to implement this using polars native API like .rolling() but I don't get how to do the filter step of comparing the current row's source account against the windowed transactions. Here's a sample: import polars as pl from datetime import timedelta data = { "timestamp": [ "2024-01-01 10:00:00", "2024-01-01 10:30:00", "2024-01-01 11:00:00", "2024-01-01 11:30:00", "2024-01-01 12:00:00" ], "source_acct": ["A", "B", "A", "C", "A"], "dest_acct": ["B", "A", "C", "A", "B"], "amount": [100, 150, 200, 300, 250] } df = pl.DataFrame(data).with_columns(pl.col("timestamp").str.to_datetime()) print(windowing(df, 1)) Expected output: β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ timestamp ┆ source_acct ┆ dest_acct ┆ amount ┆ amount_in ┆ amount_out β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ datetime[ΞΌs] ┆ str ┆ str ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════════════β•ͺ═══════════β•ͺ════════β•ͺ═══════════β•ͺ════════════║ β”‚ 2024-01-01 10:00:00 ┆ A ┆ B ┆ 100 ┆ 0 ┆ 100 β”‚ β”‚ 2024-01-01 10:30:00 ┆ B ┆ A ┆ 150 ┆ 100 ┆ 150 β”‚ β”‚ 2024-01-01 11:00:00 ┆ A ┆ C ┆ 200 ┆ 150 ┆ 300 β”‚ β”‚ 2024-01-01 11:30:00 ┆ C ┆ A ┆ 300 ┆ 200 ┆ 300 β”‚ β”‚ 2024-01-01 12:00:00 ┆ A ┆ B ┆ 250 ┆ 300 ┆ 450 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
df_in = ( df.join_where( df, pl.col.source_acct == pl.col.dest_acct_right, pl.col.timestamp.dt.offset_by("-1h") <= pl.col.timestamp_right, pl.col.timestamp >= pl.col.timestamp_right ) .group_by("timestamp", "source_acct") .agg(amount_in = pl.col.amount_right.sum()) ) ( df .join(df_in, on=["timestamp","source_acct"], how="left") .with_columns( pl.col.amount_in.fill_null(0), pl.col.amount .rolling_sum_by("timestamp", "1h", closed="both") .over("source_acct") .alias("amount_out") ) ) shape: (5, 6) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ timestamp ┆ source_acct ┆ dest_acct ┆ amount ┆ amount_in ┆ amount_out β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ datetime[ΞΌs] ┆ str ┆ str ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════════════β•ͺ═══════════β•ͺ════════β•ͺ═══════════β•ͺ════════════║ β”‚ 2024-01-01 10:00:00 ┆ A ┆ B ┆ 100 ┆ 0 ┆ 100 β”‚ β”‚ 2024-01-01 10:30:00 ┆ B ┆ A ┆ 150 ┆ 100 ┆ 150 β”‚ β”‚ 2024-01-01 11:00:00 ┆ A ┆ C ┆ 200 ┆ 150 ┆ 300 β”‚ β”‚ 2024-01-01 11:30:00 ┆ C ┆ A ┆ 300 ┆ 200 ┆ 300 β”‚ β”‚ 2024-01-01 12:00:00 ┆ A ┆ B ┆ 250 ┆ 300 ┆ 450 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Just for illustration, you can also do it with DuckDB sql: import duckdb duckdb.sql(""" select d.*, ( select coalesce(sum(tt.amount), 0) from df as tt where tt.dest_acct = d.source_acct and tt.timestamp between d.timestamp - interval 1 hour and d.timestamp ) as amount_in, sum(d.amount) over( partition by d.source_acct order by d.timestamp range between interval 1 hour preceding and current row ) as amount_out from df as d """).pl()
2
2
79,187,368
2024-11-14
https://stackoverflow.com/questions/79187368/how-to-use-init-py-to-create-a-clean-api
Problem Description: I am trying to create a local API for my team. I think I understand broadly the mechanics of __init__.py. Let's say we have the below package structure:

API/
├── __init__.py            # Top-level package init file
└── core/
    ├── __init__.py        # Core module init file
    ├── calculator.py
    └── exceptions.py

Now if I build my API with empty __init__.py files, and then import API in my script, I won't be able to do something like:

import API
API.core
API.core.calculator

Because the submodules have not been imported specifically. What I need to do is add the following into my top-level __init__.py:

from . import core

And the following into my core module __init__.py:

from . import calculator
from . import exceptions

Now, when I do this, all imports I am making in my calculator.py or exceptions.py, such as numpy or pandas, are actually available through my package as follows:

API.core.calculator.numpy

Question 1: What are the best practices to prevent imported libraries from showing through my API?

On the same theme, let's say I want to access my calculator.py functions directly through the core keyword (let's assume __all__ variables are safely set up). Then I can add the following into my core module __init__.py:

from .calculator import *
from .exceptions import *

which then allows me to do:

API.core.my_function()

But then again, I can also call API.core.calculator.my_function() at the same time, which might be confusing for users.

Question 2: How to prevent imported functions from being available through both my package name and my module name?

I tried mixing up the approaches, but with no results, please help!
The typical best practice is to define an __all__ list in each sub-module with names you want the sub-module to export so that the parent module can import just those names with a star import. Names that you don't want exposed should be named with a leading underscore by convention so that the linters will warn the users if they try to import a "private" name. So in your example case, your API/core/calculator.py will look like: import numpy as _numpy __all__ = ['my_function'] def my_function(): ... And then API/core/__init__.py will star-import what's exported by calculator (that is, just my_function): from .calculator import * which then allows you to do: API.core.my_function() but not: API.core._numpy API.core.calculator._numpy Note that the user can always do import API.core.calculator explicitly to access _numpy in that sub-module, but that is just how the module namespace works as there is nothing really private in Python. Also note that a star import will not import names with leading underscores so if you do follow the convention of naming your "private" variables as such you don't even need to define an __all__ list. Excerpt from the documentation of More on Modules: There is even a variant to import all names that a module defines: >>> from fibo import * fib(500) 0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 This imports all names except those beginning with an underscore (_).
2
1
79,191,312
2024-11-15
https://stackoverflow.com/questions/79191312/how-to-sort-python-pandas-dataframe-in-repetitive-order-after-groupby
I have a dataset which is sorted in this order: col1 col2 col3 a 1 r a 1 s a 2 t a 2 u a 3 v a 3 w b 4 x b 4 y b 5 z b 5 q b 6 w b 6 e I want it to be sorted in the following order: col1 col2 col3 a 1 r a 2 t a 3 v a 1 s a 2 u a 3 w b 4 x b 5 z b 6 w b 4 y b 5 q b 6 e I want the col2 to be in repetitive fashion, as in, for col1 'a' values, it should be 1,2,3,4 and then 1,2,3,4 again instead of 1,1,2,2,3,3,4,4. I have used the following code, but it is not working: import pandas as pd # Creating the DataFrame data = { 'col1': ['a', 'a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'b', 'b'], 'col2': [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6], 'col3': ['r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'q', 'w', 'e'] } df = pd.DataFrame(data) # Sort by col1, then reorder col2 within each group df_sorted = df.sort_values(by=['col1', 'col2']).reset_index(drop=True) df_sorted = df_sorted.groupby('col1', group_keys=False).apply(lambda x: x.sort_values('col2')) # Display the sorted dataframe print(df_sorted)
Use groupby.cumcount to form a secondary key for sorting: out = (df.assign(key=lambda x: x.groupby(['col1', 'col2']).cumcount()) .sort_values(by=['col1', 'key', 'col2']) .drop(columns='key') ) Note that you can avoid creating the intermediate column using numpy.lexsort: import numpy as np out = df.iloc[np.lexsort([df['col2'], df.groupby(['col1', 'col2']).cumcount(), df['col1']])] Output: col1 col2 col3 0 a 1 r 2 a 2 t 4 a 3 v 1 a 1 s 3 a 2 u 5 a 3 w 6 b 4 x 8 b 5 z 10 b 6 w 7 b 4 y 9 b 5 q 11 b 6 e Intermediate (before sorting): col1 col2 col3 key 0 a 1 r 0 1 a 1 s 1 2 a 2 t 0 3 a 2 u 1 4 a 3 v 0 5 a 3 w 1 6 b 4 x 0 7 b 4 y 1 8 b 5 z 0 9 b 5 q 1 10 b 6 w 0 11 b 6 e 1
2
3
79,191,017
2024-11-15
https://stackoverflow.com/questions/79191017/how-to-check-if-one-dictionary-containing-list-is-a-subset-of-another-dictionary
Hi, I have two dictionaries which contain lists:

dict_1 = {'V1': ['2024-11-07', '2024-11-08'], 'V2': ['2024-11-07', '2024-11-08']}
dict_2 = {'V1': ['2024-11-08'], 'V2': ['2024-11-07']}

Both items (key and value) of dict_2 above are subsets of the corresponding items in dict_1, so I want to return True in both cases. I tried to use

res = set(dict_2.items()).issubset(set(dict_1.items()))

However, that only works for a simple dictionary like

dict_1 = {'abc' : 1, 'pqr' : 2}
dict_2 = {'abc' : 1}

Is there any way this can be done in my case?
You can check for each key of dict_2 that its sublist is a subset of the corresponding sublist of dict_1: def dict_issubset(maybe_subset, maybe_superset): return { key: set(sublist).issubset(maybe_superset[key]) for key, sublist in maybe_subset.items() } so that: dict_1 = {'V1': ['2024-11-07', '2024-11-08'], 'V2': ['2024-11-07', '2024-11-08']} dict_2 = {'V1': ['2024-11-08'], 'V2': ['2024-11-07']} print(dict_issubset(dict_2, dict_1)) outputs: {'V1': True, 'V2': True} Demo: https://ideone.com/wuvTET
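If you want a single boolean instead of the per-key breakdown, you can wrap the result with all():

def dict_issubset_all(maybe_subset, maybe_superset):
    # True only if every key's sublist is a subset of the corresponding superset sublist
    return all(dict_issubset(maybe_subset, maybe_superset).values())

print(dict_issubset_all(dict_2, dict_1))  # True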
3
2
79,190,189
2024-11-14
https://stackoverflow.com/questions/79190189/extract-uncaptured-raw-text-from-regex
I am given a regex expression that consists of raw text and capture groups. How can I extract all raw text snippets from it? For example: pattern = r"Date: (\d{4})-(\d{2})-(\d{2})" assert extract(pattern) == ["Date: ", "-", "-", ""] Here, the last entry in the result is an empty string, indicating that there is no raw text after the last capture group. The solution should not extract raw text within capture groups: pattern = r"hello (world)" assert extract(pattern) == ["hello ", ""] The solution should work correctly with escaped characters too, for example: pattern = r"\(born in (.*)\)" assert extract(pattern) == ["(born in ", ")"] Ideally, the solution should be efficient, avoiding looping over the string in Python.
What you are asking for is to extract literal tokens from a parsed regex pattern at the top level. If you don't mind tapping into the internals of the re package, you can see from the list of tokens of a given pattern parsed by re._parser.parse: import re pattern = r"\(born in (.*)\)" print(*re._parser.parse(pattern).data, sep='\n') which outputs: (LITERAL, 40) (LITERAL, 98) (LITERAL, 111) (LITERAL, 114) (LITERAL, 110) (LITERAL, 32) (LITERAL, 105) (LITERAL, 110) (LITERAL, 32) (SUBPATTERN, (1, 0, 0, [(MAX_REPEAT, (0, MAXREPEAT, [(ANY, None)]))])) (LITERAL, 41) that all you need is to group together the LITERAL tokens and join their codepoints for output: def extract(pattern): literal_groups = [[]] for op, value in re._parser.parse(pattern).data: if op is re._constants.LITERAL: literal_groups[-1].append(chr(value)) else: literal_groups.append([]) return list(map(''.join, literal_groups)) so that: for pattern in ( r"Date: (\d{4})-(\d{2})-(\d{2})", r"hello (world)", r"\(born in (.*)\)" ): print(extract(pattern)) outputs: ['Date: ', '-', '-', ''] ['hello ', ''] ['(born in ', ')'] Demo here
2
3
79,189,825
2024-11-14
https://stackoverflow.com/questions/79189825/use-brush-for-transform-calculate-in-interactive-altair-char
I have an interactive plot in altair/vega where I can select points and I see a pie chart with the ratio of the colors of the selected points. import altair as alt import numpy as np import polars as pl selection = alt.selection_interval(encodings=["x"]) base = ( alt.Chart( pl.DataFrame( { "x": list(np.random.rand(100)), "y": list(np.random.rand(100)), "class": list(np.random.choice(["A", "B"], 100)), } ) ) .mark_point(filled=True) .encode( color=alt.condition( selection, alt.Color("class:N"), alt.value("lightgray") ), ) .add_params(selection) ) alt.hconcat( base.encode(x="x:Q", y="y:Q"), ( base.transform_filter(selection) .mark_arc() .encode(theta="count()", color="class:N") ), ) The outcome looks like this: Now I'd like to add two more charts that show the ratio of selected / unselected points for each color. I.e. one pie chart that is orange / gray and one pie chart that is blue / gray with ratios depending on the number of selected points. I tried to use the selection like this ( base.mark_arc().encode( theta="count()", color=alt.condition( selection, alt.Color("class:N"), alt.value("gray") ), row="class:N", ) ), But it's not what I want: What's the best way to add the pie charts I want?
There might be an easier way to do this, but the following works: alt.hconcat( base.encode(x="x:Q", y="y:Q"), ( base.mark_arc(theta=4) .transform_joinaggregate(class_total='count()', groupby=['class']) .transform_filter(selection) # Including class_total in the groupby just so that column is not dropped since we need it in the calculate .transform_aggregate(count_after_filter='count()', groupby=['class', 'class_total']) .transform_calculate(proportion_selected=alt.datum.count_after_filter / alt.datum.class_total) .encode( # Setting the domain to reflect that we computed a proportion theta=alt.Theta('proportion_selected:Q').scale(domain=(0, 1)), row='class', color='class' # tooltip='proportion_selected:Q' ) ) ) Together with your initial chart, this could make for an interesting gallery example if you want to create a PR. For a grey background, I think you would have to layer with a completely filled in grey chart, similar to this layered histogram here https://altair-viz.github.io/gallery/interval_selection_map_quakes.html (should just be alt.Chart().mark_arc(color='lightgrey')
1
2
79,190,321
2024-11-14
https://stackoverflow.com/questions/79190321/resampling-by-group-in-polars
I'm trying to build a Monte Carlo simulator for my data in Polars. I am attempting to group by a column, resample the groups and then, unpack the aggregation lists back in their original sequence. I've got it worked out up until the last step and I'm stuck and beginning to think I've gone about this in the wrong way. df_original = pl.DataFrame({ 'colA': ['A','A','B','B','C','C'], 'colB': [11,12,13,14,15,16], 'colC': [21,22,23,24,25,26]}) shape: (6, 3) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ colA ┆ colB ┆ colC β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•ͺ══════β•ͺ══════║ β”‚ A ┆ 11 ┆ 21 β”‚ β”‚ A ┆ 12 ┆ 22 β”‚ β”‚ B ┆ 13 ┆ 23 β”‚ β”‚ B ┆ 14 ┆ 24 β”‚ β”‚ C ┆ 15 ┆ 25 β”‚ β”‚ C ┆ 16 ┆ 26 β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ I am grouping and resampling like this. Please note that I am using a seed here so this example is reproducible but this would get run many many times with no seeds in he end. df_resampled = ( df_original .group_by('colA', maintain_order=True) .agg(pl.all()) .sample(fraction=1.0, shuffle=True, seed=9) ) shape: (3, 3) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ colA ┆ colB ┆ colC β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ list[i64] ┆ list[i64] β”‚ β•žβ•β•β•β•β•β•β•ͺ═══════════β•ͺ═══════════║ β”‚ B ┆ [13, 14] ┆ [23, 24] β”‚ β”‚ C ┆ [15, 16] ┆ [25, 26] β”‚ β”‚ A ┆ [11, 12] ┆ [21, 22] β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ What I can't figure out is how to explode the lists and end up with this. The original order within each group is preserved. Only the groups themselves are reshuffled on each run. shape: (6, 3) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ colA ┆ colB ┆ colC β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•ͺ══════β•ͺ══════║ β”‚ B ┆ 13 ┆ 23 β”‚ β”‚ B ┆ 14 ┆ 24 β”‚ β”‚ C ┆ 15 ┆ 25 β”‚ β”‚ C ┆ 16 ┆ 26 β”‚ β”‚ A ┆ 11 ┆ 21 β”‚ β”‚ A ┆ 12 ┆ 22 β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜
As @jqurious pointed out in the comments, this is easily solved with... df_resampled.explode(pl.exclude("colA"))
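Putting it together, the whole resample step becomes one chain (a sketch using the same frame and column names as the question):

df_resampled = (
    df_original
    .group_by("colA", maintain_order=True)
    .agg(pl.all())                       # pack each group's rows into lists
    .sample(fraction=1.0, shuffle=True)  # shuffle whole groups
    .explode(pl.exclude("colA"))         # unpack lists back into rows, original order kept
)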
2
2
79,190,108
2024-11-14
https://stackoverflow.com/questions/79190108/how-to-generate-an-array-which-is-a-multiple-of-original
I'm trying to upsize OpenCV images in Python in such a manner that the individual pixels are spread out by an integral factor; I use this to visually examine fine detail so that individual pixel values can be seen (using cv2.imshow in this instance). For example, an array:

[[1,2],
 [3,4]]

And a factor of 2 means I'd get:

[[1,1,2,2],
 [1,1,2,2],
 [3,3,4,4],
 [3,3,4,4]]

I've done this by generating an array of size*factor using np.zeros, then iterating each point in the original array and copying it to the target array using (for example)

for y in range(src.shape[0]):
    for x in range(src.shape[1]):
        tgt[y*f:y*f+f, x*f:x*f+f, :] = src[y, x, :]

But as you can imagine, it's not the fastest approach, and I'm hoping I'm just not finding the right thing. OpenCV (and PIL) do not have a resize capability that doesn't interpolate by one method or another, which seems weird all by itself. I looked over & tried numpy broadcast*, numpy stride_tricks, opencv functions, PIL functions. The semi-manual method works as long as I don't need interactivity, but I'm trying to adjust parameters to several opencv functions quickly so I can find the right combinations to solve my problem. (Which is proprietary, so I can't share imagery...) Waiting a significant time between results is counterproductive.
You can use opencv's cv.resize with nearest-neighbor as interpolation method (cv.INTER_NEAREST) to achieve what you need: import cv2 as cv import numpy as np src = np.array([[1,2], [3,4]]) dst = cv.resize(src, (4,4), interpolation=cv.INTER_NEAREST) print(dst) Output: [[1 1 2 2] [1 1 2 2] [3 3 4 4] [3 3 4 4]] Live demo
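If you prefer to stay in NumPy (you mention trying broadcasting), the same pixel spreading can also be done with repeat, which works for grayscale and (H, W, 3) colour images alike:

import numpy as np

f = 2  # integral zoom factor
src = np.array([[1, 2], [3, 4]])
# Repeat each pixel f times along the row axis, then the column axis
dst = src.repeat(f, axis=0).repeat(f, axis=1)
print(dst)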
1
3
79,188,746
2024-11-14
https://stackoverflow.com/questions/79188746/presenting-complex-table-data-in-chart-for-a-single-slide
Tables allow to summarise complex information. I have a table similar following one (this is produce for this question) in my latex document, like so: \documentclass{article} \usepackage{graphicx} % Required for inserting images \usepackage{tabularx} \usepackage{booktabs} \usepackage{makecell} \begin{document} \begin{table}[bt] \caption{Classification results.} \label{tab:baseline-clsf-reprt} \setlength{\tabcolsep}{1pt} % Adjust column spacing \renewcommand{\arraystretch}{1.2} % Adjust row height \begin{tabular}{lcccccccccccc} \toprule & \multicolumn{3}{c}{Data1} & \multicolumn{3}{c}{\makecell{Data2 \\ (original)}} & \multicolumn{3}{c}{\makecell{Data2 \\ (experiment 3)}} & \multicolumn{3}{c}{\makecell{Data2 \\ (experiment 4)}} \\ \cmidrule(r{1ex}){2-4} \cmidrule(r{1ex}){5-7} \cmidrule(r{1ex}){8-10} \cmidrule(r{1ex}){11-13} & Precision & Recall & F1 & Precision & Recall & F1 & Precision & Recall & F1 & Precision & Recall & F1 \\ \midrule Apple & 0.61 & 0.91 & 0.71 & 0.61 & 0.72 & 0.91 & 0.83 & 0.62 & 0.71 & 0.62 & 0.54 & 0.87 \\ Banana & 0.90 & 0.32 & 0.36 & 0.86 & 0.81 & 0.53 & 0.61 & 0.69 & 0.68 & 0.72 & 0.56 & 0.57 \\ Orange & 0.23 & 0.35 & 0.18 & 0.56 & 0.56 & 0.56 & 0.54 & 0.55 & 0.55 & 0.55 & 0.57 & 0.63 \\ Grapes & 0.81 & 0.70 & 0.76 & 0.67 & 0.47 & 0.54 & 0.85 & 0.28 & 0.42 & 0.38 & 0.66 & 0.48 \\ Mango & 0.31 & 0.23 & 0.45 & 0.87 & 0.54 & 0.73 & 0.63 & 0.57 & 0.63 & 0.75 & 0.29 & 0.34 \\ \bottomrule \end{tabular} \end{table} \end{document} Which gives: Now, I preparing a slide deck, and I needed to present the classification results in just one slide. To show results of each dataset for each fruit and metric. My attempts didn't result in a chart that's meaning (showing all info in the table). First attempt: import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns datasets = ['Data1', 'Data2-Orig', 'Data2-Exp3', 'Data2-Exp4'] fruits = ['Apple', 'Banana', 'Orange', 'Grapes', 'Mango'] metrics = ['Precision', 'Recall', 'F1'] colors = ['#1f77b4', '#ff7f0e', '#2ca02c'] # Colors for Precision, Recall, F1 data = { 'Fruit': ['Apple', 'Banana', 'Orange', 'Grapes', 'Mango'], 'Data1_Precision': [0.61, 0.90, 0.23, 0.81, 0.31], 'Data1_Recall': [0.91, 0.32, 0.35, 0.70, 0.23], 'Data1_F1': [0.71, 0.36, 0.18, 0.76, 0.45], 'Data2-Orig_Precision': [0.61, 0.86, 0.56, 0.67, 0.87], 'Data2-Orig_Recall': [0.72, 0.81, 0.56, 0.47, 0.54], 'Data2-Orig_F1': [0.91, 0.53, 0.56, 0.54, 0.73], 'Data2-Exp3_Precision': [0.83, 0.61, 0.54, 0.85, 0.63], 'Data2-Exp3_Recall': [0.62, 0.69, 0.55, 0.28, 0.57], 'Data2-Exp3_F1': [0.71, 0.68, 0.55, 0.42, 0.63], 'Data2-Exp4_Precision': [0.62, 0.72, 0.55, 0.38, 0.75], 'Data2-Exp4_Recall': [0.54, 0.56, 0.57, 0.66, 0.29], 'Data2-Exp4_F1': [0.87, 0.57, 0.63, 0.48, 0.34] } df = pd.DataFrame(data) # Reshape data for Seaborn df_melted = df.melt(id_vars='Fruit', var_name='Metric', value_name='Score') # Split the 'Metric' column into separate columns for easier grouping df_melted[['Dataset', 'Measure']] = df_melted['Metric'].str.split('_', expand=True) df_melted.drop(columns='Metric', inplace=True) plt.figure(figsize=(12, 8)) sns.set_style("whitegrid") # Create grouped bar plot sns.barplot( data=df_melted, x='Fruit', y='Score', hue='Dataset', ci=None ) # Customize plot plt.title('Classification Results by Fruit and Dataset') plt.xlabel('Fruit type') plt.ylabel('Score') plt.legend(title='Dataset', bbox_to_anchor=(1.05, 1), loc='upper left') # Show plot plt.tight_layout() Gives: Second attempt: fig, ax = plt.subplots(figsize=(14, 8)) # Set the width of 
each bar and spacing between groups
bar_width = 0.2
group_spacing = 0.25
x = np.arange(len(fruits))

# Plot bars for each dataset and metric combination
for i, dataset in enumerate(datasets):
    for j, metric in enumerate(metrics):
        # Calculate the position for each bar within each group
        positions = x + i * (len(metrics) * bar_width + group_spacing) + j * bar_width
        # Plot each metric bar
        ax.bar(positions, df[f'{dataset}_{metric}'], width=bar_width,
               label=f'{metric}' if i == 0 else "", color=colors[j])

# Customize x-axis and labels
ax.set_xticks(x + (len(datasets) * len(metrics) * bar_width + (len(datasets) - 1) * group_spacing) / 2 - bar_width / 2)
ax.set_xticklabels(fruits)
ax.set_xlabel('Fruit type')
ax.set_ylabel('Score')
ax.set_title('Classification Results by Dataset, Fruit, and Metric')

# Create custom legend for metrics
metric_legend = [plt.Line2D([0], [0], color=colors[i], lw=4) for i in range(len(metrics))]
ax.legend(metric_legend, metrics, title="Metrics", loc="upper left", bbox_to_anchor=(1.05, 1))

plt.tight_layout()
plt.show()

This gives:

None of these plots presents the results in a way people can easily follow in a presentation, and simply adding the original table doesn't make sense either: people cannot easily follow the results in a table as I talk. How would you recommend plotting the results in this table for a slide?
I would definitely go for some kind of heatmap. Any barplot-like graphic would be cluttered. import pandas as pd import matplotlib.pyplot as plt import seaborn as sns data = { 'Fruit': ['Apple', 'Banana', 'Orange', 'Grapes', 'Mango'], 'Data1-Precision': [0.61, 0.90, 0.23, 0.81, 0.31], 'Data1-Recall': [0.91, 0.32, 0.35, 0.70, 0.23], 'Data1-F1': [0.71, 0.36, 0.18, 0.76, 0.45], 'Data2-Orig-Precision': [0.61, 0.86, 0.56, 0.67, 0.87], 'Data2-Orig-Recall': [0.72, 0.81, 0.56, 0.47, 0.54], 'Data2-Orig-F1': [0.91, 0.53, 0.56, 0.54, 0.73], 'Data2-Exp3-Precision': [0.83, 0.61, 0.54, 0.85, 0.63], 'Data2-Exp3-Recall': [0.62, 0.69, 0.55, 0.28, 0.57], 'Data2-Exp3-F1': [0.71, 0.68, 0.55, 0.42, 0.63], 'Data2-Exp4-Precision': [0.62, 0.72, 0.55, 0.38, 0.75], 'Data2-Exp4-Recall': [0.54, 0.56, 0.57, 0.66, 0.29], 'Data2-Exp4-F1': [0.87, 0.57, 0.63, 0.48, 0.34] } df = pd.DataFrame(data) df_melted = df.melt(id_vars='Fruit', var_name='Dataset-Metric', value_name='Score') df_melted[['Dataset', 'Metric']] = df_melted['Dataset-Metric'].str.extract(r'(.+)-(.+)') heatmap_data = df_melted.pivot(index='Fruit', columns=['Dataset', 'Metric'], values='Score') plt.figure(figsize=(14, 8)) sns.heatmap( heatmap_data, annot=True, fmt=".2f", cmap="YlGnBu", linewidths=0.5, cbar_kws={'label': 'Score'} ) plt.title('Classification Results Heatmap') plt.xlabel('Dataset and Metric') plt.ylabel('Fruit') plt.tight_layout() plt.show() which gives But if you absolutely want to stick to barplots, choose to do it in 3d: import pandas as pd import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D data = { 'Fruit': ['Apple', 'Banana', 'Orange', 'Grapes', 'Mango'], 'Data1-Precision': [0.61, 0.90, 0.23, 0.81, 0.31], 'Data1-Recall': [0.91, 0.32, 0.35, 0.70, 0.23], 'Data1-F1': [0.71, 0.36, 0.18, 0.76, 0.45], 'Data2-Orig-Precision': [0.61, 0.86, 0.56, 0.67, 0.87], 'Data2-Orig-Recall': [0.72, 0.81, 0.56, 0.47, 0.54], 'Data2-Orig-F1': [0.91, 0.53, 0.56, 0.54, 0.73], 'Data2-Exp3-Precision': [0.83, 0.61, 0.54, 0.85, 0.63], 'Data2-Exp3-Recall': [0.62, 0.69, 0.55, 0.28, 0.57], 'Data2-Exp3-F1': [0.71, 0.68, 0.55, 0.42, 0.63], 'Data2-Exp4-Precision': [0.62, 0.72, 0.55, 0.38, 0.75], 'Data2-Exp4-Recall': [0.54, 0.56, 0.57, 0.66, 0.29], 'Data2-Exp4-F1': [0.87, 0.57, 0.63, 0.48, 0.34] } df = pd.DataFrame(data) df_melted = df.melt(id_vars='Fruit', var_name='Dataset-Metric', value_name='Score') df_melted[['Dataset', 'Metric']] = df_melted['Dataset-Metric'].str.extract(r'(.+)-(.+)') fruits = df_melted['Fruit'].unique() datasets = df_melted['Dataset'].unique() metrics = df_melted['Metric'].unique() x = np.array([np.where(fruits == fruit)[0][0] for fruit in df_melted['Fruit']]) y = np.array([np.where(datasets == dataset)[0][0] for dataset in df_melted['Dataset']]) z = np.array([np.where(metrics == metric)[0][0] for metric in df_melted['Metric']]) scores = df_melted['Score'].values fig = plt.figure(figsize=(12, 8)) ax = fig.add_subplot(111, projection='3d') dx = dy = 0.4 dz = scores colors = plt.cm.viridis(scores / max(scores)) ax.bar3d(x, y, np.zeros_like(z), dx, dy, dz, color=colors, alpha=0.8) ax.set_xlabel('Fruit') ax.set_ylabel('Dataset') ax.set_zlabel('Score') ax.set_xticks(range(len(fruits))) ax.set_xticklabels(fruits, rotation=45) ax.set_yticks(range(len(datasets))) ax.set_yticklabels(datasets) ax.set_zticks(np.linspace(0, 1, 6)) plt.title('3D Bar Plot of Classification Results') plt.tight_layout() plt.show() which gives BUT, I still think a heatmap is more readable.
1
2
79,188,329
2024-11-14
https://stackoverflow.com/questions/79188329/pandas-find-duplicate-pairs-between-2-columns-of-data
I have a dataset that contains 3 columns. These are edge connections between nodes and the strength of the connection. What I am trying to do is find and merge the extra edges that can occur when the direction goes in the opposite direction. as a short example data_frame = pd.DataFrame({"A":["aa", "aa", "aa", "bb", "bb", "cc", "dd", "dd"], "B":["bb", "cc", "dd", "aa", "dd", "aa", "ee", "aa"], "C":[4,3,4,5,3,4,2, 5]}) the resulting node graph aa - bb | \ | cc dd -- ee From the nodes, we have overlap as "aa - bb" is the same as "bb - aa" and same with "aa - dd" and "dd - aa" I thought about merging A and B together both forward and reverse, concatenating the two dataframes and than performing a group_by().sum() but I end up with extras that need to be removed afterwards. ideally this is how it would work A | B | C A | B | C aa bb 4 aa bb 9 aa cc 3 aa cc 7 aa dd 4 aa dd 9 bb aa 5 bb dd 3 bb dd 3 --> dd ee 2 cc aa 4 dd ee 2 dd aa 5
You can aggregate as frozenset, then perform a groupby.sum: out = (data_frame['C'] .groupby(data_frame[['A', 'B']].agg(frozenset, axis=1)) .sum() .reset_index() ) Output: index C 0 (bb, aa) 9 1 (cc, aa) 7 2 (dd, aa) 9 3 (bb, dd) 3 4 (ee, dd) 2 Variant to get the original columns: cols = ['A', 'B'] out = (data_frame .groupby(data_frame[['A', 'B']].agg(frozenset, axis=1), as_index=False) .agg(dict.fromkeys(cols, 'first')|{'C': 'sum'}) ) Output: A B C 0 aa bb 9 1 aa cc 7 2 aa dd 9 3 bb dd 3 4 dd ee 2 Since converting to frozenset is quite slow, you can also sort the values in a consistent order using numpy, and groupby.sum: import numpy as np tmp = data_frame.copy() # avoid modifying the original frame tmp[['A', 'B']] = np.sort(data_frame[['A', 'B']], axis=1) out = tmp.groupby(['A', 'B'], as_index=False).sum() Variant of the similar method suggested by @PandaKim, with improved efficiency: cols = ['A', 'B'] out = (data_frame.groupby([*np.sort(data_frame[cols]).T])['C'] .sum().rename_axis(cols).reset_index() ) Output: A B C 0 aa bb 9 1 aa cc 7 2 aa dd 9 3 bb dd 3 4 dd ee 2 timings:
1
3
79,188,007
2024-11-14
https://stackoverflow.com/questions/79188007/polars-equivalent-of-numpy-tile
df = pl.DataFrame({"col1": [1, 2, 3], "col2": [4, 5, 6]})
print(df)

shape: (3, 2)
┌──────┬──────┐
│ col1 ┆ col2 │
│ ---  ┆ ---  │
│ i64  ┆ i64  │
╞══════╪══════╡
│ 1    ┆ 4    │
│ 2    ┆ 5    │
│ 3    ┆ 6    │
└──────┴──────┘

I am looking for the polars equivalent of numpy.tile. Something along the lines of df.tile(2) or df.select(pl.all().tile(2)). The expected result should look like this:

shape: (6, 2)
┌──────┬──────┐
│ col1 ┆ col2 │
│ ---  ┆ ---  │
│ i64  ┆ i64  │
╞══════╪══════╡
│ 1    ┆ 4    │
│ 2    ┆ 5    │
│ 3    ┆ 6    │
│ 1    ┆ 4    │
│ 2    ┆ 5    │
│ 3    ┆ 6    │
└──────┴──────┘
You could concat the same DataFrame several times:

out = pl.concat([df]*2)

# for contiguous memory
out = pl.concat([df]*2, rechunk=True)

Or, for just 2 repeats, vstack:

out = df.vstack(df)

Alternatively, using Expr.repeat_by + explode:

N = 2

(df.with_columns(pl.all().repeat_by(pl.lit(N)), pl.int_ranges(pl.lit(N)))
   .explode(pl.all())
   .sort(by='literal', maintain_order=True).drop('literal')
)

Output:

┌──────┬──────┐
│ col1 ┆ col2 │
│ ---  ┆ ---  │
│ i64  ┆ i64  │
╞══════╪══════╡
│ 1    ┆ 4    │
│ 2    ┆ 5    │
│ 3    ┆ 6    │
│ 1    ┆ 4    │
│ 2    ┆ 5    │
│ 3    ┆ 6    │
└──────┴──────┘
3
1
79,187,488
2024-11-14
https://stackoverflow.com/questions/79187488/groupby-count-after-conditional-python
I'm trying to perform a groupby sum on a specific column in a pandas df. But I only want to execute of count after a certain threshold. For this example, it will be where B > 2. The groupby is on A and the count is on C. The correct output should be: x = 3 y = 9 df = pd.DataFrame(dict(A=list('ababaa'), B=[1, 1, 3, 4, 5, 6], C=[9, 9, 0, 9, 1, 2])) df.loc[(df['B'] > 2), 'Count'] = df.groupby('A')['C'].transform('sum') df['Count'] = df['Count'].replace(np.NaN, 0).astype(int) Out: A B C Count 0 x 1 9 0 1 y 1 9 0 2 x 3 0 12 *3 3 y 4 9 18 *9 4 x 5 1 12 *3 5 x 6 2 12 *3
Use mask in both sides: m = df['B'] > 2 df['Count'] = 0 df.loc[m, 'Count'] = df[m].groupby('A')['C'].transform('sum') print (df) A B C Count 0 a 1 9 0 1 b 1 9 0 2 a 3 0 3 3 b 4 9 9 4 a 5 1 3 5 a 6 2 3 Another idea is use Series.where: m = df['B'] > 2 df['Count'] = m.groupby(df['A']).transform('sum').where(m, 0) Or numpy.where: m = df['B'] > 2 df['Count'] = np.where(m, m.groupby(df['A']).transform('sum'), 0) Or multiple by mask: m = df['B'] > 2 df['Count'] = m.groupby(df['A']).transform('sum').mul(m)
1
2
79,187,234
2024-11-14
https://stackoverflow.com/questions/79187234/how-can-i-fix-the-python-repl-in-vs-code-with-python-3-13
I'm having trouble sending code in Python file to the interactive REPL in VS Code (using Shift + Enter). Single line code and functions with one line work fine, but any code chunks with multiple lines raise IndentationErrors. Also KeyboardInterrupt shows up in the REPL every time code is sent. Tried updating and restarting VS Code and the Python Extension. VS Code Version: 1.95.2 (Universal), configured to use 4 spaces instead of tabs Python Extension from Microsoft: v2024.20.0 Python: 3.13.0 (venv) MacOS Sonoma 14.2.1
It's a known issue: https://github.com/microsoft/vscode-python/issues/24256

Yes, that KeyboardInterrupt is handled via #24422. So give it a try tomorrow (it just got merged today, Nov 12), and it won't be there!

So apparently it will be fixed in the next vscode-python build (v2024.20.0 is currently the latest).
2
2