Dataset columns (name, dtype, min/max):
question_id      int64          59.5M / 79.4M
creation_date    stringlengths  8 / 10
link             stringlengths  60 / 163
question         stringlengths  53 / 28.9k
accepted_answer  stringlengths  26 / 29.3k
question_vote    int64          1 / 410
answer_vote      int64          -9 / 482
78,975,594
2024-9-11
https://stackoverflow.com/questions/78975594/should-i-use-djangos-floatfield-or-decimalfield-for-audio-length
I use: duration = float(ffmpeg.probe(audio_path)["format"]["duration"]) I collect an audio/video's length and want to store it using my models. Should I use models.DecimalField() or models.FloatField()? I use it to calculate and store a credit/cost in my model using credit = models.DecimalField(max_digits=20, decimal_places=4)
I think the most sensical way is to use a DurationField model field [Django-doc]. Django will look what database the backend uses, and try to work with the most sensical column type the database offers: class MyModel(models.Model): duration = models.DurationField() and work with: from datetime import timedelta MyModel.objects.create( duration=timedelta( seconds=float(ffmpeg.probe(audio_path)['format']['duration']) ) )
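If the length is later needed as a plain number for the credit/cost calculation, note that a DurationField comes back as a datetime.timedelta in Python. A small sketch under that assumption (some_pk is a hypothetical placeholder, not from the answer above):
# DurationField values are returned as datetime.timedelta objects.
obj = MyModel.objects.get(pk=some_pk)          # some_pk is a hypothetical primary key
length_seconds = obj.duration.total_seconds()  # float seconds, usable for cost math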
2
0
78,988,010
2024-9-15
https://stackoverflow.com/questions/78988010/explode-multiple-columns-with-different-lengths
I have a dataframe like: data = { "a": [[1], [2], [3, 4], [5, 6, 7]], "b": [[], [8], [9, 10], [11, 12]], } df = pl.DataFrame(data) """ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b β”‚ β”‚ --- ┆ --- β”‚ β”‚ list[i64] ┆ list[i64] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════║ β”‚ [1] ┆ [] β”‚ β”‚ [2] ┆ [8] β”‚ β”‚ [3, 4] ┆ [9, 10] β”‚ β”‚ [5, 6, 7] ┆ [11, 12] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ """ Each pair of lists may not have the same length, and I want to "truncate" the explode to the shortest of both lists: """ β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════║ β”‚ 2 ┆ 8 β”‚ β”‚ 3 ┆ 9 β”‚ β”‚ 4 ┆ 10 β”‚ β”‚ 5 ┆ 11 β”‚ β”‚ 6 ┆ 12 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ """ I was thinking that maybe I'd have to fill the shortest of both lists with None to match both lengths, and then drop_nulls. But I was wondering if there was a more direct approach to this?
Here's one approach: min_length = pl.min_horizontal(pl.col('a', 'b').list.len()) out = (df.filter(min_length != 0) .with_columns( pl.col('a', 'b').list.head(min_length) ) .explode('a', 'b') ) Output: shape: (5, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════║ β”‚ 2 ┆ 8 β”‚ β”‚ 3 ┆ 9 β”‚ β”‚ 4 ┆ 10 β”‚ β”‚ 5 ┆ 11 β”‚ β”‚ 6 ┆ 12 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ Explanation Get the length for the lists in both columns with Expr.list.len and get the shortest for each row with pl.min_horizontal. Now, filter out the rows where min_length == 0 (df.filter) and inside df.with_columns select the first n values of each list with Expr.list.head. Finally, apply df.explode.
7
6
78,979,491
2024-9-12
https://stackoverflow.com/questions/78979491/how-can-i-avoid-using-pl-dataframe-iter-rows-and-instead-vectorize-this
I have two polars dataframes containing a unique ID and the name of a utility. I am trying to build a mapping of entries between these two dataframes. I am using polars_fuzzy_match to do a fuzzy string search against entries. My first dataframe (wg_df) is approximately a subset of the second (eia_df). In my code below I am passing each utility_name from wg_df into fuzzy_match_score run against the eia_utility_name. Can I avoid the rowise iteration and vectorize this? import polars as pl from polars_fuzzy_match import fuzzy_match_score # Sample data # wg_df is approximately a subset of eia_df. wg_df = pl.DataFrame({"wg_id": [1, 2], "utility_name": ["Utility A", "Utility B"]}) eia_df = pl.DataFrame( {"eia_id": [101, 102, 103], "utility_name": ["Utility A co.", "Utility B", "utility c"]} ) out = pl.DataFrame( schema=[ ("wg_id", pl.Int64), ("eia_id", pl.Int64), ("wg_utility_name", pl.String), ("utility_name", pl.String), ("score", pl.UInt32), ], ) # Iterate through each wg utility and find the best match in eia # can this be vectorized? for wg_id, utility in wg_df.iter_rows(): res = ( eia_df.with_columns(score=fuzzy_match_score(pl.col("utility_name"), utility)) .filter(pl.col("score").is_not_null()) .sort(by="score", descending=True) ) # insert the wg_id and wg_utility_name into the results. They have to be put into the res.insert_column( 0, pl.Series("wg_id", [wg_id] * len(res)), ) res.insert_column(2, pl.Series("wg_utility_name", [utility] * len(res))) out = out.vstack(res.select([col_name for col_name in out.schema]))
polars-fuzzy-match would need to add support for vectorization. (i.e. at the Rust level) The polars_ds plugin has vectorized functions backed by the impressive RapidFuzz library. import polars_ds as pds (eia_df .lazy() .join(wg_df.lazy(), how="cross") .with_columns( score = pds.str_fuzz(pl.col.utility_name, pl.col.utility_name_right), ) .collect() ) shape: (6, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ eia_id ┆ utility_name ┆ wg_id ┆ utility_name_right ┆ score β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ i64 ┆ str ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═══════════════β•ͺ═══════β•ͺ════════════════════β•ͺ══════════║ β”‚ 101 ┆ Utility A co. ┆ 1 ┆ Utility A ┆ 0.818182 β”‚ β”‚ 101 ┆ Utility A co. ┆ 2 ┆ Utility B ┆ 0.727273 β”‚ β”‚ 102 ┆ Utility B ┆ 1 ┆ Utility A ┆ 0.888889 β”‚ β”‚ 102 ┆ Utility B ┆ 2 ┆ Utility B ┆ 1.0 β”‚ β”‚ 103 ┆ utility c ┆ 1 ┆ Utility A ┆ 0.777778 β”‚ β”‚ 103 ┆ utility c ┆ 2 ┆ Utility B ┆ 0.777778 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ It seems polars-fuzzy-match uses the nucleo library which says it uses the Smith-Waterman algorithm. rapidfuzz has an open feature request for this algorithm. It seems like it only gives a scores if they are actual substrings? (eia_df .lazy() .join(wg_df.lazy(), how="cross") .with_columns( score = pds.str_fuzz(pl.col.utility_name, pl.col.utility_name_right), ) .with_columns( a = pl.col.utility_name .str.to_lowercase() .str.contains( pl.col.utility_name_right.str.to_lowercase(), literal = True ), b = pl.col.utility_name_right.str.to_lowercase() .str.contains( pl.col.utility_name.str.to_lowercase(), literal = True ) ) .filter(pl.col.a | pl.col.b) .collect() ) shape: (2, 7) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ eia_id ┆ utility_name ┆ wg_id ┆ utility_name_right ┆ score ┆ a ┆ b β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ i64 ┆ str ┆ f64 ┆ bool ┆ bool β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═══════════════β•ͺ═══════β•ͺ════════════════════β•ͺ══════════β•ͺ══════β•ͺ═══════║ β”‚ 101 ┆ Utility A co. ┆ 1 ┆ Utility A ┆ 0.818182 ┆ true ┆ false β”‚ β”‚ 102 ┆ Utility B ┆ 2 ┆ Utility B ┆ 1.0 ┆ true ┆ true β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ https://polars-ds-extension.readthedocs.io/en/latest/string.html#polars_ds.string.str_fuzz https://github.com/rapidfuzz/rapidfuzz-rs https://github.com/helix-editor/nucleo https://en.wikipedia.org/wiki/Smith%E2%80%93Waterman_algorithm https://github.com/rapidfuzz/RapidFuzz/issues/175
2
0
78,974,836
2024-9-11
https://stackoverflow.com/questions/78974836/fitting-multidimensional-data-with-python-symfit-odemodel
I am trying to fit the parameters of an ODE to data with two dimensions, which should generally be possible, according to the example Fitting multidimensional datasets. This is my failed attempt so far import symfit as sf import numpy as np # data x = np.arange(0,19) data = 10e-4 * np.array([8,10,12,11,10,15,25,37,46,40,43,35,27,14,8,10,13,9,10]) data2 = 10e-3 * np.array([0,0,0,0,0,1,1,1,1,1,1,1,1,1,0,0,0,0,0]) # model t, mp, inp = sf.variables('t, mp, inp') tau, w = sf.parameters('tau, w') model = {sf.D(mp, t): (-mp + inp * w) / tau } # fitting ode_model = sf.ODEModel(model, initial={inp: data2[0], t: 0.0, mp: data[0]}) fit = sf.Fit(ode_model, inp=data2, t=x, mp=data) fit_result = fit.execute() It seems I'm not correctly defining inp in the model definition. I'm getting the following error TypeError: got an unexpected keyword argument 'inp' . I suspect that I'm making a mistake in providing inp as named_data to sf.Fit() as ist does not appear to be an independent variable in the model documentation sf.Fit() This is the full error message: /usr/local/lib/python3.10/dist-packages/symfit/core/fit.py in __init__(self, model, objective, minimizer, constraints, absolute_sigma, *ordered_data, **named_data) 372 # Bind as much as possible the provided arguments. 373 signature = self._make_signature() --> 374 bound_arguments = signature.bind_partial(*ordered_data, **named_data) 375 376 # Select objective function to use. Has to be done before calling /usr/lib/python3.10/inspect.py in bind_partial(self, *args, **kwargs) 3191 Raises `TypeError` if the passed arguments can not be bound. 3192 """ -> 3193 return self._bind(args, kwargs, partial=True) 3194 3195 def __reduce__(self): /usr/lib/python3.10/inspect.py in _bind(self, args, kwargs, partial) 3173 arguments[kwargs_param.name] = kwargs 3174 else: -> 3175 raise TypeError( 3176 'got an unexpected keyword argument {arg!r}'.format( 3177 arg=next(iter(kwargs)))) TypeError: got an unexpected keyword argument 'inp' Could someone help? Thank you so so much :-)
inp has to be defined as an expression and integrated into the expression d mp / dt. To do so, the data for inp has to be fit so as to reproduce data2. Since data2 looks like a square wave, a Fourier series is used to fit the data. The following is the code implementing these ideas: import symfit as sf import numpy as np from functools import reduce import matplotlib.pyplot as plt from scipy.optimize import curve_fit from scipy.integrate import solve_ivp def square_wave_symbolic(x, L, shift, n=8): return reduce(lambda a, b: a + b, [sf.sin(2*(1+2 *k) *sf.pi * (x+shift) / L)/(1+2* k) for k in range(n)]) def square_wave_numeric(x, L, shift, n=8): return reduce(lambda a, b: a + b, [np.sin(2*(1+2 *k) *np.pi * (x+shift) / L)/(1+2* k) for k in range(n)]) # data x = np.arange(0,19) data = 10e-4 * np.array([8,10,12,11,10,15,25,37,46,40,43,35,27,14,8,10,13,9,10]) data2 = 10e-3 * np.array([0,0,0,0,0,1,1,1,1,1,1,1,1,1,0,0,0,0,0]) # Fit square wave alpha0 = 5E-3 beta0 = 5E-3 L0 = 18 shift0 = -4 n = 9 plt.figure() plt.title('Approximation of inp using Fourier Series') result = curve_fit(lambda x, alpha, beta, shift, L: alpha + beta * square_wave_numeric(x, L, shift, n=n), x, data2, p0=(alpha0, beta0, shift0, L0), full_output=True) alpha, beta, shift, Lf = result[0] plt.scatter(x, data2, label='Original data') plt.plot(x, alpha + beta * square_wave_numeric(x, Lf, shift, n=n), label='Fit data') plt.legend() # model t, mp = sf.variables('t, mp') tau, w = sf.parameters('tau, w') inp = alpha + beta * square_wave_symbolic(t, Lf, shift, n=n) model = {sf.D(mp, t): (-mp + inp * w) / tau } # fitting ode_model = sf.ODEModel(model, {t: 0.0, mp: data[0]}) fit = sf.Fit(ode_model, t=x, mp=data) fit_result = fit.execute() print(fit_result.params) w = fit_result.params['w'] tau = fit_result.params['tau'] # Verify parameters by solving numerically def f(t, x, w, tau): mp = x[0] inp = alpha + beta * square_wave_numeric(t, Lf, shift, n=n) return np.array([(-mp + inp * w) / tau]) t0 = x[0] tf = x[-1] t_eval = np.linspace(t0, tf, 200) ode_result = solve_ivp(f, (t0, tf), (data[0],), t_eval=t_eval, args=(w, tau), method='Radau') plt.figure() plt.scatter(x, data, label='Original data') plt.plot(ode_result.t, ode_result.y[0], label='ODE solver data') plt.legend() I end up with the following value for fit_result.params: OrderedDict([('tau', 1.191491195628205), ('w', 3.4675371620653133)]) The following are the plot for inp variable fit: ODE fit plot: Notes: The square wave function describing inp is not smooth. I am not sure about where the data comes from, but if this is just some test data, the real data may have to be fit using a different function. It would make sense to do the ODE and inp function fit together. Doing it using a minimize function is feasible. I am not sure how this could be done with symfit as I am not familiar with this library.
2
1
78,985,516
2024-9-14
https://stackoverflow.com/questions/78985516/how-to-automatically-download-or-warn-about-a-non-pypi-dependency-of-a-python-pa
I have a Python package, which is distributed on PyPi. It depends on number of other packages available on PyPi and on Psi4, which is only distributed on Conda repositories (https://anaconda.org/psi4/psi4), not on PyPi. Now, my package is distributed as wheel package via hatchling, so my pyproject.toml looks similar to this: [build-system] requires = ["hatchling"] build-backend = "hatchling.build" [project] name = "My project" version = "1.0.0" authors = [ ] description = "New method" readme = "README.md" requires-python = ">=3.12" classifiers = [ "Programming Language :: Python :: 3", "License :: OSI Approved :: GNU General Public License v3 (GPLv3)", "Operating System :: POSIX :: Linux", ] dependencies = [ "qiskit==1..0", "qiskit-nature>=0.5.1", "numpy>=1.23.0", "deprecated>=1.2.0"] Is there any way to deal with such an external dependency automatically? Ideally it'd download and install Psi4 from its repositories, but if not, is there any way to get at least a warning before the download from PyPi starts? I had a look around and found this related question, which, unfortunately, got no answers: Distributing pip packages that have non-pypi dependencies
If you use a setup.py instead of pyproject.toml, you can add an installation check for psi4; pyproject.toml does not natively support conda packages. For setup.py, you can approach it like: import subprocess import sys try: import psi4 except ImportError: subprocess.check_call([sys.executable, "-m", "conda", "install", "-c", "psi4", "psi4"]) from setuptools import setup setup( # Your package configuration here ) For more information on setuptools and how to make it replicate the work of your pyproject.toml see here. Additionally, you can mention the conda dependency in your README.md. When the PyPI package is installed, setup.py is run automatically, so you can also add a warning in there if you don't want to install it directly. This approach works with sdists but not with wheels, which bypass setup.py, as mentioned in the comments by @phd. For wheels, you can add a piece of code in your project itself to check for psi4 and either warn and exit or install it. Before your code begins its work, add something like this: try: import psi4 except ImportError: raise ImportError("Psi4 is required but not installed. Please install it via Conda: " "`conda install -c conda-forge psi4`.") Additionally, you can state in your README.md and in a long description that psi4 is a requirement.
2
1
78,985,089
2024-9-14
https://stackoverflow.com/questions/78985089/modifying-multiple-dimensions-of-jax-array-simultaneously
When using the jax_array.at[idx] function, I wish to be able to set values at both a set of specified rows and columns within the jax_array to another jax_array containing values in the same shape. For example, given a 5x5 jax array, I might want to set the values, jax_array.at[[0,3],:][:,[1,2]] to some 2x2 array of values. However, I am coming across an issue where the _IndexUpdateRef' object is not subscriptable. I understand the idea of the error (and I get a similar one when using 2 chained .at[]s), but I want to know if there is anyway to achieve the desired functionality within 1 line.
JAX follows the indexing semantics of NumPy, and NumPy's indexing semantics allow you to do this via broadcasted arrays of indices (this is discussed in Integer array indexing in the NumPy docs). So for example, you could do something like this: import jax.numpy as jnp x = jnp.zeros((4, 6), dtype=int) y = jnp.array([[1, 2], [3, 4]]) i = jnp.array([0, 3]) j = jnp.array([1, 2]) # reshape indices so they broadcast i = i[:, jnp.newaxis] j = j[jnp.newaxis, :] x = x.at[i, j].set(y) print(x) [[0 1 2 0 0 0] [0 0 0 0 0 0] [0 0 0 0 0 0] [0 3 4 0 0 0]] Here the i index has shape (2, 1), and the j index has shape (1, 2), and via broadcasting rules they index a 2x2 noncontiguous subgrid of the array x, which you can then set to the contents of y in a single statement.
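A possibly more compact spelling, assuming jnp.ix_ mirrors numpy.ix_ here (it builds the same broadcastable index arrays shown above), would be:
# Sketch: jnp.ix_ forms the open mesh of row/column indices for the 2x2 update.
x = jnp.zeros((4, 6), dtype=int)
x = x.at[jnp.ix_(jnp.array([0, 3]), jnp.array([1, 2]))].set(y)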
2
2
78,983,916
2024-9-13
https://stackoverflow.com/questions/78983916/implementing-discriminated-unions-in-pydantic-without-using-nested-models
I'm trying to implement discriminated unions in Pydantic to select the correct class based on user input using the discriminator parameter. While the documentation suggests creating a nested model class to handle this easily, I'd like to use this functionality without introducing an additional nested model and have a similar behaviour as a normal Pydantic BaseModel class. I've tried to use RootModel as a workaround, but the resulting object is encapsulated within the .root property, which isn't ideal for my use case. I am able to do .model_dump() but unable to access the attributes on it directly. Is there a better way to implement this without creating a nested model or using RootModel? from typing import Literal, Union, Annotated from pydantic import BaseModel, Field, RootModel class Cat(BaseModel): pet_type: Literal["cat"] meows: int class Dog(BaseModel): pet_type: Literal["dog"] barks: float class Lizard(BaseModel): pet_type: Literal["reptile", "lizard"] scales: bool Animal = Annotated[ Union[Cat, Dog, Lizard], Field(discriminator="pet_type"), ] AnimalModel = RootModel[Animal] animal = AnimalModel.model_validate({"pet_type": "cat", "meows": 3}) try: # want to access the attributes directly print(animal.pet_type) except AttributeError as e: print(e) #> "RootModel[Annotated[Union[Cat, Dog, Lizard], FieldInfo(annotation=NoneType, required=True, discriminator='pet_type')]]" object has no attribute 'pet_type' # have to access the attributes by first accessing the .root attribute print(animal.root.pet_type) (am using Pydantic v2.7)
You can use TypeAdapter instead of RootModel: from typing import Literal, Union, Annotated from pydantic import BaseModel, Field, TypeAdapter class Cat(BaseModel): pet_type: Literal["cat"] meows: int class Dog(BaseModel): pet_type: Literal["dog"] barks: float class Lizard(BaseModel): pet_type: Literal["reptile", "lizard"] scales: bool Animal = Union[Cat, Dog, Lizard] AnimalAdapter: TypeAdapter[Animal] = TypeAdapter( Annotated[Animal, Field(discriminator="pet_type")] ) animal_1 = AnimalAdapter.validate_python({"pet_type": "cat", "meows": 3}) animal_2 = AnimalAdapter.validate_python({"pet_type": "dog", "barks": 2}) try: print(animal_1.pet_type) # ok print(animal_2.pet_type) # ok except AttributeError as e: print(e)
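The same adapter can also serialize back out, which keeps validation and dumping symmetrical; a short follow-up sketch using pydantic v2's TypeAdapter methods (outputs shown are approximate):
print(AnimalAdapter.dump_python(animal_1))  # e.g. {'pet_type': 'cat', 'meows': 3}
print(AnimalAdapter.dump_json(animal_2))    # e.g. b'{"pet_type":"dog","barks":2.0}'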
2
3
78,983,868
2024-9-13
https://stackoverflow.com/questions/78983868/keep-only-rows-that-have-at-least-one-null
I am trying to do basically the opposite of drop_nulls(). I want to keep all rows that have at least one null. I want to do something like (but I don't want to list all other columns): for (name,) in ( df.filter( pl.col("a").is_null() | pl.col("b").is_null() | pl.col("c").is_null() ) .select("name") .unique() .rows() ): print( f"Ignoring `{name}` because it has at least one null", file=sys.stderr, ) df = df.drop_nulls()
It sounds like you are looking for pl.Expr.any_horizontal. The following will keep all rows containing at least one null value (in any of the columns). df.filter(pl.any_horizontal(pl.all().is_null()))
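A minimal end-to-end sketch with made-up data, including the reporting step from the question:
import polars as pl

df = pl.DataFrame({
    "name": ["x", "y", "z"],
    "a": [1, None, 3],
    "b": [4, 5, 6],
})

with_nulls = df.filter(pl.any_horizontal(pl.all().is_null()))
print(with_nulls["name"].to_list())  # ['y'] -- names to report before df.drop_nulls()
df = df.drop_nulls()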
4
3
78,980,521
2024-9-13
https://stackoverflow.com/questions/78980521/mapping-over-arrays-of-functions-in-jax
What is the most performant, idiomatic way of mapping over arrays of functions in JAX? Context: This GitHub issue shows a way to apply vmap to several functions using lax.switch. The example is reproduced below: from jax import lax, vmap import jax.numpy as jnp def func1(x): return 2 * x def func2(x): return -2 * x def func3(x): return 0 * x functions = [func1, func2, func3] index = jnp.arange(len(functions)) x = jnp.ones((3, 5)) vmap_functions = vmap(lambda i, x: lax.switch(i, functions, x)) vmap_functions(index, x) # DeviceArray([[ 2., 2., 2., 2., 2.], # [-2., -2., -2., -2., -2.], # [ 0., 0., 0., 0., 0.]], dtype=float32) My specific questions are: Is this (currently) the most idiomatic way of mapping over arrays of functions in JAX? What performance penalties, if any, does this method incur? (This refers to both runtime and/or compile-time performance.)
For the kind of operation you're doing, where the functions are applied over full axes of an array in a way that's known statically, you'll probably get the best performance via a simple Python loop: def map_functions(functions: list[Callable[[Array], Array]], x: Array) -> Array: assert len(functions) == x.shape[0] return jnp.array([f(row) for f, row in zip(functions, x)]) The method based on switch is designed for the more general case where the structure of the indices is not known statically. What performance penalties, if any, does this method incur? (This refers to both runtime and/or compile-time performance.) vmap of switch is implemented via select, which will compute the output of each function for the full input array before selecting just the pieces needed to construct the output, so if the functions are expensive to compute, it may lead to longer runtimes.
2
1
78,982,423
2024-9-13
https://stackoverflow.com/questions/78982423/how-to-propagate-null-in-a-column-after-first-occurrence
I have 2 data sets: The first one describes what I expect: expected = { "name": ["start", "stop", "start", "stop", "start", "stop", "start", "stop"], "description": ["a", "b", "c", "d", "e", "f", "g", "h"], } and the second one describes what I observe: observed = { "name": ["start", "stop", "start", "stop", "stop", "stop", "start"], "time": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7], } I want to match all my observations to descriptions based on the order I expect. But once I see an inconsistency, nothing should match anymore. I managed to find the first inconsistency like: observed_df = pl.DataFrame(observed).with_row_index() expected_df = pl.DataFrame(expected).with_row_index() result = observed_df.join(expected_df, on=["index", "name"], how="left").select( "description", "time" ) """ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ description ┆ time β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════║ β”‚ a ┆ 0.1 β”‚ β”‚ b ┆ 0.2 β”‚ β”‚ c ┆ 0.3 β”‚ β”‚ d ┆ 0.4 β”‚ β”‚ null ┆ 0.5 β”‚ -> First inconsistency gets a "null" description β”‚ f ┆ 0.6 β”‚ β”‚ g ┆ 0.7 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ """ How can I propagate this null passed the first inconsistency? Also, my real data has an additional id column, where each id is a case like described above, and independent from other ids. Is it possible to somehow "group by id" and apply this logic all at once instead of working with each id separately: observed = { "id": [1, 2, 1, 2, 2], "name": ["start", "start", "stop", "stop", "stop"], "time": [0.1, 0.2, 0.3, 0.4, 0.5], } expected = { "id": [1, 1, 2, 2], "name": ["start", "stop", "start", "stop"], "description": ["a", "b", "c", "d"], } result = { "id": [1, 2, 1, 2, 2], "description": ["a", "c", "b", "d", None], "time": [0.1, 0.2, 0.3, 0.4, 0.5], }
The check whether any null value appeared in an increasing window can be done using a cumulative evaluation, such as pl.Expr.cum_sum. A when-then-otherwise construct can be used to propagate null values accordingly. In your example, this might look as follows. ( observed_df .join( expected_df, on=["index", "name"], how="left", ) .select("description", "time") .with_columns( pl.when( pl.col("description").is_null().cum_sum() == 0 ).then( "description" ) ) ) shape: (7, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ description ┆ time β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════║ β”‚ a ┆ 0.1 β”‚ β”‚ b ┆ 0.2 β”‚ β”‚ c ┆ 0.3 β”‚ β”‚ d ┆ 0.4 β”‚ β”‚ null ┆ 0.5 β”‚ β”‚ null ┆ 0.6 β”‚ β”‚ null ┆ 0.7 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ If you'd like to evaluate this expression separately for each group defined by id, a window function, such as pl.Expr.over, might be used. ... pl.when( pl.col("description").is_null().cum_sum() == 0 ).then( "description" ).over("id") # <-- ...
4
2
78,982,686
2024-9-13
https://stackoverflow.com/questions/78982686/filter-polars-dataframe-on-records-where-column-values-differ-catching-nulls
Have: import polars as pl df = pl.DataFrame({'col1': [1,2,3], 'col2': [1, None, None]}) in polars dataframes, those Nones become nulls: > df β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ col1 ┆ col2 β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•ͺ══════║ β”‚ 1 ┆ 1 β”‚ β”‚ 2 ┆ null β”‚ β”‚ 3 ┆ null β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ Want: some command that returns the last two rows of df, since 2 & 3 are not null Tried: ..., but everything I've thought to try seems to drop/ignore records where one column is null: df.filter(pl.col('col1')!=pl.col('col2')) # returns no rows df.filter(~pl.col('col1')==pl.col('col2')) # returns no rows df.filter(~pl.col('col1').eq(pl.col('col2'))) # returns no rows ...
It is mentioned somewhat at the end of the .filter() docs. There are "missing" functions: .eq_missing() .ne_missing() df.filter(pl.col.col1.ne_missing(pl.col.col2)) shape: (2, 2) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ col1 ┆ col2 β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•ͺ══════║ β”‚ 2 ┆ null β”‚ β”‚ 3 ┆ null β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜
2
3
78,981,683
2024-9-13
https://stackoverflow.com/questions/78981683/error-installing-trottersuzuki-package-in-venv-numpy-not-found-error-even-thoug
It says numpy not installed even though it is installed. I thought may be the venv is not accessible to pip (which it should be, because numpy is installed inside the venv) and I installed it system wide using sudo apt install python3-numpy as you can see in the very last of the following snippet. vanangamudi@kaithadi:~/code/bec-gp/BEC_GP $ workon gpinn (gpinn) vanangamudi@kaithadi:~/code/bec-gp/BEC_GP $ pip install trottersuzuki Collecting trottersuzuki Using cached trottersuzuki-1.6.2.tar.gz (218 kB) Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error Γ— Getting requirements to build wheel did not run successfully. β”‚ exit code: 1 ╰─> [20 lines of output] Traceback (most recent call last): File "/home/vanangamudi/.virtualenvs/gpinn/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module> main() File "/home/vanangamudi/.virtualenvs/gpinn/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/vanangamudi/.virtualenvs/gpinn/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel return hook(config_settings) ^^^^^^^^^^^^^^^^^^^^^ File "/tmp/pip-build-env-d6cdwe1m/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 332, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=[]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/tmp/pip-build-env-d6cdwe1m/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 302, in _get_build_requires self.run_setup() File "/tmp/pip-build-env-d6cdwe1m/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 503, in run_setup super().run_setup(setup_script=setup_script) File "/tmp/pip-build-env-d6cdwe1m/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 318, in run_setup exec(code, locals()) File "<string>", line 6, in <module> ModuleNotFoundError: No module named 'numpy' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error Γ— Getting requirements to build wheel did not run successfully. β”‚ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. (gpinn) vanangamudi@kaithadi:~/code/bec-gp/BEC_GP $ python -c 'import numpy' (gpinn) vanangamudi@kaithadi:~/code/bec-gp/BEC_GP $ python -c 'import matplotlib' Traceback (most recent call last): File "<string>", line 1, in <module> ModuleNotFoundError: No module named 'matplotlib' (gpinn) vanangamudi@kaithadi:~/code/bec-gp/BEC_GP $
trottersuzuki doesn't provide binary wheels, only an sdist. When installing from a source distribution, modern pip first builds a wheel in a new, isolated virtual environment. In this isolated venv there is no numpy. For pip to install numpy during the build phase, the package must contain a pyproject.toml file that lists numpy as a build dependency. But there is no such file. I'd advise reporting it… oh, I see you've already reported the bug; you should have mentioned this in the question. Until the bug is fixed there is a workaround: disable build isolation: $ pip install numpy setuptools $ pip install --no-build-isolation trottersuzuki
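For context, the fix on the package's side would be a pyproject.toml roughly along these lines (a sketch, not a file the project actually ships today):
[build-system]
requires = ["setuptools", "wheel", "numpy"]
build-backend = "setuptools.build_meta"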
2
1
78,981,438
2024-9-13
https://stackoverflow.com/questions/78981438/unittest-class-init-mock-exception-raised
#!/usr/bin/env python3 import unittest from unittest.mock import patch class User(object): def __init__(self): self.__name = None self.__authorised_users = ["me", "you"] local_input = input("please provide your windows 8 character lower case login: ") if local_input not in self.__authorised_users: raise ValueError("you have no permission to run this app") else: self.__name = local_input class TestUser(unittest.TestCase): def testUserClassFound(self): self.assertNotIsInstance(ModuleNotFoundError, User) @patch('builtins.input', lambda *args:"y") def testUserClassInit(self): # just check if user class has __name set to none local_object = User() self.assertEqual(local_object._User__name, None) if __name__ == "__main__": unittest.main() I would like to, in the second test, just assure when the class object is created, the tester checks the class has the attribute __name and set to None. I need to patch the raise ValueError from the class init , but I can't find the correct patch.
You can patch User.__init__ with a wrapper that suppresses ValueError: def ignore(func, exception): def wrapper(*args, **kwargs): try: return func(*args, **kwargs) except exception: pass return wrapper class TestUser(unittest.TestCase): @patch('builtins.input', lambda *args:"y") @patch.object(User, '__init__', ignore(User.__init__, ValueError)) def testUserClassInit(self): local_object = User() self.assertEqual(local_object._User__name, None) Demo: https://ideone.com/TEpHsF
2
2
78,979,081
2024-9-12
https://stackoverflow.com/questions/78979081/python-exception-stack-trace-not-full-when-function-is-wrapped
I have two files t.py: import functools import traceback def wrapper(func): @functools.wraps(func) def wrapped(*args, **kwargs): try: return func(*args, **kwargs) except Exception as e: traceback.print_exception(e) return wrapped @wrapper def problematic_function(): raise ValueError("Something went wrong") and t2.py: from t import problematic_function problematic_function() when I call from command line python t2.py, the stack trace information that has info from t2.py is lost when printing traceback.print_exception(e) in the t.py. What I get is Traceback (most recent call last): File "/home/c/t.py", line 9, in wrapped return func(*args, **kwargs) File "/home/c/t.py", line 18, in problematic_function raise ValueError("Something went wrong") ValueError: Something went wrong where as if I remove the decorator I get: Traceback (most recent call last): File "/home/c/t2.py", line 3, in <module> problematic_function() File "/home/cu/t.py", line 17, in problematic_function raise ValueError("Something went wrong") ValueError: Something went wrong How do I get the full stack trace in the wrapped function as without the wrapper? Thanks!
To print the full stack trace you can unpack traceback.walk_tb to obtain the last frame of the stack from the traceback, and pass it to traceback.print_stack as a starting frame to output the entire stack: def wrapper(func): @functools.wraps(func) def wrapped(*args, **kwargs): try: return func(*args, **kwargs) except Exception as e: *_, (last_frame, _) = traceback.walk_tb(e.__traceback__) traceback.print_stack(last_frame) return wrapped Demo here
2
1
78,980,426
2024-9-13
https://stackoverflow.com/questions/78980426/flag-the-max-value-in-each-column-of-a-dataframe-as-true-and-the-rest-as-false
I have a DataFrame that I am rounding. After the round, I subtract the original from the resultant. This gives me a data frame with a shape identical to the original, but which contains the amount of change the rounding operation caused. I need to transform this into a Boolean where there is a true flag for the max of the row, and everything else in the row is false. All steps but the final one are handled with a vectorized function. But I can't seem to figure out how to vectorize the last step. This is what I am currently doing: a = pd.DataFrame([[2.290119, 5.300725, 17.266693, 75.134857, 0.000000, 0.000000, 0.007606], [0.000000, 7.560276, 55.579175, 36.858266, 0.000000, 0.000000, 0.002284], [0.001574, 15.225538, 39.309742, 45.373800, 0.000951, 0.001198, 0.087197], [0.000000, 55.085390, 15.547927, 29.327661, 0.000000, 0.017691, 0.021331], [0.000000, 66.283488, 15.636673, 17.912315, 0.000000, 0.003185, 0.164339]]) b = a.round(-1) # round to 10's place (not 10ths) c = b-a round_modifier = c.apply(lambda x: x.eq(x.max()), axis="columns") print(round_modifier) 0 1 2 3 4 5 6 0 False False False True False False False 1 False False True False False False False 2 False True False False False False False 3 False True False False False False False 4 False False True False False False False I am aware of DataFrame.idxmax(axis="columns"), which gives me the column name (of each row) where the max is found, but I can't seem to find a (pythonic) way to take that and populate the corresponding flag with a True. The lambda expression I'm using gives the correct result, but I'm hoping for a faster method. For anyone wondering, the use case is that I want to round the values in the original data frame to the tens place, such that they sum to 100. I have pre-scaled this data so it should be close, but the rounding can cause the sum to come to 90 or 110. I intend to use this T/F matrix to decide which rounded value caused the most delta, then round it in the opposite direction since this is the minimum impact method with which to coerce the series to properly sum to 100 in chunks of 10.
You can use idxmax to get the label of the column holding the max value in each row, and use numpy broadcasting to compare those labels against the column names. m = c.columns.to_numpy() == c.idxmax(axis=1).to_numpy()[:, None] new_df = pd.DataFrame(np.where(m, True, False), columns=c.columns) End result: 0 1 2 3 4 5 6 False False False True False False False False False True False False False False False True False False False False False False True False False False False False False False True False False False False
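A pandas-only alternative is to compare each row against its own maximum; note that, unlike idxmax, this flags every tied maximum in a row rather than only the first:
# Row-wise comparison against the row max (flags all ties).
round_modifier = c.eq(c.max(axis=1), axis=0)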
2
1
78,978,186
2024-9-12
https://stackoverflow.com/questions/78978186/correct-way-to-find-dimension-after-broadcasting-in-numpy
Suppose I am given a few numpy arrays, say a, b and c, which are assumed to be broadcastable. Is there a standard or othwerwise an elegant way to find the array shape after broadcasting? Of course, something like (a+b+c).shape would work, but is very inefficient if I am only interested in the shape of the result.
You can use broadcast_shapes. Like this result_shape = np.broadcast_shapes(a.shape, b.shape, c.shape) Here is the documentation page: https://numpy.org/doc/stable/reference/generated/numpy.broadcast_shapes.html Good luck!
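A quick sketch with toy shapes:
import numpy as np

a = np.ones((3, 1))
b = np.ones((1, 4))
c = np.ones(4)

print(np.broadcast_shapes(a.shape, b.shape, c.shape))  # (3, 4)
# np.broadcast(a, b, c).shape gives the same result without computing a + b + c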
2
3
78,977,665
2024-9-12
https://stackoverflow.com/questions/78977665/django-autoreload-raises-typeerror-unhashable-type-types-simplenamespace
When I upgrade importlib_meta from version 8.4.0 to 8.5.0 (released just yesterday, Sep 11 2024), I get the following error when I start running the development server with python manage.py runserver: File "/app/manage.py", line 17, in main execute_from_command_line(sys.argv) File "/usr/local/lib/python3.10/site-packages/django/core/management/__init__.py", line 442, in execute_from_command_line utility.execute() File "/usr/local/lib/python3.10/site-packages/django/core/management/__init__.py", line 436, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/local/lib/python3.10/site-packages/django/core/management/base.py", line 413, in run_from_argv self.execute(*args, **cmd_options) File "/usr/local/lib/python3.10/site-packages/django/core/management/commands/runserver.py", line 75, in execute super().execute(*args, **options) File "/usr/local/lib/python3.10/site-packages/django/core/management/base.py", line 459, in execute output = self.handle(*args, **options) File "/usr/local/lib/python3.10/site-packages/django/core/management/commands/runserver.py", line 112, in handle self.run(**options) File "/usr/local/lib/python3.10/site-packages/django/core/management/commands/runserver.py", line 119, in run autoreload.run_with_reloader(self.inner_run, **options) File "/usr/local/lib/python3.10/site-packages/django/utils/autoreload.py", line 671, in run_with_reloader start_django(reloader, main_func, *args, **kwargs) File "/usr/local/lib/python3.10/site-packages/django/utils/autoreload.py", line 660, in start_django reloader.run(django_main_thread) File "/usr/local/lib/python3.10/site-packages/django/utils/autoreload.py", line 344, in run self.run_loop() File "/usr/local/lib/python3.10/site-packages/django/utils/autoreload.py", line 350, in run_loop next(ticker) File "/usr/local/lib/python3.10/site-packages/django/utils/autoreload.py", line 390, in tick for filepath, mtime in self.snapshot_files(): File "/usr/local/lib/python3.10/site-packages/django/utils/autoreload.py", line 411, in snapshot_files for file in self.watched_files(): File "/usr/local/lib/python3.10/site-packages/django/utils/autoreload.py", line 304, in watched_files yield from iter_all_python_module_files() File "/usr/local/lib/python3.10/site-packages/django/utils/autoreload.py", line 120, in iter_all_python_module_files return iter_modules_and_files(modules, frozenset(_error_files)) TypeError: unhashable type: 'types.SimpleNamespace' I actually could narrow the problem down to the following commit https://github.com/python/importlib_metadata/commit/56b61b3dd90df2dba2da445a8386029b54fdebf3. When I install importlib_meta just one commit before the problematic commit via pip install git+https://github.com/python/importlib_metadata@d968f6270d55f27a10491344a22e9e0fd77b5583 the error disappears. When I install importlib_meta at the problematic commit the error starts to appear. I can not really make sense out of the Traceback and how the problem might be connected to the changes of the mentioned commit. Has anyone an idea what could cause this problem or how I can debug it? Update (Sep 15, 2024) The problem is solved now with version 3.20.2 of the zipp package.
The problematic commit in importlib_meta puts a types.SimpleNamespace object in sys.modules["zipp.compat.overlay.zipfile"]: zipfile = types.SimpleNamespace(**vars(importlib.import_module('zipfile'))) ... sys.modules[__name__ + '.zipfile'] = zipfile # type: ignore[assignment] Here's the invocation error site in django: modules = tuple( m for m in map(sys.modules.__getitem__, keys) if not isinstance(m, weakref.ProxyTypes) ) # WARNING: `modules` now has `zipfile = types.SimpleNamespace(...)` in it return iter_modules_and_files(modules, frozenset(_error_files)) @lru_cache(maxsize=1) def iter_modules_and_files(modules, extra_files): From the Python docs on @functools.lru_cache: Since a dictionary is used to cache results, the positional and keyword arguments to the function must be hashable. types.SimpleNamespace() is not hashable: >>> from types import SimpleNamespace >>> hash(SimpleNamespace()) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unhashable type: 'types.SimpleNamespace' I would say that this is a django problem - django should account for the fact that sys.modules is subject to monkeypatching at runtime, and can hold arbitrary objects (including unhashable ones).
5
3
78,976,021
2024-9-12
https://stackoverflow.com/questions/78976021/azure-monitor-open-telemetry-python-package-raising-exception-when-python-app-de
I have a Python web application built with Plotly Dash and deployed on Azure App Service using Python 3.12. As Azure App Service has not yet enabled Python 3.12 version to have application insight enabled, I have utilized the package: azure-monitor-opentelemetry==1.6.2 within my application to log exceptions into my application insight resource. However, as I deploy my web application on Azure App Service, and when someone is accessing the web app, my application logs the following exception onto the application insight: Failed to derive Resource from Tracer Provider: 'ProxyTracerProvider' object has no attribute 'resource' Traceback (most recent call last): File "/tmp/8dcd22e385c2da5/antenv/lib/python3.12/site-packages/azure/monitor/opentelemetry/exporter/export/trace/_exporter.py", line 91, in export resource = tracer_provider.resource # type: ignore AttributeError: 'ProxyTracerProvider' object has no attribute 'resource' I have implemented the azure-monitor-opentelemetry in my application simply as following: appopentelemetry.py: from azure.monitor.opentelemetry import configure_azure_monitor from models.environmentmanager.environmentmanager import EnvironmentManager def azure_monitoring_open_telemetry(): environment_manager = EnvironmentManager() connection_string = environment_manager.get_connection_string() # The app insight connection string return configure_azure_monitor( connection_string=connection_string, enable_live_metrics=True, ) and in my app.py: ...imports... environment_manager = EnvironmentManager() # This should be executed if the environment is not local, this is set in .env if ran locally # and set in environment variables of the app service on Azure when deployed if not environment_manager.get_is_local(): from functions.app.appopentelemetry import azure_monitoring_open_telemetry azure_monitoring_open_telemetry() ... app = Dash(__name__, use_pages=True, external_stylesheets=stylesheets) ... app.layout = dmc.MantineProvider(...) if __name__ == '__main__': app.run(debug=True, port=8000) Can I know what I am doing wrong?
Version 1.6.2 also upgrades azure-monitor-opentelemetry-exporter to "1.0.0b29", which has breaking changes. Just downgrade to azure-monitor-opentelemetry-exporter = "1.0.0b28" and perhaps to azure-monitor-opentelemetry = "1.6.1". That will solve the issue for now. In general, I think the configuration should be done differently rather than using configure_azure_monitor, but that is not entirely clear from the documentation. Try reading https://opentelemetry.io/docs/languages/python/getting-started/
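A pinned install reflecting the versions suggested above might look like this (adjust to your own dependency manager):
pip install "azure-monitor-opentelemetry==1.6.1" "azure-monitor-opentelemetry-exporter==1.0.0b28"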
3
1
78,977,145
2024-9-12
https://stackoverflow.com/questions/78977145/derived-python-dataclass-cannot-override-default-value
In the following code snippet the dataclass Derived is derived from dataclass Base. The Derived dataclass is setting new default values for field1 and field2. from dataclasses import dataclass @dataclass class Base: field1: str field2: str = "default_base_string" @dataclass class Derived(Base): field1 = "default_string1" field2 = "default_string2" print(Derived()) Because Derived is a dataclass and has set a default value for all member variables field1 and field2, I expected that I could simply initialize the class calling Derived(). However I get the following error: TypeError: Derived.__init__() missing 1 required positional argument: 'field1' Is it not possible to set the default value for a member variable in a derived dataclass, so that this member variable becomes optional in the autogenerated __init__() method? Must I always define my own __init__() method in the Derived class? Or am I doing something wrong here?
To set default values in Derived you need to add the type annotations; without them, the assignments are treated as plain class attributes rather than dataclass fields, so the inherited fields' defaults are not overridden. @dataclass class Derived(Base): field1: str = "default_string1" field2: str = "default_string2" print(Derived()) # Derived(field1='default_string1', field2='default_string2')
2
2
78,975,956
2024-9-11
https://stackoverflow.com/questions/78975956/python-list-comprehension-two-loops-with-three-results
I can ask my question best by just giving an example. Let's say I want to use a list comprehension to generate a set of 3-element tuples from two loops, something like this: [ (y+z,y,z) for y in range(10) if y%2==0 for z in range(20) if z%3==0 ] This works, giving me [(0, 0, 0), (3, 0, 3), (6, 0, 6), (9, 0, 9), (12, 0, 12), (15, 0, 15), ... ] I am wondering, though, if there is a way to do it more cleanly, something to the effect of [ (x,y,z) for y in range(10) if y%2==0 for z in range(20) if z%3==0 ... somehow defining x(y,z) ... ] I would consider something like this to be more clean, especially since what I really need to do is much more complicated than the example I give here. Everything I have tried has given me a syntax error.
You can do: out = [ (x, y, z) for y in range(10) if y % 2 == 0 for z in range(20) if z % 3 == 0 for x in [y + z] # <-- initialize `x` in list-comprehension ] This is optimized since Python 3.9: https://docs.python.org/3/whatsnew/3.9.html#optimizations
3
5
78,974,383
2024-9-11
https://stackoverflow.com/questions/78974383/how-can-i-build-distribute-install-python-packages-with-limited-access-to-pa
At my workplace pip is not able to access the outside world to download packages. I'm not sure what system exactly is preventing this, but ideally I shouldn't be installing any old packages from the internet anyway. The only way I can install packages from online is to download a source distribution or wheel from the PyPI site or from the package's github and run pip install --no-deps <package path> so that pip doesn't attempt to go online for dependencies which would just hang for a long time and then fail. I am developing some tools for my own use, and would like to package and distribute them to other members of my team so that they can be installed from a tarball or wheel, and also not require them to go online and pull more dependencies manually. The tools I'm writing are using the standard library so they don't require any online dependencies. I'm leaning towards disregarding deprecation and using the setuptools versions of the now-removed distutils module, and writing a setup.py file for my package. However, this is deprecated or at least seems heavily discouraged. The setuptools user guide gives instructions for using the build package, but this package has a set of dependencies, and some of those have their own dependencies, and manually installing all of those packages by hand is a serious headache. At the top-level build requires: colorama (this feels like it should be optional) importlib-metadata packaging pyproject_hooks tomli and some of these have their own list of dependencies. Does anyone in a similarly restricted environment have a preferred method for building and installing internal packages? If so, what is your process? In an ideal world I could just install all of the dependencies I need with no problem, but I understand that has security implications and I don't think I'll be able to make any sort of exceptions for pip's access being blocked. What I've Tried I have read documentation for various Python packaging methods, and the simplest I could find that isn't "deprecated" still requires the build package, which as I've said requires a large-enough tree of dependencies that installing them all manually is out of the question and still is in a gray area of acceptability for security reasons. The setuptools documentation recommends against the use of setup.py: setuptools quickstart The latest version of Python officially removes distutils from the standard library and continuing to use setup.py builds with modules from setuptools seems to be discouraged for the most part PEP 0632
In my situation it seems that the most straightforward course of action is to ignore the fact that directly using setup.py is discouraged and do it anyway. With setup.py importing setup from setuptools I'm able to build a package that can be installed with pip by running python setup.py sdist Of the suggestions in the comments this seems to be the only way to build a package that only requires setuptools to be installed, if it isn't already. I would still like a more modern and elegant solution, but that may not be possible due to whatever is restricting pip's access to the internet.
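For reference, a minimal setup.py of the kind described here might look like the following; the name, version, and metadata are placeholders, not the author's actual project:
# Minimal setup.py sketch; all names and values below are placeholders.
from setuptools import setup, find_packages

setup(
    name="my-internal-tools",
    version="0.1.0",
    description="Internal tooling using only the standard library",
    packages=find_packages(),
    python_requires=">=3.8",
)
Running python setup.py sdist then drops a tarball under dist/ that teammates can install with pip install --no-deps <tarball>.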
2
0
78,975,219
2024-9-11
https://stackoverflow.com/questions/78975219/maintaining-order-in-polars-data-frame-after-partition-by
Does polars.DataFrame.partition_by preserves the order of rows within each group? I understand that group_by does, even when maintain_order=False. From documentation: Within each group, the order of rows is always preserved, regardless of this argument. But nothing is mentioned for the partition_by operation. I guess this means the order is not guaranteed to be preserved, but looking for confirmation, since from a few tests I did the resulting dataframes (partitioned) always respected the original order. Here is a the code I used for some toy experiment: df = pl.DataFrame({ "a" : np.arange(100000000), "b": np.random.randint(0,50,100000000) } ) all_dfs = df.partition_by("b", as_dict=True) for key, df in all_dfs.items(): assert df["a"].is_sorted()
partition_by is implemented by just doing a group_by and extracting the groups into separate DataFrames. I see no reason why we would change that, so I think it's safe to assume the order within each group is preserved, at least with the default arguments. I'll see if we can get the docs to match group_by's docs in that regard.
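A small sanity check along the lines of the question's own test, with tiny made-up data:
import polars as pl

df = pl.DataFrame({"a": [0, 1, 2, 3, 4, 5], "b": [1, 0, 1, 0, 1, 0]})

for part in df.partition_by("b"):   # default arguments
    assert part["a"].is_sorted()    # original row order preserved within each group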
2
2
78,975,044
2024-9-11
https://stackoverflow.com/questions/78975044/quickest-way-to-iterate-over-a-pandas-dataframe-and-perform-an-operation-on-a-co
I have a table that is laid out somewhat like this: t linenum many other columns 1234567 0 ... 1234568 0 ... 1234569 0 ... 1234570 1 ... 1234571 1 ... Except it is very, very large. As in, the raw .dat files can get up to 20 gb. I have them converted into .h5 files so they are slightly smaller, but still large (about half the size, I'd say.) I want to add a column that is time within line, so it subtracts the first time value for the line from each time, so I end up having something like this: t linenum time within line 1234567 0 0 1234568 0 1 1234569 0 2 1234570 1 0 1234571 1 1 The thing is, while I know that doing an operation on the whole dataframe at once is much faster, I haven't been able to figure out how to do this without using a for loop, since the number that needs to be subtracted depends on linenum, and it takes ages. (Yesterday, I tested this on a file about 9gb big, and I gave up and went home after it had been processing for half an hour, only to find this morning that my computer had restarted overnight so the jupyter server had to restart and I lost the processed dataframe...) Here is the relevant parts of the code I currently have: import pandas as pd file = [h5 file address] df = pd.read_hdf(file) for linenum in pd.unique(df['linenum']): line_df = df.loc[df['linenum'] == linenum] first_t = int(line_df['t'].iloc[0]) df.loc[df['linenum'] == linenum, 't_adjusted'] = (df.loc[df['linenum'] == linenum, 't'] - first_t) Is there any way to do this without a for loop, and if not, is there any way to make it faster? I'm trying to graph one of the other columns using matplotlib.pyplot.tricontourf, with linenum on the x axis and time within line on the y axis, if that's relevant at all. There is another column I can use as a workaround because it's approximately proportional to time within line but I'd prefer to find a way to use the time. Thank you! Edit: Also, if it's relevant, I am using Python 3.7. For some reason some of the computers my programs have to run on at my work are still on Windows 7 so I can't update...
You can use a groupby on 'linenum' and then transform to populate each group df['timewithinline'] = df.groupby('linenum')['t'].transform(lambda x: x - min(x)) If the times are already sorted, you can use: df['timewithinline'] = df.groupby('linenum')['t'].transform(lambda x: x - x.iloc[0])
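Since the table is very large, it may also be worth avoiding the Python-level lambda entirely; pandas' built-in reductions are computed per group without calling back into Python and, assuming the first row of each linenum is also its earliest time, give the same result:
# Built-in reductions avoid calling a Python lambda once per group.
df['timewithinline'] = df['t'] - df.groupby('linenum')['t'].transform('min')
# or, if rows are already ordered within each linenum:
df['timewithinline'] = df['t'] - df.groupby('linenum')['t'].transform('first')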
3
3
78,974,451
2024-9-11
https://stackoverflow.com/questions/78974451/how-to-map-scores-from-one-table-to-another-when-the-cell-contains-operators
I performed OLS regression on a dataset and I have the predicted Diagnostic_Score but the mapping table (norms) can have two operators - e.g. >= and <=. Is there a way to map the predicted score to the percentile? My first thought was to map the scores that I can match and the ones that do not match I know must to be associated with a percentile that has an operator in the Diagnostic_Score column. I could then use numpy.select and create the conditions and choices. Does that approach make sense or is there an easier way to map the test percentile to the predicted score without having to manually create a 108 conditions and 108 choices for numpy.select? Here is a sample import pandas as pd d = {'percentile': {0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1, 7: 1, 8: 1, 9: 2, 10: 2, 11: 2, 12: 2, 13: 2, 14: 2, 15: 2, 16: 2, 17: 2}, 'Subject': {0: 'Math', 1: 'Math', 2: 'Math', 3: 'Math', 4: 'Math', 5: 'Math', 6: 'Math', 7: 'Math', 8: 'Math', 9: 'Math', 10: 'Math', 11: 'Math', 12: 'Math', 13: 'Math', 14: 'Math', 15: 'Math', 16: 'Math', 17: 'Math'}, 'Term': {0: 'Fall', 1: 'Fall', 2: 'Fall', 3: 'Fall', 4: 'Fall', 5: 'Fall', 6: 'Fall', 7: 'Fall', 8: 'Fall', 9: 'Fall', 10: 'Fall', 11: 'Fall', 12: 'Fall', 13: 'Fall', 14: 'Fall', 15: 'Fall', 16: 'Fall', 17: 'Fall'}, 'Grade_Level': {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8, 9: 0, 10: 1, 11: 2, 12: 3, 13: 4, 14: 5, 15: 6, 16: 7, 17: 8}, 'Diagnostic_Score': {0: '<=296', 1: '<=322', 2: '<=352', 3: '<=372', 4: '<=390', 5: '<=405', 6: '<=412', 7: '<=423', 8: '<=434', 9: '297', 10: '323', 11: '353', 12: '373', 13: '391', 14: '406', 15: '413', 16: '424', 17: '435'}} norms = pd.DataFrame(d) df = pd.DataFrame({'Term': ['Fall', 'Fall', 'Fall', 'Fall'], 'Subject': ['Math', 'Math', 'Math', 'Math'], 'Grade_Level': [0, 3, 5, 7], 'Predicted Score': [290, 300, 406, 424]}) My expected output is Term Subject Grade_Level Predicted Score Percentile 0 Fall Math 0 290 1 1 Fall Math 3 300 1 2 Fall Math 5 406 2 3 Fall Math 7 424 2 norms table percentile Subject Term Grade_Level Diagnostic_Score 0 1 Math Fall 0 <=296 1 1 Math Fall 1 <=322 2 1 Math Fall 2 <=352 3 1 Math Fall 3 <=372 4 1 Math Fall 4 <=390 5 1 Math Fall 5 <=405 6 1 Math Fall 6 <=412 7 1 Math Fall 7 <=423 8 1 Math Fall 8 <=434 9 2 Math Fall 0 297 10 2 Math Fall 1 323 11 2 Math Fall 2 353 12 2 Math Fall 3 373 13 2 Math Fall 4 391 14 2 Math Fall 5 406 15 2 Math Fall 6 413 16 2 Math Fall 7 424 17 2 Math Fall 8 435 ... 99 Math Spring 8 >=585
You can extract the prefix and perform a merge and a merge_asof: # add group/score norms[['group', 'Predicted Score']] = ( norms['Diagnostic_Score'] .astype(str) .str.extract(r'([<>]=|)(\d+)') .astype({1: 'int'}) ) # ensure scores are sorted (for the merge_asof) norms.sort_values(by='Predicted Score', inplace=True) # define groups low = norms['group'].eq('<=') high = norms['group'].eq('>=') exact = ~(low|high) # merge s_ex = df.merge(norms[exact], on=['Subject', 'Term', 'Predicted Score'], how='left')['percentile'] s_lo = pd.merge_asof(df.reset_index().sort_values(by='Predicted Score'), norms[low], by=['Subject', 'Term'], on=['Predicted Score'], direction='forward', ).set_index('index')['percentile'] s_hi = pd.merge_asof(df.reset_index().sort_values(by='Predicted Score'), norms[high], by=['Subject', 'Term'], on=['Predicted Score'], direction='backward' ).set_index('index')['percentile'] # combine df['Percentile'] = s_ex.fillna(s_lo).fillna(s_hi).astype(norms['percentile'].dtype) You can actually simplify to two merge_asof: low = norms['group'].ne('>=') s_lo = pd.merge_asof(df.reset_index().sort_values(by='Predicted Score'), norms[low], by=['Subject', 'Term'], on=['Predicted Score'], direction='forward', ).set_index('index')['percentile'] s_hi = pd.merge_asof(df.reset_index().sort_values(by='Predicted Score'), norms[high], by=['Subject', 'Term'], on=['Predicted Score'], direction='backward' ).set_index('index')['percentile'] df['Percentile'] = s_lo.fillna(s_hi).astype(norms['percentile'].dtype) Output: Term Subject Grade_Level Predicted Score Percentile 0 Fall Math 0 290 1 1 Fall Math 3 300 1 2 Fall Math 5 406 2 3 Fall Math 7 424 2
2
2
78,973,393
2024-9-11
https://stackoverflow.com/questions/78973393/pandas-rename-function-not-working-within-jupyter-notebook
I have a pandas dataframe ('df3') which columns I would like to rename.It is a subset of another dataframe, and I'm working in a jupyter notebook. Getting all infos from the dataframe structure: df3.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 29 entries, 104 to 132 Data columns (total 15 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 ZΓ€hlung in 28 non-null object 1 Unnamed: 17 29 non-null object 2 Unnamed: 18 29 non-null object 3 Unnamed: 19 29 non-null object 4 Unnamed: 20 29 non-null object 5 Unnamed: 21 29 non-null object 6 Unnamed: 40 29 non-null object 7 Unnamed: 41 29 non-null object 8 Unnamed: 42 29 non-null object 9 Unnamed: 43 28 non-null object 10 Unnamed: 44 28 non-null object 11 Unnamed: 63 29 non-null object 12 Unnamed: 64 29 non-null object 13 Unnamed: 65 29 non-null object 14 Unnamed: 66 28 non-null object dtypes: object(15) memory usage: 3.5+ KB The columns need to be renamed by their index, so I create a dict with the index as keys and the new names as values: col_names= {0:'Zeit', 1:'ZU_PKW', 2: 'ZU_LKW1', 3: 'ZU_LKW2', 4: 'ZU_SV', 5: 'ZU_SV%', 6:'AB_PKW', 7: 'AB_LKW1', 8: 'AB_LKW2', 9: 'AB_SV', 10: 'AB_SV%', 11:'ALL_PKW', 12: 'ALL_LKW1', 13: 'ALL_LKW2', 14: 'ALL_SV' } I tried to rename the columns with all options using the 'rename'-function: 1. Create a new dataframe with renamed columns without using inplace=True df4 = df3.rename(index=col_names) Result: no error, but the columns are not renamed 2. Create a new dataframe with renamed columns by using inplace=True df4 = df3.rename(index={col_names, inplace = True) Result: Error: A value is trying to be set on a copy of a slice from a DataFrame And the resulting df is None. 3. Change the dataframe directly by using inplace=True df3.rename(index={col_names, inplace = True) Result: `Error: A value is trying to be set on a copy of a slice from a DataFrame and the df3 stays unchanged. 4. Specifying also axis=1 df3.rename(index={col_names, inplace = True) Returns TypeError: Cannot specify both 'axis' and any of 'index' or 'columns' I feel I'm running out of options. What am I doing wrong? Could it have something to do with working in the jupyter notebook environment?
If you need to rename columns by their positional index, set the values with a list comprehension: df.columns = [col_names.get(i, i) for i in range(len(df.columns))] Or: df.columns = pd.Series(range(len(df.columns))).replace(col_names) If you need to set the first N column names from the dictionary's values: df.columns = list(col_names.values())[:len(df.columns)] But if you really need rename, first change the column labels to a RangeIndex: df = df.set_axis(range(len(df.columns)), axis=1).rename(columns=col_names)
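A quick, self-contained check of the set_axis + rename route above — a toy frame with 'Unnamed' headers standing in for the real df3, so the column names here are made up:

import pandas as pd

df = pd.DataFrame([[1, 2, 3]], columns=["Zählung in", "Unnamed: 17", "Unnamed: 18"])
col_names = {0: "Zeit", 1: "ZU_PKW", 2: "ZU_LKW1"}

# switch to positional labels first, then rename by those positions
df = df.set_axis(range(len(df.columns)), axis=1).rename(columns=col_names)
print(df.columns.tolist())  # ['Zeit', 'ZU_PKW', 'ZU_LKW1']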
2
2
78,972,997
2024-9-11
https://stackoverflow.com/questions/78972997/how-to-use-threading-in-python-in-an-unblocking-way
I have a complete "working" python code which is supposed to contain two threads which run simultaneously, which populate some lists in a dict, and when the user presses CRTL-C these two threads should be stopped and some output from both threads should be written to a file: import sys import time import threading import signal from functools import partial messages = {} lock = threading.Lock() class Handler: def __init__(self, port): self.port = port def run(self): while True: time.sleep(1) with lock: messages[self.port].append(time.time()) def signal_handler(filename, sig, frame): with lock: with open(filename, "w") as fileout: json.dump(messages, fileout) sys.exit(0) output = "test.out" signal.signal(signal.SIGINT, partial(signal_handler, output)) for port in [1,2]: messages[port] = [] handler = Handler(port) print("debug1") t = threading.Thread(target=handler.run()) print("debug2") t.daemon = True t.start() threads.append(t) # Keep the main thread running, waiting for CTRL-C try: while True: pass except KeyboardInterrupt: signal_handler(output, signal.SIGINT, None) However, this code blocks execution after the first debug1 has been printed. How to "unblock" this line so the two threads are started until the user presses CRTL-C (and the output is saved to a file)? ...The above code is just a template of a more complicated code that actually does something useful...
Apart from a couple of typos (the forgotten import json and threads = []), the main problem is in t = threading.Thread(target=handler.run()). The target parameter expects a callable, not the result of calling that callable. As written, the code immediately calls the never-ending run() method and waits for a result that never arrives, which blocks the main program. The fix is simply to remove the parentheses: t = threading.Thread(target=handler.run)
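To see the difference in isolation, here is a minimal, self-contained sketch that is independent of the question's Handler class:

import threading
import time

def worker():
    # stand-in for a long-running loop
    for _ in range(3):
        time.sleep(0.1)

# Correct: pass the callable itself; the thread runs it in the background.
t = threading.Thread(target=worker, daemon=True)
t.start()
print("main thread keeps running while worker sleeps")
t.join()

# Wrong: worker() is executed right here in the main thread, *before* the Thread
# object is even constructed, and its return value (None) becomes the target.
# t = threading.Thread(target=worker())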
2
2
78,972,018
2024-9-11
https://stackoverflow.com/questions/78972018/polars-replacing-values-of-other-groups-to-the-values-of-a-certain-group
I have the following Polars.DataFrame: df = pl.DataFrame( { "timestamp": [1, 2, 3, 1, 2, 3], "var1": [1, 2, 3, 3, 4, 5], "group": ["a", "a", "a", "b", "b", "b"], } ) print(df) out: shape: (6, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ timestamp ┆ var1 ┆ group β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ══════β•ͺ═══════║ β”‚ 1 ┆ 1 ┆ a β”‚ β”‚ 2 ┆ 2 ┆ a β”‚ β”‚ 3 ┆ 3 ┆ a β”‚ β”‚ 1 ┆ 3 ┆ b β”‚ β”‚ 2 ┆ 4 ┆ b β”‚ β”‚ 3 ┆ 5 ┆ b β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ I want to replace the values of group b with the values of group a that are having the same timestamps. Desired output: shape: (6, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ timestamp ┆ var1 ┆ group β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ══════β•ͺ═══════║ β”‚ 1 ┆ 1 ┆ a β”‚ β”‚ 2 ┆ 2 ┆ a β”‚ β”‚ 3 ┆ 3 ┆ a β”‚ β”‚ 1 ┆ 1 ┆ b β”‚ β”‚ 2 ┆ 2 ┆ b β”‚ β”‚ 3 ┆ 3 ┆ b β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ I have the current solution with generating a helper df: def group_value_replacer( df: pl.DataFrame, target_group_col: str, target_var: str, target_group: str, ): helper_df = df.filter(pl.col(target_group_col) == target_group) df = df.drop(target_var).join( helper_df.drop(target_group_col), on=["timestamp"], how="left", ) return df group_value_replacer(df, "group", "var1", "a") out: shape: (6, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ timestamp ┆ group ┆ var1 β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════β•ͺ══════║ β”‚ 1 ┆ a ┆ 1 β”‚ β”‚ 2 ┆ a ┆ 2 β”‚ β”‚ 3 ┆ a ┆ 3 β”‚ β”‚ 1 ┆ b ┆ 1 β”‚ β”‚ 2 ┆ b ┆ 2 β”‚ β”‚ 3 ┆ b ┆ 3 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ I want to improve the solution by using Polars.Expr: For example, is there a way for me to achieve the same operation using expressions like df.with_columns(pl.col(target_var).operationxx).
I think for generic solution your approach with join works fine, you could probably try something like this as well: filter() to filter var1 column to leave only values where group == a first() to get the value. over() to limit it to certain timestamp. coalesce() to fallback to actual value if value for group == a doesn't exist. df.with_columns( pl.coalesce( pl.col.var1.filter(pl.col.group == "a").first().over("timestamp"), pl.col.var1 ) ) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ timestamp ┆ var1 ┆ group β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ══════β•ͺ═══════║ β”‚ 1 ┆ 1 ┆ a β”‚ β”‚ 2 ┆ 2 ┆ a β”‚ β”‚ 3 ┆ 3 ┆ a β”‚ β”‚ 1 ┆ 1 ┆ b β”‚ β”‚ 2 ┆ 2 ┆ b β”‚ β”‚ 3 ┆ 3 ┆ b β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ You can also skip coalesce() if you don't need a fallback. df.with_columns( pl.col.var1.filter(pl.col.group == "a").first().over("timestamp") )
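For reuse across groups, this expression can be wrapped in a small helper — a sketch only, with the question's column names as defaults and function/parameter names of my own:

import polars as pl

def broadcast_group_values(df: pl.DataFrame, source_group: str,
                           group_col: str = "group",
                           value_col: str = "var1",
                           key_col: str = "timestamp") -> pl.DataFrame:
    # take the source group's value per key, falling back to the row's own value
    return df.with_columns(
        pl.coalesce(
            pl.col(value_col).filter(pl.col(group_col) == source_group).first().over(key_col),
            pl.col(value_col),
        ).alias(value_col)
    )

df = pl.DataFrame({
    "timestamp": [1, 2, 3, 1, 2, 3],
    "var1": [1, 2, 3, 3, 4, 5],
    "group": ["a", "a", "a", "b", "b", "b"],
})
print(broadcast_group_values(df, "a"))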
2
1
78,971,919
2024-9-11
https://stackoverflow.com/questions/78971919/error-when-executing-regular-expression-with-python
the file "catalogo_no_linebreak.txt" contains a list of products, grouped in one line, i'am using the regular expression (re.split()) to try Retrieve the record of each product and logo after saving to the "saida.txt" file. The txt_format() function is where I process and use the regular expression "result = re.split(r',00', arq.readline())". The way I'm using the regular expression with product records that are returning incomplete, we can see it highlighted in red. How can I retrieve the content of each product that is in the pattern "0453. [content] ,00"? saida.txt 0453.000045-00453.213.00022733-0UMA ALIANÇA, DE: OURO, OURO BRANCO; CONSTAM: amolgada(s), PESOLOTE: 4,40G (QUATRO GRAMAS E QUARENTA CENTIGRAMAS)R$ 652,00 0453.000047-70453.213.00022959-6DUAS ALIANÇASCOM 3 ELOS CADA, DE: OURO; CONSTAM: amolgada(s),inscriçáes, PESO LOTE: 4 G (QUATRO GRAMAS)R$ 529,00 0453.000053-10453.213.00023492-1QUATRO ANÉIS, DOIS BRINCOS, UMA PULSEIRA, DE: OURO, OURO BAIXO,2,93G; CONTÉM: diamantes, massa, pedra branca, pedras; CONSTAM:amolgada(s), PESO LOTE: 31,98G (TRINTA E UM GRAMAS E NOVENTA E OITOCENTIGRAMAS)R$ 2.196,00 output saida.txt product records return "broken" catalogo_no_linebreak.txt 0453.000001-90453.213.00000175-7DUAS ALIANÇAS, SEIS ANÉIS, DUAS PULSEIRAS, DE: OURO BRANCO, OURO;CONTÉM: diamantes, pedras, pérola cultivada; CONSTAM: amolgada(s),inscriçáes, PESO LOTE: 17,99G (DEZESSETE GRAMAS E NOVENTA E NOVECENTIGRAMAS)0453.000002-70453.213.00000571-0UM ANEL, DOIS BRINCOS, UM COLAR, UM PENDENTE, DE: OURO; CONTÉM:pérola cultivada, PESO LOTE: 4,82G (QUATRO GRAMAS E OITENTA E DOISCENTIGRAMAS)R$ 623,000453.000005-10453.213.00001496-4UM ALFINETE, TRES ANÉIS, DOIS BRINCOS, QUATRO COLARES, UMPENDENTE, DUAS PULSEIRAS, DE: OURO BRANCO, OURO; CONTÉM: coral,pérola cultivada, diamantes, 1 D BRI KL VS APROX 0,25CT E 1 D LAP BRASIL KLVS APROX 0,40CT CC, PESO LOTE: 81,70G (OITENTA E UM GRAMAS ESETENTA CENTIGRAMAS)R$ 10.587,00 convertPdfToTxt.py from PyPDF2 import PdfReader import re import os.path pdf_reader = PdfReader("catalogo.pdf") parts = [] def set_number_of_pages(): total_pages = len(pdf_reader.pages) valid_pages = total_pages - 2 return valid_pages def get_number_of_pages(): return set_number_of_pages() def visitor_body(text, cm, tm, fontDict, fontSize): y = tm[5] if y > 0 and y < 750: parts.append(text) def txt_save(): numberOfpages = get_number_of_pages() for i in range(1, numberOfpages): page = pdf_reader.pages[i] page.extract_text(visitor_text=visitor_body) text_body = "".join(parts) with open("catalogo.txt", mode='a+', encoding='utf-8') as file: file.write(text_body + "\n") def remove_line_break(): file = open("catalogo.txt", mode="r", encoding="utf-8") for line in file.readlines(): a = line.rstrip('\n') with open("catalogo_no_linebreak.txt", mode='a+', encoding='utf-8') as arq: arq.write('{}'.format(a)) file.close() def txt_format(): with open('catalogo_no_linebreak.txt', mode='r', encoding='utf-8') as arq: result = re.split(r',00', arq.readline()) for item in result: with open('saida.txt', mode='a+', encoding='utf-8') as arq: arq.write(item + '\n') def delete_files(): file_catalog = os.path.isfile('catalogo.txt') file_catalog_no_linebreak = os.path.isfile('catalogo_no_linebreak.txt') if file_catalog: os.remove('catalogo.txt') if file_catalog_no_linebreak: os.remove('catalogo_no_linebreak.txt') def convert_to_txt(): txt_save() remove_line_break() txt_format() def start(): if pdf_reader: print('Encontrou o catalogo!') convert_to_txt() delete_files() start()
This code assumes that each product line starts with an ID of the same format. then separate the text using the ID and print the ID and Content without line breaks Solution import re content = """ 0453.000045-00453.213.00022733-0UMA ALIANÇA, DE: OURO, OURO BRANCO; CONSTAM: amolgada(s), PESOLOTE: 4,40G (QUATRO GRAMAS E QUARENTA CENTIGRAMAS)R$ 652,00 0453.000047-70453.213.00022959-6DUAS ALIANÇASCOM 3 ELOS CADA, DE: OURO; CONSTAM: amolgada(s),inscriçáes, PESO LOTE: 4 G (QUATRO GRAMAS)R$ 529,00 0453.000053-10453.213.00023492-1QUATRO ANÉIS, DOIS BRINCOS, UMA PULSEIRA, DE: OURO, OURO BAIXO,2,93G; CONTÉM: diamantes, massa, pedra branca, pedras; CONSTAM:amolgada(s), PESO LOTE: 31,98G (TRINTA E UM GRAMAS E NOVENTA E OITOCENTIGRAMAS)R$ 2.196,00 """ pattern_base=r'(\d{4}\.\d{6}-\d{5}\.\d{3}\.\d{8}-\w+)' result = re.split(pattern_base,content) for section in result: if section != "\n": if re.match(pattern_base, section): print('head: '+ section) else : section = section.replace("\n", " ") section = section.replace(" ", " ") section = section.strip() print('content: '+section) Output head: 0453.000045-00453.213.00022733-0UMA content: ALIANÇA, DE: OURO, OURO BRANCO; CONSTAM: amolgada(s), PESOLOTE: 4,40G (QUATRO GRAMAS E QUARENTA CENTIGRAMAS)R$ 652,00 head: 0453.000047-70453.213.00022959-6DUAS content: ALIANÇASCOM 3 ELOS CADA, DE: OURO; CONSTAM: amolgada(s),inscriçáes, PESO LOTE: 4 G (QUATRO GRAMAS)R$ 529,00 head: 0453.000053-10453.213.00023492-1QUATRO content: ANÉIS, DOIS BRINCOS, UMA PULSEIRA, DE: OURO, OURO BAIXO,2,93G; CONTÉM: diamantes, massa, pedra branca, pedras; CONSTAM:amolgada(s), PESO LOTE: 31,98G (TRINTA E UM GRAMAS E NOVENTA E OITOCENTIGRAMAS)R$ 2.196,00
2
3
78,971,681
2024-9-11
https://stackoverflow.com/questions/78971681/how-can-i-import-polars-type-definitions-like-joinstrategy
JoinStrategy is an input to join: https://docs.pola.rs/api/python/stable/reference/dataframe/api/polars.DataFrame.join.html My static type checking tool seems to be able to get a hold of JoinStrategy, but I'm not sure how/from where. Usually, type stub packages are available on PyPI, but nothing obvious stands out in this case: https://pypi.org/user/ritchie46/ How do I import JoinStrategy (or other type definitions provided by Polars) for my own use?
Types can be provided in one of two ways: by a separate type-stub package, or by the package itself. In this case it's provided by the package itself. If you read the source code, you can find where the type comes from. Example: from polars._typing import JoinStrategy Note that _typing denotes that this is part of the Polars private API, and is subject to change between releases. You can also print out the value of JoinStrategy: >>> JoinStrategy typing.Literal['inner', 'left', 'right', 'full', 'semi', 'anti', 'cross', 'outer'] You can also use this as a type definition. import typing JoinStrategy = typing.Literal['inner', 'left', 'right', 'full', 'semi', 'anti', 'cross', 'outer'] This has the advantage of not using the private API, but the disadvantage that if Polars adds a new JoinStrategy, your code won't automatically allow that in this type.
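As an example of putting the self-defined Literal to work, a small wrapper of my own (the function name and signature are made up for illustration) gets the static checking without touching the private module:

from typing import Literal

import polars as pl

JoinStrategy = Literal["inner", "left", "right", "full", "semi", "anti", "cross", "outer"]

def join_frames(left: pl.DataFrame, right: pl.DataFrame, on: str,
                how: JoinStrategy = "inner") -> pl.DataFrame:
    # a type checker now rejects e.g. how="lefty" at this call boundary
    return left.join(right, on=on, how=how)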
4
4
78,957,805
2024-9-6
https://stackoverflow.com/questions/78957805/packing-python-wheel-with-pybind11-using-bazel
I am trying to generate a wheel file using bazel, for a target that has pybind dependencies. The package by itself works fine (though testing), but when I'm packing it, the .so file is missing from the site_packges folder. This is my build file: load("@pybind11_bazel//:build_defs.bzl", "pybind_extension") load("@python_pip_deps//:requirements.bzl", "requirement") load("@rules_python//python:defs.bzl", "py_library", "py_test") load("@rules_python//python:packaging.bzl", "py_wheel", "py_package") # wrapper for so file py_library( name = "example", srcs = ["example.py","__init__.py"], deps = [ ], data = [":pyexample_inf"], imports = ["."], ) # compile pybind cpp pybind_extension( name = "pyexample_inf", srcs = ["pyexample_inf.cpp"], deps = [], linkstatic = True, ) # test wrapper py_test( name = "pyexample_test", srcs = ["tests/pyexample_test.py"], deps = [ ":example", ], ) # Use py_package to collect all transitive dependencies of a target, # selecting just the files within a specific python package. py_package( name = "pyexample_pkg", visibility = ["//visibility:private"], # Only include these Python packages. deps = [":example"], ) # using pip, this copies the files to the site_packges, but not the so file py_wheel( name = "wheel", abi = "cp311", author = "me", distribution = "example", license = "Apache 2.0", platform = select({ "@bazel_tools//src/conditions:linux_x86_64": "linux_x86_64", }), python_requires = ">=3.9.0", python_tag = "cpython", version = "0.0.1", deps = [":example"], ) How can I make the py_wheel copy the so file?
Try deps = [":example.so"] so that the compiled shared object is part of what py_wheel packages.
3
1
78,964,057
2024-9-9
https://stackoverflow.com/questions/78964057/can-i-perform-a-bit-wise-group-by-and-aggregation-with-polars-or
Let's say I have an auth field that use bit flags to indicate permissions (example bit-0 means add and bit-1 means delete). How do I bitwise-OR them together? import polars as pl df_in = pl.DataFrame( { "k": ["a", "a", "b", "b", "c"], "auth": [1, 3, 1, 0, 0], } ) The dataframe: df_in: shape: (5, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ k ┆ auth β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ══════║ β”‚ a ┆ 1 β”‚ β”‚ a ┆ 3 β”‚ β”‚ b ┆ 1 β”‚ β”‚ b ┆ 0 β”‚ β”‚ c ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ When I group by and sum, things look good, I sum the auth by k dfsum = df_in.group_by("k").agg(pl.col("auth").sum()) dfsum: shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ k ┆ auth β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ══════║ β”‚ a ┆ 4 β”‚ β”‚ b ┆ 1 β”‚ β”‚ c ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ So, it looks as if I am using group_by and agg correctly, when using sum. Not so good when using or_. dfor = df_in.group_by("k").agg(pl.col("auth").or_()) gives dfor: shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ k ┆ auth β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ list[i64] β”‚ β•žβ•β•β•β•β•β•ͺ═══════════║ β”‚ a ┆ [1, 3] β”‚ β”‚ b ┆ [1, 0] β”‚ β”‚ c ┆ [0] β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Expectations: for the or_ I was expecting this result instead: df_wanted_or: shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ k ┆ auth β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ══════║ β”‚ a ┆ 3 β”‚ β”‚ b ┆ 1 β”‚ β”‚ c ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ Now, I did find a workaround by using map_batches to call a Python function. Very simple something like functools.reduce(lambda x,y: x|y) but how do I do this without leaving Polars?
Update. Bitwise aggregation was implemented in version 1.9.0. So now you can use pl.Expr.bitwise_or(): ( df_in .group_by("k", maintain_order=True) .agg(pl.col.auth.bitwise_or()) ) shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ k ┆ auth β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ══════║ β”‚ a ┆ 3 β”‚ β”‚ b ┆ 1 β”‚ β”‚ c ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ Previous answer. Bitwise aggregation is not yet implemented in polars - issue. There're a few ways you could approach it though: 1. Pure polars solution. unique() - not strictly necessary, but can reduce size of the aggregated lists. list.to_struct() to convert aggregated data to Struct. .reduce() to apply bitwise or operator. field() to access all the fields of the Struct within reduce context. ( df_in .group_by("k") .agg(pl.col.auth.unique()) .with_columns(pl.col.auth.list.to_struct()) .with_columns( auth = pl.reduce( lambda acc, x: acc | x, exprs = pl.col.auth.struct.field("*") ).fill_null(0) ) ) shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ k ┆ auth β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ══════║ β”‚ b ┆ 1 β”‚ β”‚ a ┆ 3 β”‚ β”‚ c ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ 2. DuckDB integration with Polars. You can use DuckDB integration with Polars and bit_or(); duckdb.sql(""" select k, bit_or(auth) as auth from df_in group by k """).pl() shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ k ┆ auth β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ══════║ β”‚ a ┆ 3 β”‚ β”‚ b ┆ 1 β”‚ β”‚ c ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ 3. Polars integration with NumPy Another possible way to do that would be to use Polars integration with NumPy. First, use pure polars to aggregate auth columns to lists and convert them to arrays. df_agg = df_in.group_by("k").agg("auth") w = df_agg["auth"].list.len().max() df_agg = ( df_agg .with_columns( pl.col.auth.list.concat( pl.lit(0).repeat_by(w - pl.col.auth.list.len()) ) ).with_columns(pl.col.auth.list.to_array(w)) ) shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ k ┆ auth β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ array[i64, 2] β”‚ β•žβ•β•β•β•β•β•ͺ═══════════════║ β”‚ b ┆ [1, 0] β”‚ β”‚ a ┆ [1, 3] β”‚ β”‚ c ┆ [0, 0] β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Now we can get auth column as Series, convert it to 2d numpy array with to_numpy() and use np.bitwise_or and reduce(): ( df_agg .with_columns( auth = np.bitwise_or.reduce(df_agg["auth"].to_numpy(), axis=1) ) ) shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ k ┆ auth β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ══════║ β”‚ b ┆ 1 β”‚ β”‚ a ┆ 3 β”‚ β”‚ c ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜
3
2
78,960,111
2024-9-7
https://stackoverflow.com/questions/78960111/singular-matrix-during-b-spline-interpolation
According to the literature about B Splines, including Wolfram Mathworld, the condition for Cox de Boor's recursive function states that: In python, this would translate to: if (d_ == 0): if ( knots_[k_] <= t_ < knots_[k_+1]): return 1.0 return 0.0 where: d_: degree of the curve knots_: knot vector k_: index of the knot t_: parameter value {0.0,...,1.0} (reparametrized) However, this seems to generate a Singular matrix, when creating the linear system intended for interpolation, not approximation. For example, with 4 points: A = [[1. 0. 0. 0. ] [0.2962963 0.44444444 0.22222222 0.03703704] [0.03703704 0.22222222 0.44444444 0.2962963 ] [0. 0. 0. **0.** ]] //The last element (bottom-right) should have been 1.0 # Error: LinAlgError: file C:\Users\comevo\AppData\Roaming\Python\Python310\site-packages\numpy\linalg\_linalg.py line 104: Singular matrix If I change the second part of the condition to: if (d_ == 0): if ( knots_[k_] <= t_ <= knots_[k_+1]): // using <= instead of < return 1.0 return 0.0 I get the correct matrix and the correct spline. A = [[1. 0. 0. 0. ] [0.2962963 0.44444444 0.22222222 0.03703704] [0.03703704 0.22222222 0.44444444 0.2962963 ] [0. 0. 0. 1. ]] // OK Why does the code need to deviate from the mathematical condition in order to get the correct results and the iterator reaching the last element? See below the complete example code: import numpy as np import math from geomdl import knotvector def cox_de_boor( d_, t_, k_, knots_): if (d_ == 0): if ( knots_[k_] <= t_ <= knots_[k_+1]): return 1.0 return 0.0 denom_l = (knots_[k_+d_] - knots_[k_]) left = 0.0 if (denom_l != 0.0): left = ((t_ - knots_[k_]) / denom_l) * cox_de_boor(d_-1, t_, k_, knots_) denom_r = (knots_[k_+d_+1] - knots_[k_+1]) right = 0.0 if (denom_r != 0.0): right = ((knots_[k_+d_+1] - t_) / denom_r) * cox_de_boor(d_-1, t_, k_+1, knots_) return left + right def interpolate( d_, P_, n_, ts_, knots_ ): A = np.zeros((n_, n_)) for i in range(n_): for j in range(n_): A[i, j] = cox_de_boor(d_, ts_[i], j, knots_) control_points = np.linalg.solve(A, P_) return control_points def create_B_spline( d_, P_, t_, knots_): sum = MVector() for i in range( len(P_) ): sum += P_[i] * cox_de_boor(d_, t_, i, knots_) return sum def B_spline( points_ ): d = 3 P = np.array( points_ ) n = len( P ) ts = np.linspace( 0.0, 1.0, n ) knots = knotvector.generate( d, n ) # len = n + d + 1 control_points = interpolate( d, P, n, ts, knots) crv_pnts = [] for i in range(10): t = float(i) / 9 crv_pnts.append( create_B_spline(d, control_points, t, knots) ) return crv_pnts control_points = [ [float(i), math.sin(i), 0.0] for i in range(8) ] cps = B_spline( control_points ) Result:
The mathematical condition is the correct one, B-Spline basis functions are defined on a half-open interval. However, it presents a problem when the knot vector is clamped, as you show in your example. The problem is that in this case the mathematical function isn't defined at t=1, and the result in the code is that it evaluates to 0 as you showed. What you would like to have in this case, is for the function to be evaluated to 1 (which is the limit at t=1). One way to achieve this is to apply your fix of using <= instead of <. The figures below show a plot of the resulting basis function for n=5 (for which your default knot vector is [0,0,0,0,0.5,1,1,1,1]) using your fix and using the original mathematical definition. The functions were sampled at t = linspace(0,1,101). As can be seen, both implementations result in similar basis functions in "almost all" values. However, while the <= fix sets the value at t=1 to 1 as we wanted, it clearly ruins the functions at the inner knot value t=0.5. The original implementation, on the other hand, ruins the function at the end knot value t=1 as we already know, but only there and only for the last function. Basically, one can say that the basis functions implemented with the <=-fix are "almost-correct" except for inner-knot values. Here is another example figure to demonstrate the phenomeneon for n=7. So, now that we know what the problem is and what we want the result to be, one can think of different ways to fix it. The following code is an example of such a fix, it wraps the recursive definition with a function that checks for (d+1)-multiplicity knots and sets it to 1 (a clamped knot vector is the most common use-case of this). def cox_de_boor(d_, t_, k_, knots_): # Handling end-parameter of clamped knot vector (and also the not-so-useful case of (d+1)-multiplicity inner knots) if t_ == knots_[k_ + 1] and t_ == knots_[k_ + d_ + 1]: return 1.0 return cox_de_boor_recursive(d_, t_, k_, knots_) def cox_de_boor_recursive(d_, t_, k_, knots_): if (d_ == 0): if (knots_[k_] <= t_ < knots_[k_ + 1]): return 1.0 return 0.0 denom_l = (knots_[k_ + d_] - knots_[k_]) left = 0.0 if (denom_l != 0.0): left = ((t_ - knots_[k_]) / denom_l) * cox_de_boor_recursive(d_ - 1, t_, k_, knots_) denom_r = (knots_[k_ + d_ + 1] - knots_[k_ + 1]) right = 0.0 if (denom_r != 0.0): right = ((knots_[k_ + d_ + 1] - t_) / denom_r) * cox_de_boor_recursive(d_ - 1, t_, k_ + 1, knots_) return left + right And here is the result we get when using this function on the example from above.
2
2
78,970,312
2024-9-10
https://stackoverflow.com/questions/78970312/supabase-python-client-returns-an-empty-list-when-making-a-query
My configuration is very basic. A simple Supabase database with one table. I use supabase-py to interact with it. The problem is that I always get an empty list: from supabase import create_client URL = "MY_URL_HERE" API_KEY = "MY_API_KEY_HERE" supabase = create_client(URL, API_KEY) response = supabase.table("prod_vxf").select("*").execute() print(response.data) # [] After checking some similar topics like this one, it seems that the only solution is turning off RLS. So I went to the dashboard and turned off RLS for the table prod_vxf and it worked. Now, the code above gives a non-empty list: print(response.data) [ {"id": 1, "created_at": "2024-01-01T00:00:00+00:00"}, {"id": 2, "created_at": "2024-01-02T00:00:00+00:00"}, {"id": 3, "created_at": "2024-01-03T00:00:00+00:00"}, ] But what is very confusing is the warning below that hits my screen when I try to turn off RLS for a given table in the Supabase dashboard. Does it mean that anyone on the internet (even without knowing the URL + API key) can access (read and write) my database and its tables? Honestly, I'm super confused by the term "publicly" used by the warning.
Does it mean that anyone on the internet (even without knowing url + api key) can access (read and write) my database and its tables? The "publicly" in that warning refers to the database role PUBLIC: The special "role" name PUBLIC can be used to grant a privilege to every role on the system. That is every role inside that database, not the general public on the internet. That change does not suddenly peel away all security from your entire database cluster. The answer to your question is a hard no. Everyone still needs to know the connection string to get in, or the URL plus an API key to perform operations indirectly. Anyone that does get in either way still needs to do so as a user that has the privileges to access the table. Only then can they CRUD records in that table freely, and since RLS is a per-table setting, disabling it only affects that one table. You can check the others to confirm they are still protected. The Auth policies the warning also mentions won't work, because they rely on RLS. If it's just you and your friends working on the app, this doesn't matter much. If you plan to make the app available to a broader audience, make sure you read up on this and other security features and keep them in mind. Later on it can be difficult and laborious to redesign everything if it's built as a ring-0-only, everyone-is-superuser creative-mode playground.
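If you would rather keep RLS enabled on prod_vxf, one common pattern is to leave the anon/public API key for untrusted clients and perform privileged reads from trusted server-side code with the service_role key, which bypasses RLS. A minimal sketch — the key names are placeholders, and that key must never ship to a browser or mobile client:

from supabase import create_client

URL = "MY_URL_HERE"
SERVICE_ROLE_KEY = "MY_SERVICE_ROLE_KEY_HERE"  # server-side secret only

admin = create_client(URL, SERVICE_ROLE_KEY)
rows = admin.table("prod_vxf").select("*").execute()
print(rows.data)  # rows come back even with RLS enabled, because service_role bypasses RLS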
3
2
78,964,911
2024-9-9
https://stackoverflow.com/questions/78964911/optapy-hard-constraint-is-not-respected-in-a-vrp
I am just starting using OptaPy, I tried to mimic the VRP quickstart and created the classes as such: # The place to start the journey and end it. @problem_fact class Depot: def __init__(self, name, location): self.name = name self.location = location def __str__(self): return f'Depot {self.name}' # The customers information. @problem_fact class Customer: def __init__(self, id, # Initially 1 name, # Will be the order_id location, demand, # Turned out, only 1 order per customer. So, Will always be initited with "1". Will leave it for flexibility, in case more than one order per order ID is placed. cbm ,required_skills = set(), order_weight=None, polygon = None, district = None, ): self.id = id self.name = name self.location = location # The location of the customer, a location object self.demand = demand # Number of Orders self.cbm = cbm # Order CBM self.required_skills = required_skills # A set of the skills in his orders. self.order_weight = order_weight self.polygon = polygon self.district = district def __str__(self): return f'Customer {self.name}, In Polygon: {self.polygon}, In District: {self.district}' And then the Vehicle Class: from optapy import planning_entity, planning_list_variable @planning_entity class Vehicle: def __init__(self, name, max_number_orders, # max_number_orders here refers to vehicles maximum number of orders it can carry. cbm, depot, customer_list=None, working_seconds = 28_800, # 8 Hours of work days service_time = 900, # Defaults to 15 minutes to drop an order. car_skills = set(), weight = None ,# If None, This means the vehicle has no constraints over weight. fixed_cost = 0.0, # If 0.0, This means the vehicle has no cost, same goes for variable. variable_cost = 0.0, # This should be the cost per kilometer E.G: 15 Price Unit / KM this means that this vehicle is paid 15 (any currency) Per Kilometer ): self.name = name self.max_number_orders = max_number_orders # Vehicle Constraint self.cbm = cbm # Vehicle Constraint self.depot = depot # Pass Object if customer_list is None: # Pass Object Else Empty List self.customer_list = [] else: self.customer_list = customer_list self.working_seconds = working_seconds # 8 Hours Shift self.service_time = service_time # It typically takes 15 minutes to drop the order from the car to the retailer. # Can be ignored and be precomputed with pandas and only assign vehicles to orders it can take ALL of it. # But for testing purpose, I will implement it using sets and loops in a constraint fashion. self.car_skills = car_skills # Should be a set that contains the contains the skills a vehicle can take, matching it with orders self.weight = weight self.fixed_cost = fixed_cost self.variable_cost = variable_cost # Because the order of the list is significant, optapy can alter or reindex the list given a Customer object # And assign a range (index) to each customer @planning_list_variable(Customer, ['customer_range']) def get_customer_list(self): return self.customer_list def set_customer_list(self, customer_list): self.customer_list = customer_list def get_route(self): """ The route is typically: depot > location_1 > location_2 ..... > location_n > depot again If no routes at all, return an empty list. Optapy will change the order of the location for each customer after each evaluation iteration after the score updates. 
""" if len(self.customer_list) == 0: return [] route = [self.depot.location] for customer in self.customer_list: route.append(customer.location) route.append(self.depot.location) return route def __str__(self): return f'Vehicle {self.name}' The problem is here: from optapy.score import HardSoftScore from optapy.constraint import Joiners from optapy import get_class def get_total_demand(vehicle): """ Calculate the total demand (e.g., number of items) assigned to a vehicle. Args: vehicle (Vehicle): The vehicle for which to calculate the total demand. Returns: int: The total demand assigned to the vehicle. """ total_demand = 0 for customer in vehicle.customer_list: total_demand += int(customer.demand) # Explicitly cast to int return total_demand def vehicle_capacity(constraint_factory): """ Enforce the vehicle capacity constraint. This constraint ensures that the total demand assigned to a vehicle does not exceed its capacity. Args: constraint_factory (ConstraintFactory): The factory to create constraints. Returns: Constraint: The constraint penalizing vehicles that exceed their capacity. """ return constraint_factory \ .for_each(get_class(Vehicle)) \ .filter(lambda vehicle: get_total_demand(vehicle) > int(vehicle.max_number_orders)) \ .penalize("Over vehicle max_number_orders", HardSoftScore.ONE_HARD, lambda vehicle: int(get_total_demand(vehicle) - int(vehicle.max_number_orders))) This constraint is not respected and then I followed the quickstart in configuring the model from optapy import planning_solution, planning_entity_collection_property, problem_fact_collection_property, \ value_range_provider, planning_score @planning_solution class VehicleRoutingSolution: """ The VehicleRoutingSolution class represents both the problem and the solution in the vehicle routing domain. It stores references to all the problem facts (locations, depots, customers) and planning entities (vehicles) that define the problem. Attributes: name (str): The name of the solution. location_list (list of Location): A list of all locations involved in the routing. depot_list (list of Depot): A list of depots where vehicles start and end their routes. vehicle_list (list of Vehicle): A list of all vehicles used in the routing problem. customer_list (list of Customer): A list of all customers to be served by the vehicles. south_west_corner (Location): The southwestern corner of the bounding box for visualization. north_east_corner (Location): The northeastern corner of the bounding box for visualization. score (HardSoftScore, optional): The score of the solution, reflecting the quality of the solution. 
""" def __init__(self, name, location_list, depot_list, vehicle_list, customer_list, south_west_corner, north_east_corner, score=None): self.name = name self.location_list = location_list self.depot_list = depot_list self.vehicle_list = vehicle_list self.customer_list = customer_list self.south_west_corner = south_west_corner self.north_east_corner = north_east_corner self.score = score @planning_entity_collection_property(Vehicle) def get_vehicle_list(self): return self.vehicle_list @problem_fact_collection_property(Customer) @value_range_provider('customer_range', value_range_type=list) def get_customer_list(self): return self.customer_list @problem_fact_collection_property(Location) def get_location_list(self): return self.location_list @problem_fact_collection_property(Depot) def get_depot_list(self): return self.depot_list @planning_score(HardSoftScore) def get_score(self): return self.score def set_score(self, score): self.score = score def get_bounds(self): """ Get the bounding box coordinates for visualizing the solution. Returns: list: A list containing the coordinates of the southwest and northeast corners. """ return [self.south_west_corner.to_lat_long_tuple(), self.north_east_corner.to_lat_long_tuple()] def total_score(self): """ Calculate the total soft score. """ return -self.score.getSoftScore() if self.score is not None else 0 Here's the problem, I gave the model a vehicle with maximum_orders of 26, the model should not assign more than 26 customers or orders classes to that vehicle, but it gives it all the orders. If Increased the number of cars, it divides the routes on them randomly, also violating the constraint # Step 1: Setup the solver manager with the appropriate config solver_config = optapy.config.solver.SolverConfig() solver_config \ .withEnvironmentMode(optapy.config.solver.EnvironmentMode.FULL_ASSERT)\ .withSolutionClass(VehicleRoutingSolution) \ .withEntityClasses(Vehicle) \ .withConstraintProviderClass(vehicle_routing_constraints) \ .withTerminationSpentLimit(Duration.ofSeconds(20)) # Adjust termination as necessary # Step 2: Create the solver manager solver_manager = solver_manager_create(solver_config) # # Create the initial solution for the solver solution = VehicleRoutingSolution( name="Vehicle Routing Problem with Random Data", location_list=locations, depot_list=depots, vehicle_list=vehicles, customer_list=customers, south_west_corner=Location(29.990707246305476, 31.229210746581806), north_east_corner=Location(30.024396202211875, 31.262640488654238) ) # Step 3: Solve the problem and get the solver job SINGLETON_ID = 1 # A unique problem ID (can be any number) solver_job = solver_manager.solve(SINGLETON_ID, lambda _: solution) # Step 4: Get the best solution from the solver job best_solution = solver_job.getFinalBestSolution() # Step 5: Extract and print the results def extract_vehicle_routes(best_solution): for vehicle in best_solution.vehicle_list: print(f"Vehicle: {vehicle.name}") print("Route:") total_orders = 0 total_weight = 0 total_cbm = 0 for customer in vehicle.customer_list: location = customer.location.to_lat_long_tuple() total_orders += 1 total_weight += customer.order_weight total_cbm += customer.cbm print(f"Customer {customer.name}: {location}") # Print the return to depot print(f"Return to depot: {vehicle.depot.location.to_lat_long_tuple()}") print(f"Total Orders: {total_orders}") print(f"Total Weight: {total_weight}") print(f"Total CBM: {total_cbm}") print("=" * 30) # Call the function to display the routes 
extract_vehicle_routes(best_solution) The Output: Customer Order ID: 8595424: (30.24544623697703, 31.24484896659851) ...... ...... Return to depot: (29.996699, 31.278772) Total Orders: 50 Total Weight: 3924.2847300000003 Total CBM: 7.518601279012999 ============================== And here're the vehicle's info: vehicles[0].cbm, vehicles[0].weight, vehicles[0].max_number_orders >>> (4.0, 1700.0, 27)
The problem was that the solver had no way to produce a feasible solution, so instead of failing it returned an unreasonable one: when I gave it 1 vehicle and 50 orders with a vehicle capacity of 26, it assigned all 50 orders to that vehicle. But when I increased the fleet to 2 vehicles, it found a feasible solution and assigned 26 and 24 orders respectively.
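A cheap pre-flight check makes this failure mode visible before solving — a sketch using the attribute names from the question (the helper name is mine):

def fleet_can_cover_orders(vehicles, customers) -> bool:
    """Rough feasibility check: total vehicle order capacity vs. total demand."""
    total_capacity = sum(v.max_number_orders for v in vehicles)
    total_demand = sum(int(c.demand) for c in customers)
    return total_capacity >= total_demand

# e.g. refuse to solve, or add vehicles, when the fleet is obviously too small:
# if not fleet_can_cover_orders(vehicles, customers):
#     raise ValueError("Not enough vehicle capacity for all orders")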
2
0
78,966,048
2024-9-9
https://stackoverflow.com/questions/78966048/how-to-change-background-color-of-st-text-input-in-streamlit
I am trying to change the background color of st.text_input() box but unable to do so. I am not from web/app development background or with any html css skills so please excuse my naive or poor understanding in this field. So far I have tried: using this link test_color = st.write('test color') def text_input_color(url): st.markdown( f'<p style="background-color:#0066cc;color:#33ff33;">{url}</p>', unsafe_allow_html=True ) text_input_color("test_color") Above code works on st.write() but not on st.text_input() I have also come across this link so using this approach and I have modified css for Textinput instead of stForm but this also didn't work and I am not sure what id to use for text input css=""" <style> [data-testid="stTextinput"] { background: LightBlue; } </style> """ Below is the inspect element screenshot of the webapp:
I wouldn't rely on classes like .st-bd, .st-bb, or .st-b7: they are dynamically generated by Streamlit and can change between versions or runs. I would rather target the input through its aria-label. import streamlit as st st.markdown(""" <style> .stTextInput input[aria-label="test color"] { background-color: #0066cc; color: #33ff33; } .stTextInput input[aria-label="test color2"] { background-color: #cc0066; color: #ffff33; } </style> """, unsafe_allow_html=True) st.text_input("test color") st.text_input("test color2")
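If several inputs need their own colours, the same aria-label trick can be wrapped in a small helper — a sketch, with a function name of my own:

import streamlit as st

def colored_text_input(label: str, bg: str, fg: str, **kwargs):
    # inject a CSS rule targeting this widget's aria-label, then render the widget
    st.markdown(
        f"""
        <style>
        .stTextInput input[aria-label="{label}"] {{
            background-color: {bg};
            color: {fg};
        }}
        </style>
        """,
        unsafe_allow_html=True,
    )
    return st.text_input(label, **kwargs)

value = colored_text_input("test color", "#0066cc", "#33ff33")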
2
1
78,955,489
2024-9-6
https://stackoverflow.com/questions/78955489/odoo-models-linking-issues-self-id-giving-unknown43
I'm working with Odoo 16 and have encountered a problem with linking two models and handling _unknown values. Specifically, I have two models: nkap_custom_paiement and account.payment.register. The former is a custom model inheriting from account.payment, and the latter is a TransientModel used for payment registration. Models Overview: nkap_custom_paiement (Inheriting account.payment) Has a field related_payment_id that links to account.register.payment. Handles payment approvals and has methods like action_cashier_approval to perform actions on linked payments. CustomPaymentRegister (Inheriting account.payment.register) Handles the creation of account.payment records and sets the related_payment_id to link back to the wizard. I am trying to add an approval process to the payments that are registered through the wizard. After the approval, it should call back the initial method on the wizard (that's why I need to store id) # -*- coding: utf-8 -*- from odoo import models, fields, api, _ from odoo.exceptions import ValidationError, UserError import re class nkap_custom_paiement(models.Model): # _inherit = 'account.payment' _inherit = ['account.payment'] related_payment_id = fields.Many2one('account.register.payment', string='Related Wizard') state = fields.Selection([ ('draft', 'Drafted'), ('waiting_approval', 'Waiting Approval'), ('approved', 'Approved'), ('rejected', 'Rejected'), ('posted', 'Posted'), ], string="Approval Status",default='draft') DFC_approver_sign = fields.Binary('DFC Signature') DG_approver_sign = fields.Binary('DG Signature') current_approval = fields.Selection([('1', '1'), ('2', '2'), ('3', '3')], string="Is Current Approver") def action_submit_for_approval(self): company_id=self.env.company self.write({'state': 'waiting_approval'}) message = "Vous avez un paiement pour la 1ere approbation" self.current_approval = '1' self.activity_schedule('purchase_order_approval.mail_activity_data_approval', user_id=company_id.po_third_approver_ids.id, note=message) self.env['bus.bus']._sendone(company_id.po_third_approver_ids.partner_id, 'simple_notification', {'title': _("Information"), 'message': message}) def action_DFC_approval(self): company_id=self.env.company if self.env.user.id in company_id.po_third_approver_ids.ids: self.current_approval = '2' self.write({'state': 'waiting_approval'}) self.DFC_approver_sign = self.env.user.user_signature message = "Vous avez un paiement pour la 2ere approbation " self.activity_schedule('purchase_order_approval.mail_activity_data_approval', user_id=company_id.po_DG_approver_ids.id, note=message) else: raise ValidationError(_("Seul %s peut approver !"% company_id.po_third_approver_ids.id )) def action_DG_approval(self): company_id=self.env.company if self.env.user.id in company_id.po_DG_approver_ids.ids: self.write({'state': 'approved'}) self.current_approval = '3' self.DG_approver_sign = self.env.user.user_signature message = "Vous avez un paiement pour la validation" self.activity_schedule('purchase_order_approval.mail_activity_data_approval', user_id=company_id.po_fourth_approver_ids.id, note=message) else: raise ValidationError(_("Seul %s peut approver !"% company_id.po_DG_approver_ids.id )) def action_cashier_approval(self): company_id=self.env.company if self.env.user.id in company_id.po_fourth_approver_ids.ids: self.write({'state': 'posted'}) if self.related_payment_id: related_payment = self.env['account.payment.register'].browse(self.related_payment_id) if related_payment.exists(): # Call a method on the related payment record 
related_payment.action_create_payments() else: _logger.error("Related payment record with ID %s does not exist", self.related_payment_id.id) else: raise ValidationError(_("Seul %s peut approver !"% company_id.po_fourth_approver_ids.id )) def action_reject(self): company_id=self.env.company current_approver = None if self.current_approval =='1': current_approver=company_id.po_third_approver_ids elif self.current_approval =='2': current_approver=company_id.po_DG_approver_ids else: raise UserError(_(f"{self.current_approval},{type(self.current_approval)}")) if self.env.user.id in current_approver.ids: self.write({'state': 'rejected'}) else: raise ValidationError(_("Seul %s peut refuser cette DA !" % current_approver.name)) def rollback(self): '''to cancel all the signature already done to false''' self.write({ 'DFC_approver_sign': False, 'DG_approver_sign': False, 'current_approval': False, }) class CustomPaymentRegister(models.TransientModel): _inherit = 'account.payment.register' payment_id = fields.Many2one('account.payment', string='Created Payment') def _collect_payment_vals(self): payment_vals = { 'amount': self.amount, 'partner_id': self.partner_id.id, 'journal_id': self.journal_id.id, 'payment_method_line_id': self.payment_method_line_id.id, 'date': self.payment_date, 'currency_id': self.currency_id.id, 'ref': self.communication, 'partner_bank_id': self.partner_bank_id.id, 'bank_reference': self.bank_reference, 'cheque_reference': self.cheque_reference, } return payment_vals def action_submit_for_approval(self): payment_vals = self._collect_payment_vals() payment = self.env['account.payment'].create(payment_vals) self.payment_id = payment.id payment.related_payment_id = self.id # raise ValidationError(_(f"{self.id}// {payment.related_payment_id}//{number}")) # Call the approval flow for each payment created payment.action_submit_for_approval() I have tried several ways to link the both model through the field payment_id in account.register.payment model and related_payment_id in account.payment, but it is still giving the self_id in this _unknown(actual_id,)
That's just a typo on your class nkap_custom_paiement. The comodel for the field related_payment_id is account.payment.register, not account.register.payment. If Odoo can't find the comodel of e.g. a Many2one field in its model pool, it fills it in with the model name _unknown, as setup_nonrelated in the Odoo source shows: def setup_nonrelated(self, model): super().setup_nonrelated(model) if self.comodel_name not in model.pool: _logger.warning("Field %s with unknown comodel_name %r", self, self.comodel_name) self.comodel_name = '_unknown'
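For reference, the corrected declaration on the inheriting model would look like the sketch below (keeping the question's field name). Note that account.payment.register is a TransientModel, so records referenced this way are periodically vacuumed by Odoo — worth keeping in mind for this design.

from odoo import fields, models

class NkapCustomPaiement(models.Model):
    _inherit = 'account.payment'

    # comodel_name must match the wizard's _name: account.payment.register
    related_payment_id = fields.Many2one('account.payment.register', string='Related Wizard')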
2
0
78,960,340
2024-9-7
https://stackoverflow.com/questions/78960340/how-can-i-follow-an-http-redirect
I have 2 different views that seem to work on their own. But when I try to use them together with a http redirect then that fails. The context is pretty straightforward, I have a view that creates an object and another view that updates this object, both with the same form. The only thing that is a bit different is that we use multiple sites. So we check if the site that wants to update the object is the site that created it. If yes then it does a normal update of the object. If no (that's the part that does not work here) then I http redirect the update view to the create view and I pass along the object so the new site can create a new object based on those initial values. Here is the test to create a new resource (passes successfully) : @pytest.mark.resource_create @pytest.mark.django_db def test_create_new_resource_and_redirect(client): data = { "title": "a title", "subtitle": "a sub title", "status": 0, "summary": "a summary", "tags": "#tag", "content": "this is some updated content", } with login(client, groups=["example_com_staff"]): response = client.post(reverse("resources-resource-create"), data=data) resource = models.Resource.on_site.all()[0] assert resource.content == data["content"] assert response.status_code == 302 Here is the test to create a new resource from an existing object (passes successfully) : @pytest.mark.resource_create @pytest.mark.django_db def test_create_new_resource_from_pushed_resource_and_redirect(request, client): existing_resource = baker.make(models.Resource) other_site = baker.make(Site) existing_resource.site_origin = other_site existing_resource.sites.add(other_site) our_site = get_current_site(request) existing_resource.sites.add(our_site) original_content = "this is some original content" existing_resource.content = original_content existing_resource.save() data = { "title": "a title", "subtitle": "a sub title", "status": 0, "summary": "a summary", "tags": "#tag", "content": "this is some updated content", } url = reverse("resources-resource-create-from-shared", args=[existing_resource.id]) with login(client, groups=["example_com_staff"]): response = client.post(url, data=data) assert response.status_code == 302 existing_resource.refresh_from_db() assert existing_resource.content == original_content assert our_site not in existing_resource.sites.all() new_resource = models.Resource.on_site.get() assert new_resource.content == data["content"] Here is the create view : @login_required def resource_create(request, pushed_resource_id=None): """ Create new resource In case of a resource that is pushed from a different site create a new resource based on the pushed one. 
""" has_perm_or_403(request.user, "sites.manage_resources", request.site) try: pushed_resource = models.Resource.objects.get(id=pushed_resource_id) pushed_resource_as_dict = model_to_dict(pushed_resource) initial_data = pushed_resource_as_dict except ObjectDoesNotExist: pushed_resource = None initial_data = None if request.method == "POST": form = EditResourceForm(request.POST, initial=initial_data) if form.is_valid(): resource = form.save(commit=False) resource.created_by = request.user with reversion.create_revision(): reversion.set_user(request.user) resource.save() resource.sites.add(request.site) if pushed_resource: pushed_resource.sites.remove(request.site) pushed_resource.save() resource.site_origin = request.site resource.save() form.save_m2m() next_url = reverse("resources-resource-detail", args=[resource.id]) return redirect(next_url) else: form = EditResourceForm() return render(request, "resources/resource/create.html", locals()) Here is the test to update the resource from the original site (passes successfully) : @pytest.mark.resource_update @pytest.mark.django_db def test_update_resource_from_origin_site_and_redirect(request, client): resource = baker.make(models.Resource) our_site = get_current_site(request) resource.site_origin = our_site resource.save() previous_update = resource.updated_on url = reverse("resources-resource-update", args=[resource.id]) data = { "title": "a title", "subtitle": "a sub title", "status": 0, "summary": "a summary", "tags": "#tag", "content": "this is some updated content", } with login(client, groups=["example_com_staff"]): response = client.post(url, data=data) assert response.status_code == 302 resource.refresh_from_db() assert resource.content == data["content"] assert resource.updated_on > previous_update And finally the test to update from a different site that should create a new resource from the original one (that one fails): @pytest.mark.resource_update @pytest.mark.django_db def test_update_resource_from_non_origin_site_and_redirect(request, client): original_resource = baker.make(models.Resource) our_site = get_current_site(request) other_site = baker.make(Site) original_resource.sites.add(our_site, other_site) original_resource.site_origin = other_site previous_update = original_resource.updated_on original_content = "this is some original content" original_resource.content = original_content original_resource.save() assert models.Resource.on_site.all().count() == 1 url = reverse("resources-resource-update", args=[original_resource.id]) updated_data = { "title": "a title", "subtitle": "a sub title", "status": 0, "summary": "a summary", "tags": "#tag", "content": "this is some updated content", } with login(client, groups=["example_com_staff"]): response = client.post(url, data=updated_data) assert response.status_code == 302 original_resource.refresh_from_db() assert original_resource.content == original_content assert original_resource.updated_on == previous_update assert other_site in original_resource.sites.all() assert our_site not in original_resource.sites.all() assert models.Resource.on_site.all().count() == 1 new_resource = models.Resource.on_site.get() assert new_resource.content == updated_data["content"] assert other_site not in new_resource.sites.all() assert our_site in new_resource.sites.all() What happens is that no new object gets created here and the original object is modified instead. 
Here is the update view : @login_required def resource_update(request, resource_id=None): """Update informations for resource""" has_perm_or_403(request.user, "sites.manage_resources", request.site) resource = get_object_or_404(models.Resource, pk=resource_id) if resource.site_origin is not None and resource.site_origin != request.site: pushed_resource_id = resource.id next_url = reverse("resources-resource-create-from-shared", args=[pushed_resource_id] ) return redirect(next_url) next_url = reverse("resources-resource-detail", args=[resource.id]) if request.method == "POST": form = EditResourceForm(request.POST, instance=resource) if form.is_valid(): resource = form.save(commit=False) resource.updated_on = timezone.now() with reversion.create_revision(): reversion.set_user(request.user) resource.save() form.save_m2m() return redirect(next_url) else: form = EditResourceForm(instance=resource) return render(request, "resources/resource/update.html", locals()) And the model form : class EditResourceForm(forms.ModelForm): """Create and update form for resources""" def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) # Queryset needs to be here since on_site is dynamic and form is read too soon self.fields["category"] = forms.ModelChoiceField( queryset=models.Category.on_site.all(), empty_label="(Aucune)", required=False, ) self.fields["contacts"] = forms.ModelMultipleChoiceField( queryset=addressbook_models.Contact.on_site.all(), required=False, ) # Try to load the Markdown template into 'content' field try: tmpl = get_template( template_name="resources/resource/create_md_template.md" ) self.fields["content"].initial = tmpl.render() except TemplateDoesNotExist: pass content = MarkdownxFormField(label="Contenu") title = forms.CharField( label="Titre", widget=forms.TextInput(attrs={"class": "form-control"}) ) subtitle = forms.CharField( label="Sous-Titre", widget=forms.TextInput(attrs={"class": "form-control"}), required=False, ) summary = forms.CharField( label="RΓ©sumΓ© bref", widget=forms.Textarea( attrs={"class": "form-control", "rows": "3", "maxlength": 400} ), required=False, ) class Meta: model = models.Resource fields = [ "title", "status", "subtitle", "summary", "tags", "category", "departments", "content", "contacts", "expires_on", ] Any idea about what I did wrong is welcome. And if you think a better strategy should be employed then feel free to comment.
My bad. I was passing initial=initial_data when building the form in the POST branch of the create view, which makes no sense there. Moving initial=initial_data to the GET branch makes it work. The test_update_resource_from_non_origin_site_and_redirect test still fails though; I'm going to investigate, since the feature works fine from within the web interface.
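In other words, the bound form built from request.POST should only receive the submitted data, and the prefill belongs to the unbound form on GET. A minimal sketch of that branch of resource_create, reusing the view's existing names (the elided parts are unchanged):

def resource_create(request, pushed_resource_id=None):
    ...
    if request.method == "POST":
        form = EditResourceForm(request.POST)          # bound: validate only what was submitted
        if form.is_valid():
            ...
    else:
        form = EditResourceForm(initial=initial_data)  # unbound: prefill from the pushed resource
    ...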
3
0
78,971,305
2024-9-10
https://stackoverflow.com/questions/78971305/how-can-i-optimize-the-performance-of-this-numpy-function
Is there any way optimizing the performance speed of this function? def func(X): n, p = X.shape R = np.eye(p) delta = 0.0 for i in range(100): delta_old = delta Y = X @ R alpha = 1. / n Y2 = Y**2 Y3 = Y2 * Y W = np.sum(Y2, axis=0) transformed = X.T @ (Y3 - (alpha * Y * W)) U, svals, VT = np.linalg.svd(transformed, full_matrices=False) R = U @ VT # is used as a stopping criterion delta = np.sum(svals) return R Naively, I thought using numba would help because of the loop (the actual number of loops is higher), from numba import jit @jit(nopython=True, parallel=True) def func_numba(X): n, p = X.shape R = np.eye(p) delta = 0.0 for i in range(100): delta_old = delta Y = X @ R alpha = 1. / n Y2 = Y**2 Y3 = Y2 * Y W = np.sum(Y2, axis=0) transformed = X.T @ (Y3 - (alpha * Y * W)) U, svals, VT = np.linalg.svd(transformed, full_matrices=False) R = U @ VT delta = np.sum(svals) # is used as a stopping criterion return R but to my surprise the numbaized function is actually slower. Why is numba not more effective in this case? Is there another option for me (preferably using numpy)? Note: You can assume X to be a "tall-and-skinny" matrix. MWE import numpy as np size = (10_000, 15) X = np.random.normal(size=size) %timeit func(X) # 1.28 s %timeit func_numba(X) # 2.05 s
I tried to rewrite some stuff to speed things up. The only things I changed (apart from some formatting maybe) were removing the computation of the unused delta, pulling the transposition of X out of the loop as well as factorizing out the multiplication with Y for the computation of transformed which results in one fewer multiplications. def func2(X): n, p = X.shape R = np.eye(p) alpha = 1. / n XT = X.T for i in range(100): Y = X @ R Y2 = Y**2 W = np.sum(Y2, axis=0) transformed = XT @ (Y * (Y2 - (alpha * W))) U, svals, VT = np.linalg.svd(transformed, full_matrices=False) R = U @ VT return R When I compare this func2 it to the function func as follows X = np.random.normal(size=(10_000, 15)) assert np.allclose(func(X), func2(X)) %timeit func(X) %timeit func2(X) I get a speedup of more than 1.5x (it's not always as nice as that, however) 197 ms Β± 44.5 ms per loop (mean Β± std. dev. of 7 runs, 10 loops each) 127 ms Β± 21.3 ms per loop (mean Β± std. dev. of 7 runs, 10 loops each)
3
2
78,971,412
2024-9-10
https://stackoverflow.com/questions/78971412/how-to-measure-new-change-in-data
Suppose you have this dataframe d = {'date':['2019-08-25', '2019-09-01', '2019-09-08'], 'data':[31, 31, 31]} df_sample = pd.DataFrame(data=d) df_sample.head() and you want to measure how much new data comes in on average each week. For example, we had 31 new rows on 8/25 and then on 9/1 we got an additional 31 rows, so that's like a 100% increase. What I want to know is: on average, from one week to the next, how much new data comes in? I know there is diff() and pct_change(), but since this will just be 0 in these 3 samples, I am wondering what would be the better approach here.
If your data is cumulative, you need a cumsum before pct_change: df_sample['change'] = df_sample['data'].cumsum().pct_change().mul(100) Output: date data change 0 2019-08-25 31 NaN 1 2019-09-01 31 100.0 2 2019-09-08 31 50.0 Intermediate: date data cumsum change 0 2019-08-25 31 31 NaN 1 2019-09-01 31 62 100.0 2 2019-09-08 31 93 50.0 week to week If you want a week to week change, go with shift: df_sample['change'] = df_sample['data'].div(df_sample['data'].shift()).mul(100) Output: date data change 0 2019-08-25 31 NaN 1 2019-09-01 31 100.0 2 2019-09-08 31 100.0
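If the goal is a single "how much on average per week" figure, a small sketch on top of that change column (assuming the rows really are one per week, as in the sample):

df_sample['change'] = df_sample['data'].cumsum().pct_change().mul(100)
avg_pct_change = df_sample['change'].mean()   # average week-over-week growth in %
avg_new_rows = df_sample['data'].mean()       # average number of new rows per week
print(avg_pct_change, avg_new_rows)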
2
2
78,966,115
2024-9-9
https://stackoverflow.com/questions/78966115/how-to-correctly-use-ctypes-get-errno
I'm trying to test some binary library with ctypes and some of my tests involve errno. I'm therefore trying to retrieve it to check the error cases handling but when trying to use ctypes.get_errno() I weirdly get 0 as errno ("Success") which isn't what I was expecting. Why does this occur? Is ctypes.get_errno() actually reliable? test.py: #!/usr/bin/env python3 import os import ctypes import errno libc = ctypes.cdll.LoadLibrary("libc.so.6") libc.write.restype = ctypes.c_ssize_t libc.write.argtypes = ctypes.c_int, ctypes.c_void_p, ctypes.c_size_t TMP_FILE = "/tmp/foo" def main(): fd: int errno: int = 0 fd = os.open(TMP_FILE, os.O_RDONLY | os.O_CREAT) if fd == -1: errno = ctypes.get_errno() print(strerror(errno)) if (not errno and libc.write(fd, "foo", 3) == -1): errno = ctypes.get_errno() print(f"ERRNO: {errno}") print(os.strerror(errno)) os.close(fd); os.remove(TMP_FILE) if errno: raise OSError(errno, os.strerror(errno)) if __name__ == "__main__": main() output: $ ./test.py ERRNO: 0 Success NB: I already have a workaround from an answer under an other post (see MRE below) but I'd like to understand what's going on with ctypes.get_errno(). test_with_workaround.py: #!/usr/bin/env python3 import os import ctypes libc = ctypes.cdll.LoadLibrary("libc.so.6") libc.write.restype = ctypes.c_ssize_t libc.write.argtypes = ctypes.c_int, ctypes.c_void_p, ctypes.c_size_t TMP_FILE = "/tmp/foo" _get_errno_loc = libc.__errno_location _get_errno_loc.restype = ctypes.POINTER(ctypes.c_int) def get_errno() -> int: return _get_errno_loc()[0] def main(): fd: int errno: int = 0 fd = os.open(TMP_FILE, os.O_RDONLY | os.O_CREAT) if fd == -1: errno = get_errno() print(strerror(errno)) if (not errno and libc.write(fd, "foo", 3) == -1): errno = get_errno() print(f"ERRNO: {errno}") print(os.strerror(errno)) os.close(fd); os.remove(TMP_FILE) if errno: raise OSError(errno, os.strerror(errno)) if __name__ == "__main__": main() output: $ ./test_with_workaround.py ERRNO: 9 Bad file descriptor Traceback (most recent call last): File "/mnt/nfs/homes/vmonteco/Code/MREs/MRE_python_fdopen_cause_errno/simple_python_test/./test_with_workaround.py", line 41, in <module> main() File "/mnt/nfs/homes/vmonteco/Code/MREs/MRE_python_fdopen_cause_errno/simple_python_test/./test_with_workaround.py", line 37, in main raise OSError(errno, os.strerror(errno)) OSError: [Errno 9] Bad file descriptor
Unlike your handwritten get_errno, ctypes get_errno does not access the os errno value, but a private copy that is filled after certain function calls. The docs state: [...] a ctypes mechanism that allows accessing the system errno error number in a safe way. ctypes maintains a thread-local copy of the system's errno variable; if you call foreign functions created with use_errno=True then the errno value before the function call is swapped with the ctypes private copy, the same happens immediately after the function call. Thus ctypes.get_errno() will never work for getting errno that might have been set as a side-effect of os.open. But you can (and probably should) use it for your function that you call via ctypes. But there you need to set use_errno to True. use_errno is a parameter of function definition creation. When you use the mylib.myfunc syntax as you did with libc.write, the function creation is implicit and inherits some defaults from the library loader. Here you use ctypes.cdll, which sets use_errno to False. You can change that by loading the library more explicitly: libc = ctypes.CDLL("libc.so.6", use_errno=True) Note that this will apply use_errno (and the associated overhead) to all functions. If you want to use use_errno for single functions, you can use function prototypes instead.
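Putting that together, a minimal sketch of the failing part of the original test using the built-in mechanism (file name and libc signatures taken from the question, error handling trimmed):

import os
import ctypes

libc = ctypes.CDLL("libc.so.6", use_errno=True)
libc.write.restype = ctypes.c_ssize_t
libc.write.argtypes = ctypes.c_int, ctypes.c_void_p, ctypes.c_size_t

fd = os.open("/tmp/foo", os.O_RDONLY | os.O_CREAT)
if libc.write(fd, b"foo", 3) == -1:
    err = ctypes.get_errno()        # now reflects the failed write()
    print(err, os.strerror(err))    # expected: 9, Bad file descriptor
os.close(fd)
os.remove("/tmp/foo")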
2
4
78,967,924
2024-9-10
https://stackoverflow.com/questions/78967924/polars-cumsum-alternatives
I have below pandas snippet which I want to convert to polars to try, Expected output for polars is same as pandas but failing as cumsum is missing, how to achieve similar output?: import pandas as pd import numpy as np data = { 'date': pd.date_range(start='2024-09-01', periods=12), 'reserved_before': [0, 1, 2, np.nan, np.nan, 1, 2, 3, np.nan, 3, 4, 5] } df = pd.DataFrame(data) df['reserved'] = df['reserved_before'].notna() def assign_group_id(series): return (series != series.shift()).cumsum() df['group'] = assign_group_id(df['reserved']) def min_max(group): non_nan = group['reserved_before'].dropna() if len(non_nan) > 0: return pd.Series({'min': non_nan.min(), 'max': non_nan.max()}) return pd.Series({'min': np.nan, 'max': np.nan}) result = df.groupby('group').apply(min_max).reset_index() df = df.merge(result, on='group', how='left') df = df.drop(columns=['group', 'reserved']) df = df.rename(columns={'min': 'block_min', 'max': 'block_max'}) Expected output for polars is same as pandas but failing as cumsum is missing, how to achieve similar output?: import polars as pl from datetime import date df = pl.DataFrame({ 'date': pl.date_range(start=date(2024, 9, 1), end=date(2024, 9, 12), interval='1d', eager=True), 'reserved_before': [0, 1, 2, None, None, 1, 2, 3, None, 3, 4, 5] }) df = df.with_columns( (df['reserved_before'].is_not_null()).alias('reserved') ) df = df.with_columns( (df['reserved'] != df['reserved'].shift(1)).cumsum().alias('group') ) min_max_df = ( df.groupby('group') .agg( pl.col('reserved_before').min().alias('block_min'), pl.col('reserved_before').max().alias('block_max') ) ) df = df.join(min_max_df, on='group', how='left') df = df.drop('group', 'reserved') Resulting error: AttributeError: 'Series' object has no attribute 'cumsum'
Here's one approach: import polars as pl from datetime import date pl_df = pl.DataFrame({ 'date': pl.date_range(start=date(2024, 9, 1), end=date(2024, 9, 12), interval='1d', eager=True), 'reserved_before': [0, 1, 2, None, None, 1, 2, 3, None, 3, 4, 5] }) groups = pl.col('reserved_before').is_not_null().rle_id() pl_df = pl_df.with_columns( pl.col('reserved_before').min().over(groups).alias('block_min'), pl.col('reserved_before').max().over(groups).alias('block_max'), ) Explanation Use pl.Expr.is_not_null and pl.Expr.rle_id to define your groups. IDs will start with 0, not with 1 (as with OP's use of pd.Series.cumsum); irrelevant difference for the purpose. Use pl.Expr.over to add pl.Expr.min and max. Equality check original pandas method: pl_df.to_pandas().equals(df.astype(pl_df.to_pandas().dtypes)) # True Update for a slightly different logic Instead of: a group ends where the next value is None We use: a group ends where the next value is None or where the next value is smaller pl_df = pl.DataFrame({ 'date': pl.date_range(start=date(2024, 9, 1), end=date(2024, 9, 12), interval='1d', eager=True), 'reserved_before': [0, 1, 2, None, 4, 1, 2, 10, None, 3, 4, 5] }) # desired groups: # [[0, 1, 2], [4], [1, 2, 10], [3, 4, 5]] To get groups here, we can use pl.Expr.diff + pl.Expr.fill_null, check < 0, and apply pl.Expr.cum_sum to the result. groups = (pl.col('reserved_before').diff().fill_null(-1) < 0).cum_sum() Adjusted output: shape: (12, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ date ┆ reserved_before ┆ block_min ┆ block_max β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ date ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════════════════β•ͺ═══════════β•ͺ═══════════║ β”‚ 2024-09-01 ┆ 0 ┆ 0 ┆ 2 β”‚ β”‚ 2024-09-02 ┆ 1 ┆ 0 ┆ 2 β”‚ β”‚ 2024-09-03 ┆ 2 ┆ 0 ┆ 2 β”‚ β”‚ 2024-09-04 ┆ null ┆ null ┆ null β”‚ β”‚ 2024-09-05 ┆ 4 ┆ 4 ┆ 4 β”‚ β”‚ 2024-09-06 ┆ 1 ┆ 1 ┆ 10 β”‚ β”‚ 2024-09-07 ┆ 2 ┆ 1 ┆ 10 β”‚ β”‚ 2024-09-08 ┆ 10 ┆ 1 ┆ 10 β”‚ β”‚ 2024-09-09 ┆ null ┆ null ┆ null β”‚ β”‚ 2024-09-10 ┆ 3 ┆ 3 ┆ 5 β”‚ β”‚ 2024-09-11 ┆ 4 ┆ 3 ┆ 5 β”‚ β”‚ 2024-09-12 ┆ 5 ┆ 3 ┆ 5 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
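As an aside, the AttributeError in the question is only a naming issue: in current Polars the methods are cum_sum and group_by, so the original approach would be (assuming a reasonably recent Polars release):

df = df.with_columns(
    (df['reserved'] != df['reserved'].shift(1)).cum_sum().alias('group')   # cumsum -> cum_sum
)
min_max_df = (
    df.group_by('group')                                                   # groupby -> group_by
    .agg(
        pl.col('reserved_before').min().alias('block_min'),
        pl.col('reserved_before').max().alias('block_max'),
    )
)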
2
3
78,968,248
2024-9-10
https://stackoverflow.com/questions/78968248/counting-number-of-separate-events-in-dataframe
I am trying to count the number of separate events in a DataFrame (not the number of occurrences). Let's say I have this dataframe: df = pd.DataFrame([1, 2, 2, 2, 1, 1, 1, 1, 2, 1], columns=["events"]) and I want to count the number of separate events. If I use df.value_counts() it's going to give me the number of occurrences of each event: events 1 6 2 4 Name: count, dtype: int64. But I want to count the number of separate events regardless of their length. The result should be something like this: events 1 3 2 2 I wanted to know if there is a built-in function for this. Any help or advice would be greatly appreciated. Thank you
Craft a mask (with shift and ne) to remove the successive duplicates and value_counts: df.loc[df['events'].ne(df['events'].shift()), 'events'].value_counts() Output: events 1 3 2 2 Name: count, dtype: int64
2
2
78,966,219
2024-9-9
https://stackoverflow.com/questions/78966219/smoothing-out-the-sharp-corners-and-jumps-of-a-piecewise-regression-load-displac
I am having a stubborn problem with smoothing out some sharp corners that the simulation software does not really like. I have the following displacement/ load/ damage vs step/time: The source data can be found here. Here's the code for importing the data and plotting the above plot: df = pd.read_csv("ExampleforStack.txt") # read data x = df["Displacement"] # get displacement y = df["Load"] # get load d = df["Damage"] # get damage # plot stuff plt.figure() plt.subplot(3,1,1) plt.plot(x) plt.grid() plt.ylabel("Displacement") plt.subplot(3,1,2) plt.plot(y) plt.grid() plt.ylabel("Load") plt.subplot(3,1,3) plt.plot(d) plt.grid() plt.ylabel("Damage") plt.xlabel("Step") plt.gcf().align_ylabels() plt.tight_layout() When plotted against displacement, the load and damage look something like this: The breaking points in the above plots are: print(bps) # [0.005806195310298627, 0.02801208361344569] My aim would be to smooth the data around the vertical black lines for both the load and the damage. So far, I tried lowess from statsmodels.api.nonparametric, with the results looking very suboptimal: The above picture is with a frac of 0.03, changing the frac of course changes a lot, but sadly not in a desirable way either. Others stuff that I have tried are Gaussian regression models, Singular Spectrum Analysis, Savitzky-Golay filters, cubic splines, etc... The only thing that I have not checked so far is curve fitting, which I might check tomorrow. Background information: Displacement is the result of DIC analysis Load is measured by the testing machine Damage is a calculated value from displacement, load and the stiffness of the material in the elastic region. Qualitatively, here's what I would like the end result to look like: An additionaly requirement would be that the derivative of the smothed data should also be smooth and not jumpy. I would appreciate any hints to help me solve this task! :D As suggested by Martin Brown, I did the following to smooth out the curves: def boxCar(data, winSize): kernel = np.ones(winSize) / winSize # generate the kernel dataSmoothed = convolve(data, kernel, mode='same') # convolve # the next two lines is to correct the smoothing on the start and end of the arrays dataSmoothed[0:winSize] = data[0:winSize] # assign first elements to original data dataSmoothed[-winSize:] = data[-winSize:] # assign last elements to original data return dataSmoothed The convolve is from scipy.signal. Another approach with the gaussian would look something like this: def gaussian(data, sigma): dataSmoothed = gaussian_filter1d(data, sigma=sigma) dataSmoothed[0:50] = data[0:50] # assign first elements to original data dataSmoothed[-50:] = data[-50:] # assign last elements to original data return dataSmoothed The Gaussian seems to work a bit better than boxCar. gaussian_filter1d is from scipy.ndimage
The simplest solution is to apply a low-pass filter to your sharp-cornered function(s). Convolving it with a Gaussian of an appropriate width should give it all of the smoothness properties that you desire. The data are so finely spaced that you might even be able to get away with a simple boxcar average over 11-21 samples (choose an odd number). However, it might be preferable to sort out the bugs in the simulation software that prevent it from working correctly with realistic data. The onset of damage is almost always an all-or-nothing sudden change, so the code should be able to handle that. Filtering data to make analysis code work would not be my first choice.
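For concreteness, a minimal sketch of both suggestions on the data from the question (sigma and the window size are placeholders to tune):

import numpy as np
from scipy.ndimage import gaussian_filter1d

load = df["Load"].to_numpy()

# Gaussian low-pass: larger sigma gives smoother corners and smoother derivatives
load_gauss = gaussian_filter1d(load, sigma=15)

# Simple boxcar average over an odd number of samples
win = 15
load_box = np.convolve(load, np.ones(win) / win, mode="same")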
1
2
78,963,645
2024-9-9
https://stackoverflow.com/questions/78963645/tkinter-progress-bar-works-on-linux-but-not-on-windows
I wrote the following basic code sample that creates a main window with a button. When the button is pressed a second window with a progress bar should appear and stay open until the progress is complete. This works fine on Linux but on Windows the second window appears blank with no progress bar. Given that this will be part of an application that runs on Windows. What should I do? Is this a bug or what's going on? I'm using python 3.12.5 on both Linux and Windows 11 (build 22631). The code import tkinter.ttk as ttk import tkinter as tk import time def cpw(): tasks = 5 increment = 100 / tasks pw = tk.Toplevel() pw.title("Progress Window") pw.geometry("300x135") pb = ttk.Progressbar( pw, name="pb", orient=tk.HORIZONTAL, length=200, mode="determinate", ) pb.pack() i = 0 while i < tasks: i += 1 pb["value"] += increment pb.update_idletasks() time.sleep(1) pw.destroy() root = tk.Tk() root.title("Example") root.geometry("150x100") tk.Button(root, text="Process", command=cpw).pack() root.mainloop()
Thanks to @acw1668 who pointed out in the comments that .update_idletasks() may not handle pending creation of widgets. Adding pb.update() or also pb.wait_visibility(pw) before pb.pack() will ensure the creation and visibility of the progress window and bar. Documentation here
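Applied to the cpw function in the question, that is a one-line addition right after packing the bar (sketch of just that part, either line works):

    pb.pack()
    pw.update()                  # make Windows draw the Toplevel and its children now
    # alternatively: pb.wait_visibility(pw)
    # ...rest of cpw() unchanged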
2
1
78,965,088
2024-9-9
https://stackoverflow.com/questions/78965088/using-gekko-to-optimize-two-matrix-vector-equations
I wanted to use Gekko to solve an optimization (production mix) problem, I have a few numpy arrays, which I want to use in some vectorized equation. They idea is, I have two simple matrix equations: Ax < b and Cx = y Where I will use previously prepared (const) numpy arrays for A,b,C. Also, Y is a variable (int). X is a 1D array. But if 'C' , 'X' , are 1D Arrays, and 'Y' is just a regular int variable (resulted from something like multiplying 1x100 matrix by 100x1 matrix), so I cant use m.axb for the second equation (It keeps giving me errors). So I try to use them in a simple GEKKO script like this: m = GEKKO(remote=False) x = m.axb(const_machXparts, const_Mach_Max_Cap , etype='<') y = m.Intermediate(const_FP_Wt_Matrix @ x) m.Maximize(y) m.solve(disp=True) x.value But I get the error: Exception: This steady-state IMODE only allows scalar values. To my understanding, I don't need to define x and y using: x = m.Array(m.Var, len(part_matrix_.columns), lb=0) y = m.Var(lb=0) Because I already have them defined using another way, I'm not sure if this is the cause of the problem, but adding these two lines to the beginning of the script, does not solve it. I tried changing IMODE values using m.options.IMODE = 3,4,5 ..etc but it doesn't change anything. I tried doing x = m.Array(m.Var, len(part_matrix_.columns), lb=0) then m.axb(const_machXparts, const_Mach_Max_Cap , x=x , etype='<') But it gives an "int have no length" error. Which only goes away when I assign the whole expression to x. Also documentation says: Usage: x = m.axb(A,b,etype='=,<,>,<=,>=',sparse=[True,False]) So now I'm stuck with the lines: x = m.axb(const_machXparts, const_Mach_Max_Cap , etype='<') y = m.Intermediate(const_FP_Wt_Matrix @ x) I don't know what else to try. Any help is appreciated. Thank you very much for your time in advance.
Here is an example with sample values for A, b, and C. import numpy as np from gekko import GEKKO # Initialize model m = GEKKO(remote=False) # Given matrices and constants A = np.array([[1, 2], [3, 4]]) # Example matrix A b = np.array([55, 41]) # Example vector b C = np.array([2, 3]) # Example row vector C # Constraint: Ax <= b x = m.axb(A,b,x=None,etype='<=',sparse=False) # alternative definition #Ar,Ac = A.shape #x = m.Array(m.Var, Ac, lb=0) # Array of GEKKO variables for x (e.g., 1D array) #for i in range(A.shape[0]): # m.Equation(m.sum([A[i, j] * x[j] for j in range(A.shape[1])]) <= b[i]) # Equation: Cx = y yi = m.Var(lb=0,ub=10,integer=True) # Scalar variable m.Equation(m.sum([C[i] * x[i] for i in range(len(C))]) == yi) # Objective: Maximize yi m.Maximize(yi) # Solve the problem m.options.SOLVER = 1 m.solve(disp=True) # Results print('Optimized x:', [xi.value[0] for xi in x]) print('Optimized y:', yi.value[0]) I've also included an alternative definition (commented out) if you'd like to define the inequality constraint with a list comprehension. The second equation with C*x = y can't be used with the m.axb() function because y is a variable. The solution to this problem is: ---------------------------------------------------------------- APMonitor, Version 1.0.3 APMonitor Optimization Suite ---------------------------------------------------------------- --------- APM Model Size ------------ Each time step contains Objects : 2 Constants : 0 Variables : 6 Intermediates: 0 Connections : 5 Equations : 4 Residuals : 4 Number of state variables: 6 Number of total equations: - 6 Number of slack variables: - 0 --------------------------------------- Degrees of freedom : 0 ---------------------------------------------- Steady State Optimization with APOPT Solver ---------------------------------------------- Iter: 1 I: 0 Tm: 0.00 NLPi: 1 Dpth: 0 Lvs: 2 Obj: -3.86E-01 Gap: NaN --Integer Solution: -1.00E+01 Lowest Leaf: -1.00E+01 Gap: 0.00E+00 Iter: 2 I: 0 Tm: 0.00 NLPi: 2 Dpth: 1 Lvs: 2 Obj: -1.00E+01 Gap: 0.00E+00 Successful solution --------------------------------------------------- Solver : APOPT (v1.0) Solution time : 0.019 sec Objective : -10. Successful solution --------------------------------------------------- Optimized x: [2.3529411765, 1.7647058824] Optimized y: 10.0
2
0
78,964,171
2024-9-9
https://stackoverflow.com/questions/78964171/how-to-use-the-input-of-a-field-only-if-it-is-visible
How do I manage that an input field value is empty, if it is not shown. In the example the text caption field is empty and not shown. If I show it by ticking "Show text caption field" and enter any text, the text appears in the output field. If I then untick "Show text caption field" the output field should also be empty again without having to manually. Not in general of course, but for some use cases this is quite important. from shiny import App, Inputs, Outputs, Session, render, ui app_ui = ui.page_fluid( ui.input_checkbox("show", "Show text caption field", False), ui.panel_conditional( "input.show", ui.input_text("caption", "Caption:"), ), ui.output_text_verbatim("value"), ) def server(input: Inputs, output: Outputs, session: Session): @render.text def value(): return input.caption() app = App(app_ui, server)
Here you can extend the render.text such that the displayed value of the ui.output_text_verbatim is input.caption() if input.show() else "": from shiny import App, Inputs, Outputs, Session, render, ui app_ui = ui.page_fluid( ui.input_checkbox("show", "Show text caption field", False), ui.panel_conditional( "input.show", ui.input_text("caption", "Caption:"), ), ui.output_text_verbatim("value"), ) def server(input: Inputs, output: Outputs, session: Session): @render.text def value(): return input.caption() if input.show() else "" app = App(app_ui, server)
2
0
78,966,184
2024-9-9
https://stackoverflow.com/questions/78966184/how-can-i-inverse-a-slice-of-an-array
I want to do something along the lines of... import numpy as np arr = np.linspace(0, 10, 100) s = slice(1, 10) print(arr[s]) print(arr[~s]) How could I apply the "not" operator to a slice, so that in this case arr[~s] would be the concatenation of arr[0] and arr[10:]?
np.delete will do what you want: np.delete(arr, s) For something more complex than a slice, you may want to store the index. To do that, you can invert a mask built from the slice: mask = np.ones(arr.shape, dtype=bool) mask[s] = False arr[mask] OR mask = np.zeros(arr.shape, dtype=bool) mask[s] = True arr[~mask] You can also use np.delete on an index array: arr[np.delete(np.arange(arr.size), s)] This gets a bit more complex if you have a multidimensional array. For example, you would probably use np.indices instead of np.arange.
1
5
78,963,578
2024-9-9
https://stackoverflow.com/questions/78963578/dataclass-inheriting-using-kw-only-for-all-variables
I am practicing on using the super function and dataclass inheritance in general. I have enabled the kw_only attribute for cases when the parent class has default values. I completely understand that super doesn't need to be used in a dataclass if you're just passing variables and I can avoid using super here. My goal is to understand the super feature better through this example. I can't understand the error message I'm getting though. @dataclass(kw_only=True) class ZooAnimals(): food_daily_kg: int price_food: float area_required: float name: str c = ZooAnimals(food_daily_kg=565, price_food=40, area_required=10, name='Monkey' ) print(c) @dataclass(kw_only=True) class Cats(ZooAnimals): meowing: str def __init__(self, food_daily_kg, price_food, area_required, meowing, name): self.meowing = meowing super().__init__(food_daily_kg, price_food, area_required, name) z = Cats(food_daily_kg=465, price_food=30, area_required=10, meowing='Little Bit', name='Leopard' ) print(z) Output: ZooAnimals(food_daily_kg=565, price_food=40, area_required=10, name='Monkey') TypeError: ZooAnimals.__init__() takes 1 positional argument but 5 were given
You shouldn't define an __init__ method in a data class if you don't have any custom initialization logics. And if you do have custom initialization logics, you should define them in a __post_init__ method instead. Your code produces the error because the __init__ method of the subclass calls the __init__ method of the base class with positional arguments when the method is configured to accept keyword arguments only with your kw_only=True option. You can fix it by passing keyword arguments instead, or by simply removing the __init__ method from the subclass entirely since one would be generated for the data class with inheritance in mind already. For example, this subclass definition would work just fine: @dataclass(kw_only=True) class Cats(ZooAnimals): meowing: str Demo here Or if you would still like to define a custom __init__ method for some reason: @dataclass(kw_only=True) class Cats(ZooAnimals): meowing: str def __init__(self, food_daily_kg, price_food, area_required, meowing, name): self.meowing = meowing super().__init__( food_daily_kg=food_daily_kg, price_food=price_food, area_required=area_required, name=name) Demo here
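For completeness, if Cats did need extra initialization logic, a __post_init__ sketch would look like this (the .title() normalization is purely illustrative):

@dataclass(kw_only=True)
class Cats(ZooAnimals):
    meowing: str

    def __post_init__(self) -> None:
        # runs after the generated __init__ has assigned all fields
        self.meowing = self.meowing.title()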
1
4
78,963,338
2024-9-8
https://stackoverflow.com/questions/78963338/polars-transform-string-containing-key-values
Trying to figure out how to transform a k-v string that is inside a column where the k-v string is separated by commas, and could contain different keys. The different keys would then be transformed into their own columns, where missing values would contain nulls. For example, pl.DataFrame({ "apple": [1, 2, 3], "data": ["a=b, b=c", "a=y, y=z", "k1=v1, k2=v2"] }) would look like: pl.DataFrame({ "apple": [1, 2, 3], "a": ["b", "y", None], "b": ["c", None, None], "y": [None, "z", None], "k1": [None, None, "v1"], "k2": [None, None, "v2"], "data": ["a=b, b=c", "a=y, y=z", "k1=v1, k2=v2"] }) once transformed. Does anyone know what is the most efficient way to do this (perhaps without pre-processing of the data, if possible?)
You could attempt to reformat it as JSON objects. df.with_columns(pl.format('{"{}"}', pl.col("data").str.replace_many({"=": '":"', ", ": '","'}) )) shape: (3, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ apple ┆ data ┆ literal β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•ͺ══════════════β•ͺ═══════════════════════║ β”‚ 1 ┆ a=b, b=c ┆ {"a":"b","b":"c"} β”‚ β”‚ 2 ┆ a=y, y=z ┆ {"a":"y","y":"z"} β”‚ β”‚ 3 ┆ k1=v1, k2=v2 ┆ {"k1":"v1","k2":"v2"} β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Which would allow you to .str.json_decode() into a struct. And then .unnest() into columns. df.with_columns( pl.format('{"{}"}', pl.col("data").str.replace_many({"=": '":"', ", ": '","'}) ) .str.json_decode() .alias("json") ).unnest("json") shape: (3, 7) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ apple ┆ data ┆ a ┆ b ┆ y ┆ k1 ┆ k2 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ str ┆ str ┆ str ┆ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•ͺ══════════════β•ͺ══════β•ͺ══════β•ͺ══════β•ͺ══════β•ͺ══════║ β”‚ 1 ┆ a=b, b=c ┆ b ┆ c ┆ null ┆ null ┆ null β”‚ β”‚ 2 ┆ a=y, y=z ┆ y ┆ null ┆ z ┆ null ┆ null β”‚ β”‚ 3 ┆ k1=v1, k2=v2 ┆ null ┆ null ┆ null ┆ v1 ┆ v2 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜
2
1
78,962,922
2024-9-8
https://stackoverflow.com/questions/78962922/add-transition-argument-after-must-be-a-mapping-not-str-transitions-py
I just try make simple graph with transitions library : from transitions.extensions.diagrams import HierarchicalGraphMachine from IPython.display import display, Markdown states = ['engoff' , 'poweron' , 'engon' , 'FCCActions' #'emgstatus' , "whevent" , 'new data receive' ,{'name' : 'cores', 'final': True, 'parallel' : [{ 'name' : 'mapeng', 'children': ['maploaded', {"name": "update", "final": True}], 'initial' : 'maploaded', 'transitions': [['delay', 'maploaded', "update"]]}, { 'name' : 'EAA' , 'children': ['newdata', {"name": "done!", "final": True}], #environment analayser 'initial' : 'newdata', 'transitions': [['Analaysing', 'newdata', 'done!']] },{ 'name' : 'FAI' , 'children': ['newdata', {"name": "done!", "final": True}], 'initial' : 'newdata', 'transitions': ['CalculateandLearning', 'newdata', 'done!'] }] } ] transitions = [['flightcommand', 'engon', 'FCCActions'], ['poweroff-command', 'engon', 'engoff'], ['init', 'engoff', 'poweron'], ['engstart-command', 'poweron', 'engon'], ['startservice', 'poweron', 'cores']] m = HierarchicalGraphMachine(states=states, transitions=transitions, initial="engoff", show_conditions=True, title="Mermaid", auto_transitions=False) m.init() I just make some change in this example but I got Error : Traceback (most recent call last): File ".../3.transitions/test.py", line 29, in <module> m = HierarchicalGraphMachine(states=states, transitions=transitions, initial="engoff", show_conditions=True, File "...\Python\Python38\lib\site-packages\transitions\extensions\diagrams.py", line 137, in __init__ super(GraphMachine, self).__init__( File "...\Python\Python38\lib\site-packages\transitions\extensions\markup.py", line 61, in __init__ super(MarkupMachine, self).__init__( File "...\Python\Python38\lib\site-packages\transitions\extensions\nesting.py", line 407, in __init__ super(HierarchicalMachine, self).__init__( File "...\Python\Python38\lib\site-packages\transitions\core.py", line 601, in __init__ self.add_states(states) File "...\Python\Python38\lib\site-packages\transitions\extensions\diagrams.py", line 230, in add_states super(GraphMachine, self).add_states( File "...\Python\Python38\lib\site-packages\transitions\extensions\markup.py", line 126, in add_states super(MarkupMachine, self).add_states(states, on_enter=on_enter, on_exit=on_exit, File "...\Python\Python38\lib\site-packages\transitions\extensions\nesting.py", line 521, in add_states self._add_dict_state(state, ignore, remap, **kwargs) File "...\Python\Python38\lib\site-packages\transitions\extensions\nesting.py", line 978, in _add_dict_state self.add_states(state_children, remap=remap, **kwargs) File "...\Python\Python38\lib\site-packages\transitions\extensions\diagrams.py", line 230, in add_states super(GraphMachine, self).add_states( File "...\Python\Python38\lib\site-packages\transitions\extensions\markup.py", line 126, in add_states super(MarkupMachine, self).add_states(states, on_enter=on_enter, on_exit=on_exit, File "...\Python\Python38\lib\site-packages\transitions\extensions\nesting.py", line 521, in add_states self._add_dict_state(state, ignore, remap, **kwargs) File "...\Python\Python38\lib\site-packages\transitions\extensions\nesting.py", line 980, in _add_dict_state self.add_transitions(transitions) File "...\Python\Python38\lib\site-packages\transitions\core.py", line 1032, in add_transitions self.add_transition(**trans) TypeError: add_transition() argument after ** must be a mapping, not str how can I fix this?
This value: 'transitions': ['CalculateandLearning', 'newdata', 'done!'] should be wrapped in another list: 'transitions': [['CalculateandLearning', 'newdata', 'done!']] Full source code: from transitions.extensions.diagrams import HierarchicalGraphMachine from IPython.display import display, Markdown states = ['engoff' , 'poweron' , 'engon' , 'FCCActions' #'emgstatus' , "whevent" , 'new data receive' ,{'name' : 'cores', 'final': True, 'parallel' : [{ 'name' : 'mapeng', 'children': ['maploaded', {"name": "update", "final": True}], 'initial' : 'maploaded', 'transitions': [['delay', 'maploaded', "update"]]}, { 'name' : 'EAA' , 'children': ['newdata', {"name": "done!", "final": True}], #environment analayser 'initial' : 'newdata', 'transitions': [['Analaysing', 'newdata', 'done!']] },{ 'name' : 'FAI' , 'children': ['newdata', {"name": "done!", "final": True}], 'initial' : 'newdata', 'transitions': [['CalculateandLearning', 'newdata', 'done!']] }] } ] transitions = [['flightcommand', 'engon', 'FCCActions'], ['poweroff-command', 'engon', 'engoff'], ['init', 'engoff', 'poweron'], ['engstart-command', 'poweron', 'engon'], ['startservice', 'poweron', 'cores']] m = HierarchicalGraphMachine(states=states, transitions=transitions, initial="engoff", show_conditions=True, title="Mermaid", auto_transitions=False) m.init() I found the issue by editing the library code and adding these print functions: transitions/core.py def add_transitions(self, transitions): """Add several transitions. Args: transitions (list): A list of transitions. """ print(f'{transitions=}') # Added here. for trans in listify(transitions): print(f'{trans=}') # And here. if isinstance(trans, list): self.add_transition(*trans) else: self.add_transition(**trans) Which showed me that the error happened on this case: transitions=['CalculateandLearning', 'newdata', 'done!'] trans='CalculateandLearning'
2
2
78,962,124
2024-9-8
https://stackoverflow.com/questions/78962124/daytime-and-nightime-occurrence-duration-of-an-event
I want to find the daytime and night-time occurrence duration of an event from its start time to its end time. The event duration is volatile and can span a long time. I can't figure out a formula. I am seeking to determine the duration with a formula, VBA code, or Python code. Link to the sample file with manually calculated duration for each event. There are multiple scenarios in which the event occurs that need to be considered.
If you find it hard to calculate NightTime, I suggest you calculate DayTime first, and use TotalHourse - DayTime for night time, then you wouldn't need to worry about "crossing midnight". So, for day time duration, here is the formula (in K10) =(INT(E10)-INT(D10))*(F10-G10)-MEDIAN(MOD(D10,1),G10,F10)+MEDIAN(MOD(E10,1),G10,F10) For night time duration: =E10-D10-K10 (in L10) Finally, remember to format your cell as [h]:mm:ss: Explaining the formula: think of INT(E10)-INT(D10) as number of days between start date and end date. (including start date, but excluding end date). INT(E10)-INT(D10) * (18 -6) can be interpreted as number of dayTime hours from start date to end date. (start date inclusive, end date exclusive) To get number of hours between 12/09/2024 7:00 and 13/09/2024 16:00, we need to deduct extra hours from start date, and add extra hours to the end date. i.e we need to deduct (7:00-6:00) from 12/09, and add (16:00 -6:00) for 13/09. that is, (13/09/2024 - 12/09/2024) * (18 -6) -(7-6) + (16-6) is the number of dayTime hours between 12/09/2024 7:00 to 13/09/2024 16:00. It can be simplified to (13/09/2024 - 12/09/2024) * (18 -6) + 16 -7 If start time on start date is outside DayTime Hours, e.g for 12/09/2024 4:00 to 13/09/2024 16:00, we just treat it as ifstart date starts at 6:00. i.e (13/09/2024 - 12/09/2024) * (18 -6) - (6-6) + (16 - 6) = (13/09/2024 - 12/09/2024) * (18 -6) + 16 -6 If end time on end date is outside DayTime hours, e. for 12/09/2024 4:00 to 13/09/2024 19:00, we just treat it as if end date ends at 18:00. i.e (13/09/2024 - 12/09/2024) * (18 -6) - (6-6) + (18 - 6) = (13/09/2024 - 12/09/2024) * (18 -6) + 18 -6 Finally, 4 & 5 can be simplified with -MEDIAN(start hour,6,18)+MEDIAN(end hour, 6, 18)
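Since the question also allows Python, here is the same clipping idea as a small stand-alone sketch (the 06:00-18:00 window matches the formula above; the start/end values are the example dates used in the explanation):

from datetime import datetime, time, timedelta

DAY_START, DAY_END = time(6, 0), time(18, 0)

def daytime_duration(start: datetime, end: datetime) -> timedelta:
    total = timedelta()
    day = start.date()
    while day <= end.date():
        win_start = datetime.combine(day, DAY_START)
        win_end = datetime.combine(day, DAY_END)
        overlap = min(end, win_end) - max(start, win_start)
        if overlap > timedelta():
            total += overlap
        day += timedelta(days=1)
    return total

start, end = datetime(2024, 9, 12, 7, 0), datetime(2024, 9, 13, 16, 0)
day_part = daytime_duration(start, end)     # 21:00:00 for this example
night_part = (end - start) - day_part       # night time = total - day time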
3
6
78,953,580
2024-9-5
https://stackoverflow.com/questions/78953580/run-anthropic-api-in-parallel
I successfully ran OpenAI GPT4o in parallel with multiprocessing: def llm_query(chunk): context, query = get_prompt_synonyms() input1, output1 = get_example() response = client.chat.completions.create( model="gpt-4o", messages=[ {"role": "system", "content": context}, {"role": "user", "content": f'Input data is {input1}' + ' ' + query}, {"role": "assistant", "content": output1}, {"role": "user", "content": f'Input data is {chunk}' + ' ' + query} ], temperature=0, max_tokens=4090 ) reply = response.choices[0].message.content return reply def description_from_llm(chunk): reply = llm_query(chunk) df_synonyms_codes = get_frame_llm(reply) # reply to dataframe return df_synonyms_codes if __name__ == '__main__': # some stuff with Pool(processes=cpu_count()) as pool: freeze_support() dfs_syn = pool.map(description_from_llm, list_chunks) df_final = pd.concat(dfs_syn) pool.close() It runs (locally) really fast without any issues. However when I try to do the same thing with Anthropic Claude 3.5 (I made sure that I imported all needed updated packages and have valid key etc.): def llm_query(chunk, temperature=0, max_tokens=4096): model = "claude-3-5-sonnet-20240620" data = "Input data for analysis and enrichment: {x}".format(x=list_benefits_chunk) context, query = get_query() examples = get_few_shot_learning() messages = get_messages(context, data, query, examples) response = client.messages.create( model=model, messages=messages, temperature=temperature, max_tokens=max_tokens ) return response It doesn't work with exception: TypeError: APIStatusError.__init__() missing 2 required keyword-only arguments: 'response' and 'body' It works in loop: df_all = pd.DataFrame() for chunk in list_chunks: df= llm_query(chunk) df_all = pd.concat[df_all, df],axis=0) But too slow! Is there a way to parallelize calls to anthropic API? Or other solution that will reduce time x7 - x10 (as mp does with GPT4o)?
I usually use ThreadPoolExecutor. Minimal example from anthropic import Anthropic from concurrent.futures import ThreadPoolExecutor TEMPERATURE = 0.5 CLAUDE_SYSTEM_MESSAGE = "You are a helpful AI assistant." anthropic_client = Anthropic(api_key=ANTHROPIC_API_KEY) def call_anthropic( prompt, model_id="claude-3-haiku-20240307", temperature=TEMPERATURE, system=CLAUDE_SYSTEM_MESSAGE, ): try: message = anthropic_client.messages.create( model=model_id, temperature=temperature, max_tokens=4096, system=system, messages=[ { "role": "user", "content": prompt, } ], ) return message.content[0].text except Exception as e: print(f"Error: {e}") return None BASE_PROMPT = "What is the capital of {country}?" COUNTRIES = ["Switzerland", "Sweden", "Sri Lanka", "Spain"] prompts = [BASE_PROMPT.format(country=country) for country in COUNTRIES] with ThreadPoolExecutor(max_workers=4) as executor: responses = list(executor.map(call_anthropic, prompts)) print(responses) Output ['The capital of Switzerland is Bern.', 'The capital of Sweden is Stockholm.', 'The capital of Sri Lanka is Colombo.', 'The capital of Spain is Madrid.'] Adjust max_workers to the limits to what your tier allows to speed up parallel processing. This depends on the token count of your prompts and probably needs a little bit of experimentation in order to avoid hitting the API limits.
2
0
78,958,965
2024-9-6
https://stackoverflow.com/questions/78958965/how-do-i-read-a-struct-contents-in-a-running-process
I compiled a C binary on a linux machine and executed it, in that binary I have a struct called Location defined as follows typedef struct { size_t x; size_t y; } Location; and here is my main function int main(void) { srand(0); Location loc; while (1) { loc.x = rand()%10; loc.y = rand()%10; sleep(2); } return 0; } How do I monitor the values of x and y? There are some limitations to consider I can't modify the binary code monitoring should be done with python ASLR always enabled Things I tried Reading /proc/pid/maps location stack then reading /proc/pid/mem didn't find anything I used gdb to find the address of loc but it is outside the range of stack found in maps (most probably ASLR)
We know the location is located in the stack somewhere, open maps file calculated stack size, start and end address, then open memory file in bytes mode and read stack bytes, loop over the bytes until you find a bytes sequence that maps to a given struct. PS I have to add a third attribute to struct for sake of making the struct easier to find, will remove it in my actual code. import sys import ctypes class Location(ctypes.Structure): _fields_ = [ ("x", ctypes.c_size_t), ("y", ctypes.c_size_t), ("z", ctypes.c_size_t) ] pid = sys.argv[1] with open(f"/proc/{pid}/maps", "r") as f: lines = f.readlines() for line in lines: if "[stack]" in line: start, end = line.split()[0].split("-") start = int(start, 16) end = int(end, 16) loc_size = ctypes.sizeof(Location) with open (f"/proc/{pid}/mem", "rb") as f: f.seek(start) data = f.read(end-start) print (len(data)) for i in range(0, len(data), loc_size): chunk = data[i:i+loc_size] if len (chunk) < loc_size: continue location = Location.from_buffer_copy(chunk) if location.z == 1337: print (f"\tx: {location.x}") print (f"\ty: {location.y}") print (f"\tz: {location.z}")
4
2
78,947,404
2024-9-4
https://stackoverflow.com/questions/78947404/can-the-superres-module-in-opencv-only-be-used-in-c
I have built OpenCV4 on Windows 11. When I attempt to use the superres module by Python, I encountered an error: module 'cv2' has no attribute 'superres' import cv2 model = cv2.superres.createSuperResolution_BTVL1() I had included the superres module when building OpenCV. Cmake info: Then I found using C++ is ok. So can this module only be used in C++? #include <iostream> #include <cstring> #include "opencv2/imgproc.hpp" #include "opencv2/imgcodecs.hpp" #include "opencv2/superres.hpp" int main(int argc, char* argv[]) { // no error. cv::superres::createSuperResolution_BTVL1(); return 0; }
The Python wrappers for OpenCV are generally automatically generated (there are some hand-written ones, but that's a tiny fraction). In order for this to happen, annotations in form of macros have to present in the header file(s) of the respective module. For free-standing functions and classes this is usually CV_EXPORTS_W and friends, for class member functions some variation of CV_WRAP, and there are some more that may need to be used for function arguments. I've described a few in an answer to this question, and the OpenCV documentation contains some useful bits as well. If we examine the header of superres module, we can se no such annotations (CV_EXPORTS is just a generic export/import doohickey, no wrappers are generated there). The superres module was moved out from main to contrib during transition to 4.0, apparently due to lack of maintenance. When I look at the 3.x branch, I can't find any annotations either. I didn't dig any deeper, so who knows, it might have had something available in 2.x, but that's not really relevant today. Generally, when Python wrappers are generated, so is a function signature for the documentation. None can be seen in the documentation, so that's some more evidence to support our conclusion. Based on the above, this module's functionality is not available in the standard Python bindings for OpenCV. That does not necessarily mean it's available only from C++, since some bindings for other languages are implemented differently, but since this is an unmaintained contrib module, it's likely the case. Possible solutions: As Christoph mentioned, have a look at dnn_superres -- that has Python bindings available. Adding the annotations to superres likely wouldn't be too difficult, if you took inspiration from some other modules that have them on classes. I have no idea how easy it would be to get such a pull request accepted there, but might be worth a try, unless you want to ship your own custom build with your program. I suppose you could try to use it via ctypes, but given it's C++, it's like opening another can of worms, and portability goes outta the window. You could write a thin wrapper around the needed functionality using Boost.Python or pybind11. Both have support for numpy arrays, so it shouldn't be too hard to make it play nice with regular OpenCV Python code.
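If dnn_superres fits your use case, it already ships with Python bindings in opencv-contrib-python; a minimal sketch (the EDSR model file must be downloaded separately, so the path and scale below are placeholders):

import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")       # pretrained model file, obtained separately
sr.setModel("edsr", 4)           # algorithm name and upscaling factor
img = cv2.imread("input.png")
result = sr.upsample(img)
cv2.imwrite("output.png", result)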
2
2
78,956,628
2024-9-6
https://stackoverflow.com/questions/78956628/static-type-checking-or-ide-intelligence-support-for-a-numpy-array-matrix-shape
Is it possible to have static type checking or IDE intelligence support for a numpy array/matrix shape? For example, if I imagine something like this: A_MxN: NDArray(3,2) = ... B_NxM: NDArray(2,3) = ... Even better would be: N = 3 M = 2 A_MxN: NDArray(M,N) = ... B_NxM: NDArray(N,M) = ... And if I assign A to B, I would like to have an IDE hint during development time (not runtime), that the shapes are different. Something like: A_MxN = B_NxM Hint/Error: Declared shape 3,2 is not compatible with assigned shape 2,3 As mentioned by @simon, this seems to be possible: M = Literal[3] N = Literal[2] A_MxN: np.ndarray[tuple[M,N], np.dtype[np.int32]] But if I assign an array which does not fulfill the shape requirement, the linter does not throw an error. Does someone know if there is a typechecker like mypy or pyright which supports the feature?
It's possible to type the shape of an array, like was mentioned before. But at the moment (numpy 2.1.1), the shape-type of the ndarray is lost in most of numpys own functions. But the shape-typing support is gradually improving, and I'm actually personally involved in this. But that doesn't mean that you can't use shape-typing yet. If you write a function yourself, you can add shape-typing support to it without too much hassle. For instance, in Python 3.12 (with PEP 695 syntax) you can e.g. do: from typing import Any import numpy as np def get_shape[ShapeT: tuple[int, ...]](a: np.ndarray[ShapeT, Any]) -> ShapeT: return a.shape This will be valid in all static type-checkers with numpy>=2.1. If this current syntax is too verbose for you, you could use the lightweight optype (of which I'm the author) to make it more readable: from optype.numpy import Array, AtLeast0D def get_shape[ShapeT: AtLeast0D](a: Array[ShapeT]) -> ShapeT: return a.shape Array is a handy alias for ndarray, that uses (PEP 696) type parameter defaults. See the docs if you want to know the details.
2
4
78,960,741
2024-9-7
https://stackoverflow.com/questions/78960741/offset-points-in-matplotlib-pyplot-annotate-gives-unexpected-results
I am using the following code to generate a plot with a sine curve marked with 24 'hours' over 360 degrees. Each 'hour' is annotated, however the arrow lengths decrease (shrivel?) with use and even their direction is incorrect. The X axis spans 360 degrees whereas the Y axis spans 70 degrees. The print statement verifies that the arrows on 6 and 18 hours have the same length and are vertical, according to the offsets specified. This is not so as seen in the resulting plot: Matplotlib version = 3.9.2; Numpy version = 2.1.1 Here is the python code: #!/usr/bin/env python3 # -*- coding: utf-8 -*- import numpy as np import matplotlib.pyplot as plt arrow_style = {'arrowstyle': '-|>'} def make_plot(): fig, ax = plt.subplots() ax.axis([0,+360,-35,+35]) ax.set(xlabel = 'X (degrees)', ylabel = 'Y (degrees)', title='Vanishing arrow length example') degree = np.pi / 180.0 # degrees to radians arrow_angle = 72.0 # degrees arrow_length = 27.0 eps = 23.43927945 eps_x = np.linspace(0, 360, 200) eps_y = -eps * np.sin(2 * np.pi * eps_x / 360) ax.plot(eps_x, eps_y, 'r') hr_x = np.linspace(0, 360, 25) hr_y = -eps * np.sin(2 * np.pi * hr_x / 360) i = 0 for x in hr_x: ax.plot(hr_x[i], hr_y[i],'bo') if hr_y[i] > 0 and arrow_angle > 0: arrow_angle = -75 # degrees arrow_x = np.cos(arrow_angle*degree) * arrow_length arrow_y = np.sin(arrow_angle*degree) * arrow_length ax.annotate(str(i), xy=(hr_x[i], hr_y[i]), xytext=(arrow_x, arrow_y), \ xycoords='data', textcoords='offset points', arrowprops=arrow_style) print("arrow_x {:.2f} arrow_y {:.2f} arrow_angle {:.1f} at {} hours" \ .format(arrow_x, arrow_y, arrow_angle, i)) if hr_y[i] <= 0: arrow_angle += 3.0 if hr_y[i] > 0: arrow_angle -= 3.0 i += 1 ax.grid() ax.axhline(0.0) ax.axvline(180.0) plt.show() return fig make_plot().savefig('vanishing_arrow_length.png') I have only a few days experience with matplotlib so I guess this is a really simple user error. However the documentation is no help to me in this case.
By default the text is aligned at its bottom left corner. You might want to change this to align at its centre. ax.annotate(str(i), xy=(hr_x[i], hr_y[i]), xytext=(arrow_x, arrow_y), xycoords='data', textcoords='offset points', arrowprops=arrow_style, verticalalignment='center', horizontalalignment='center')
2
5
78,957,889
2024-9-6
https://stackoverflow.com/questions/78957889/avoiding-double-for-loops-with-polars
I am trying to use Polars to determine revenue forecast for many products. I have these product names and prices and current revenues based on these prices. Some of these products' revenues are not direct multiplication of quantity and prices but involve a complicated function (distributor percentage etc and more) so I have created a separate function for that. I want to simulate 50 different scenarios of prices and apply these to the existing product portfolio to determine range of revenues etc. How can I do this using Polars without using for loops? Specifically, I want to search for the exact product name in the column name of the price dataframe and then for each of the product names in the main dataframe create an updated price column in the main dataframe corresponding to the prices in price dataframe. This will be my first scenario. I will save this scenario as scenario1. This way I want to create as many scenarios as there are rows in the prices dataframe. How do I apply this using Polars without using for loops please? Thanks in advance. I am new to polars and haven't been successful in this without using for loops in pandas. Update: here is my main_df: main_df = pl.from_repr(""" β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ xxx ┆ price β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ═══════║ β”‚ A ┆ 100 β”‚ β”‚ B ┆ 150 β”‚ β”‚ C ┆ 200 β”‚ β”‚ D ┆ 250 β”‚ β”‚ A ┆ 230 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ """) here is my pixies_df: pixies_df = pl.from_repr(""" β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ A ┆ B ┆ C ┆ D β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════β•ͺ═════║ β”‚ 110 ┆ 160 ┆ 210 ┆ 260 β”‚ β”‚ 120 ┆ 170 ┆ 220 ┆ 270 β”‚ β”‚ 130 ┆ 180 ┆ 230 ┆ 280 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ """) here is my expected output: shape: (5, 5) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ xxx ┆ price ┆ 0 ┆ 1 ┆ 2 β”‚ β”‚ --- ┆ ----- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═══════β•ͺ═════β•ͺ═════β•ͺ═════║ β”‚ A ┆ 100 ┆ 110 ┆ 120 ┆ 130 β”‚ β”‚ B ┆ 150 ┆ 160 ┆ 170 ┆ 180 β”‚ β”‚ C ┆ 200 ┆ 210 ┆ 220 ┆ 230 β”‚ β”‚ D ┆ 250 ┆ 260 ┆ 270 ┆ 280 β”‚ β”‚ A ┆ 230 ┆ 110 ┆ 120 ┆ 130 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ num_rows = pixies_df.height def transform_column(col_name, col_values): return [f"{col_name}-{i}-{col_values[i]}" for i in range(num_rows)] transformed_data = {col: transform_column(col, pixies_df[col].to_list()) for col in pixies_df.columns} pixies_df_transformed = pl.DataFrame(transformed_data) print(pixies_df_transformed) shape: (3, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ index ┆ A ┆ B ┆ C ┆ D β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ str ┆ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═════════β•ͺ═════════β•ͺ═════════β•ͺ═════════║ β”‚ index-0-0 ┆ A-0-110 ┆ B-0-160 ┆ C-0-210 ┆ D-0-260 β”‚ β”‚ index-1-1 ┆ A-1-120 ┆ B-1-170 ┆ C-1-220 ┆ D-1-270 β”‚ β”‚ index-2-2 ┆ A-2-130 ┆ B-2-180 ┆ C-2-230 ┆ D-2-280 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ after this i want to extract the prices into main_df. 
Can you help please? Also , this is a toy example. The retail HQ mimght contain more than 100000 SKUs (products) and price scenarios can exceed thousand of rows so a cartesian product using for loops is not efficient. i am adding more details here in the main question: here i am joining the two dfs (it crashes the system for the actual rows so i took smaller size) and then i am using a for loop for applying the revenue function over each subset. I am sure there is a better polars way to do this? Thanks. '''python Get integer-like columns starting with 'column_' integer_columns = [col for col in aa.columns if col.startswith('simprice_')] # Define a function that processes each column def process_column(col): subset = aa.select(non_integer_columns + [pl.col(col)]) # Rename 'column_n' to 'underlying_price' #print(subset.columns) return subset.rename({col: 'new_price'}) # Use a list comprehension with map to apply the function to each column subsets = list(map(process_column, integer_columns)) changes=[] for subset in subsets: # Calculate values and update the subset updated_subset = revenue_function(subset) changes.append(sum(updated_subset['revenues'])) print(changes) def revenue_function(prices_df): # Extract prices prices = prices_df['price'] price_adjusted = prices * np.random.uniform(0.9, 1.1,len(prices)) # Random adjustment between 90%-110% norm_dist_factor = norm.cdf(price_adjusted / np.mean(prices)) # Normalize prices and apply CDF price_to_revenue_ratio = 1 + norm_dist_factor * np.random.uniform(0.8, 1.2) # Further randomization revenues = price_adjusted * price_to_revenue_ratio noise = np.random.normal(0, 0.05, len(prices)) # Small random noise revenues = revenues * (1 + noise) revenues = np.round(revenues) prices_df.with_columns(revenues = pl.Series(revenues)) return prices_df '''
transpose() to convert pixies_df rows to columns. join() to link it to main_df. main_df.join( pixies_df .transpose( include_header=True, header_name="xxx", column_names = list(map(str, range(pixies_df.height))) ), on="xxx" ) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ xxx ┆ price ┆ 0 ┆ 1 ┆ 2 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═══════β•ͺ═════β•ͺ═════β•ͺ═════║ β”‚ A ┆ 100 ┆ 110 ┆ 120 ┆ 130 β”‚ β”‚ B ┆ 150 ┆ 160 ┆ 170 ┆ 180 β”‚ β”‚ C ┆ 200 ┆ 210 ┆ 220 ┆ 230 β”‚ β”‚ D ┆ 250 ┆ 260 ┆ 270 ┆ 280 β”‚ β”‚ A ┆ 230 ┆ 110 ┆ 120 ┆ 130 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
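To then evaluate all the scenario columns without a Python loop, one option is to reshape once into long format and aggregate per scenario (a sketch: unpivot is called melt in older Polars, and the sum below is only a stand-in for the real revenue function):

joined = main_df.join(
    pixies_df.transpose(
        include_header=True,
        header_name="xxx",
        column_names=list(map(str, range(pixies_df.height))),
    ),
    on="xxx",
)
long = joined.unpivot(index=["xxx", "price"], variable_name="scenario", value_name="new_price")
revenues = long.group_by("scenario").agg(
    pl.col("new_price").sum().alias("revenue")   # replace with the actual revenue logic
)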
2
1
78,958,661
2024-9-6
https://stackoverflow.com/questions/78958661/merge-two-dataframes-with-only-one-column
I would like to merge two dataframes which have only one column say df1 and df2 as below. Expected output dataframe as df3. import pandas as pd data1 = [ 'A', 'B', 'C'] df1 = pd.DataFrame(data1,columns=['name']) data2 = [ 'B', 'C', 'D'] df2 = pd.DataFrame(data2,columns=['name']) data3 = [ ['A',], ['B','B'], ['C','C'], [None,'D']] df3 = pd.DataFrame(data3,columns=['name_x','name_y']) print(df1) print(df2) print(df3) Output: name 0 A 1 B 2 C name 0 B 1 C 2 D name_x name_y 0 A None 1 B B 2 C C 3 None D Should I use merge to do it or any other way?
Quick hack, you could merge passing one of the keys as Series: df1.merge(df2, left_on='name', right_on=df2['name'], how='outer').drop(columns='name') Output: name_x name_y 0 A NaN 1 B B 2 C C 3 NaN D If you don't drop you'll also get a column with the merged values: df1.merge(df2, left_on='name', right_on=df2['name'], how='outer') name name_x name_y 0 A A NaN 1 B B B 2 C C C 3 D NaN D If you had extra columns in the input, this would give you: df1.merge(df2, left_on='name', right_on=df2['name'], how='outer') name name_x col1 name_y col2 0 A A x NaN NaN 1 B B x B x 2 C C x C x 3 D NaN NaN D x generic method If you have more than two inputs to combine, you can generalize with concat: dfs = [df1, df2, df1, df2] out = pd.concat( [ d.set_axis(d['name']).add_suffix(f'_{i}') for i, d in enumerate(dfs, start=1) ], axis=1, ).reset_index(drop=True) Output: name_1 name_2 name_3 name_4 0 A NaN A NaN 1 B B B B 2 C C C C 3 NaN D NaN D
2
2
78,955,298
2024-9-6
https://stackoverflow.com/questions/78955298/python-opencv-draws-polygons-outside-of-lines
[edited] It appears there is a new bug in opencv that introduces an issue causing fillPoly's boundaries to exceed polylines's. Here is humble code to draw a red filled polygon with a blue outline. import cv2 import numpy as np def draw_polygon(points, resolution=50): # create a blank black canvas img = np.zeros((resolution, resolution, 3), dtype=np.uint8) pts = np.array(points, np.int32) pts = pts.reshape((-1, 1, 2)) # draw a filled polygon in blue cv2.fillPoly(img, [pts], (0, 0, 255)) # draw an outline in red cv2.polylines(img, [pts], True, (255, 0, 0), 1) # show the image cv2.imshow("Polygon", img) cv2.waitKey(0) cv2.destroyAllWindows() # why is the infill outside the line? if __name__ == "__main__": # 4 vertices of the quad (clockwise) quad = np.array([[[44, 27], [7, 37], [7, 19], [38, 19]]]) draw_polygon(quad) QUESTION The polygon's infill appears to bleed outside of the outline (two highlighted pixels). I'm looking for a temporary solution until this bug is addressed so the infill stays completely inside the outline. Solution has to work with concave polygons.
Where there's a will there's a way. fillPoly appears to be able to draw as a line when given only two vertices. And that line matches the edges of the previously drawn polygon. \o/ I modified my code to draw the edges as a single fillPoly call and it seems to work decently. I would still prefer if fillPoly and polyLines would match, but for now I am unblocked. import cv2 import numpy as np def draw_polygon(points, resolution=50): # create a blank black canvas img = np.zeros((resolution, resolution, 3), dtype=np.uint8) poly_pts = np.array(points, np.int32) edge_pts = np.vstack([poly_pts, poly_pts[0]]) # draw a filled polygon in blue cv2.fillPoly(img, [poly_pts], (0, 0, 255)) # draw the outlines as individual edges, # drawn in a single fillPoly call, in red e0 = edge_pts[:-1] e1 = edge_pts[1:] edge_polygons = np.hstack((e0[:,None], e1[:,None])) cv2.fillPoly(img, edge_polygons, (255, 0, 0)) # show the image cv2.imshow("Polygon", img) cv2.waitKey(0) cv2.destroyAllWindows() # better outline if __name__ == "__main__": # 4 vertices of the quad (clockwise) quad = np.array([[44, 27], [7, 37], [7, 19], [38, 19]]) draw_polygon(quad)
3
4
78,956,204
2024-9-6
https://stackoverflow.com/questions/78956204/selenium-failed-to-download-document
I am currently working on a web scraper and each time i am trying to click or try to get the href of a certain link button with it, it gives me absolutly nothing. However, I tried and I must point out that when I go to the website myself, the link which i need to click works and the data is accessible but when i'm am using my webscraper it doesn't why ? from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from time import sleep import urllib.request import os WEBSITE_URL = 'https://www.i-de.es/conexion-red-electrica/produccion-energia/mapa-capacidad-acceso' BUTTON_COOKIE_XPATH = '//*[@id="onetrust-accept-btn-handler"]' BUTTON_AVISO_XPATH = '//*[@id="MapaCapaciadaModalButton"]/span[1]' BUTTON_PDF_XPATH = '//*[@id="portlet_com_liferay_journal_content_web_portlet_JournalContentPortlet_INSTANCE_aVVDHaAKM4S6"]/div/div/div/div/div/p/a' DOWNLOAD_PATH = '/path/to/download/directory' PROFILE_PATH = 'my personal path to my chrome profile' def setup_driver(profile_path: str = None) -> webdriver.Chrome: chrome_options = Options() if profile_path: chrome_options.add_argument(f"user-data-dir={profile_path}") chrome_options.add_experimental_option("prefs", { "download.default_directory": DOWNLOAD_PATH, "download.prompt_for_download": False, "download.directory_upgrade": True, "safebrowsing.enabled": True }) driver = webdriver.Chrome(options=chrome_options) return driver def wait_and_click(driver: webdriver.Chrome, by: By, value: str): element = WebDriverWait(driver, 10).until( EC.element_to_be_clickable((by, value)) ) element.click() def get_pdf_url(driver: webdriver.Chrome) -> str: pdf_link_element = WebDriverWait(driver, 10).until( EC.presence_of_element_located((By.XPATH, BUTTON_PDF_XPATH)) ) url = pdf_link_element.get_attribute('href') if not url: raise ValueError("Failed to retrieve the PDF URL") return url def download_pdf(url: str, download_path: str) -> str: local_pdf_path = os.path.join(download_path, "downloaded_file.pdf") urllib.request.urlretrieve(url, local_pdf_path) sleep(10) if not os.path.isfile(local_pdf_path): raise FileNotFoundError("PDF file was not found after downloading") return local_pdf_path def main(): driver = setup_driver() try: driver.get(WEBSITE_URL) sleep(10) wait_and_click(driver, By.XPATH, BUTTON_COOKIE_XPATH) wait_and_click(driver, By.XPATH, BUTTON_AVISO_XPATH) pdf_url = get_pdf_url(driver) downloaded_pdf_path = download_pdf(pdf_url, DOWNLOAD_PATH) print(f"PDF downloaded to: {downloaded_pdf_path}") finally: driver.quit() if __name__ == "__main__": main() As you can see it's not a really big scraper and only want to have this one file described as 'BUTTON_PDF_XPATH'. So i tried things in order to fix it like using my chrome profile with the web scrapper which sometimes resulted in giving me the error: Err_HTTP2_Protocol_Error ,infinite loading until it timed out or in some cases it loaded the website but it could click on nothing (all the XPATH work i can assure you). I also tried to slow down the scraper with some sleep() but it resulted in just making me wait for nothing, or i even tried to directly click on it but it just keeped making me leave. Finally i wanted to try to use an argument such as :options.add_argument('--disable-http2') for the Err_HTTP2_Protocol_Error but i don't know how to use it.
You can get the pdf link from the static html, no need for selenium: import requests from bs4 import BeautifulSoup from urllib.parse import urljoin import os def extract_pdf_link(url): response = requests.get(url, headers=HEADERS) soup = BeautifulSoup(response.text, 'html.parser') pdf_url = urljoin(url, soup.select_one('a[href*=".pdf/"]').get('href')) return pdf_url def download_pdf(url, download_path): local_pdf_path = os.path.join(download_path, "downloaded_file.pdf") response = requests.get(url, headers=HEADERS) with open(local_pdf_path, 'wb') as f: f.write(response.content) return local_pdf_path WEBSITE_URL = 'https://www.i-de.es/conexion-red-electrica/produccion-energia/mapa-capacidad-acceso' DOWNLOAD_PATH = '' HEADERS = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36'} pdf_url = extract_pdf_link(WEBSITE_URL) downloaded_pdf_path = download_pdf(pdf_url, DOWNLOAD_PATH)
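If the PDF turns out to be large, a streamed download avoids holding the whole response in memory; a minimal sketch reusing the HEADERS defined above (the chunk size is an arbitrary choice):
import os
import requests

def download_pdf_streamed(url, download_path):
    local_pdf_path = os.path.join(download_path, "downloaded_file.pdf")
    with requests.get(url, headers=HEADERS, stream=True) as response:
        response.raise_for_status()
        with open(local_pdf_path, 'wb') as f:
            for chunk in response.iter_content(chunk_size=8192):
                f.write(chunk)
    return local_pdf_path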
2
2
78,958,489
2024-9-6
https://stackoverflow.com/questions/78958489/sort-a-list-of-objects-based-on-the-index-of-the-objects-property-from-another
I have a tuple, RELAY_PINS that holds GPIO pin numbers in the order that the relays are installed. RELAY_PINS is immutable, and its ordering does not change, while the order that the devices are defined in changes frequently. The MRE: from random import shuffle, randint class Device: def __init__(self, pin_number): self.pin_number = pin_number def __str__(self): return str(self.pin_number) RELAY_PINS = ( 14, 15, 18, 23, 24, 25, 1, 12, 16, 20, 21, 26, 19, 13, 6, 5 ) def MRE(): devices = [ Device(pin) for pin in RELAY_PINS ] # the ordering for the list of devices should be considered random for the sake of this question shuffle(devices) return devices My solution "works", but frankly, it's embarrassing: def main(): devices = MRE() pin_map = { pin_number : index for index, pin_number in enumerate(RELAY_PINS) } ordered_devices = [ None for _ in range(len(RELAY_PINS)) ] for device in devices: index = pin_map[device.pin_number] ordered_devices[index] = device return [ dev for dev in ordered_devices if dev is not None ] I know there is a better solution, but I can't quite wrap my head around it. What is the pythonic solution to this problem?
You can use sorted with a key function: from random import randint, shuffle class Device: def __init__(self, pin_number: int): self.pin_number = pin_number def __str__(self) -> str: return str(self.pin_number) def __repr__(self) -> str: return f'Device(pin_number={self.pin_number})' RELAY_PINS: tuple[int, ...] = (14, 15, 18, 23, 24, 25, 1, 12, 16, 20, 21, 26, 19, 13, 6, 5) def MRE() -> None: devices = [Device(pin) for pin in RELAY_PINS] devices.pop(randint(0, len(RELAY_PINS) - 1)) shuffle(devices) return devices def main() -> None: devices = MRE() ordered_devices = sorted(devices, key=lambda d: RELAY_PINS.index(d.pin_number)) print(ordered_devices) if __name__ == '__main__': main() Example Output (device with pin_number=12 randomly popped): [Device(pin_number=14), Device(pin_number=15), Device(pin_number=18), Device(pin_number=23), Device(pin_number=24), Device(pin_number=25), Device(pin_number=1), Device(pin_number=16), Device(pin_number=20), Device(pin_number=21), Device(pin_number=26), Device(pin_number=19), Device(pin_number=13), Device(pin_number=6), Device(pin_number=5)]
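If the device list grows large, the repeated RELAY_PINS.index() scans inside the key function can be replaced with an O(1) lookup table, which is essentially the pin_map from the question:
pin_order = {pin: i for i, pin in enumerate(RELAY_PINS)}
ordered_devices = sorted(devices, key=lambda d: pin_order[d.pin_number])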
2
1
78,957,769
2024-9-6
https://stackoverflow.com/questions/78957769/pandas-2-0-3-problems-keeping-format-when-file-is-saved-in-json-or-csv-format
Here is some random code. # create df import pandas as pd df2 = pd.DataFrame({'var1':['1_0','1_0','1_0','1_0','1_0'], 'var2':['X','y','a','a','a']}) df2.to_json('df2.json') # import df df2 = pd.read_json('df2.json') df2 This is the expected output: var1 var2 0 1_0 X 1 1_0 y 2 1_0 a 3 1_0 a 4 1_0 a However it generates: var1 var2 0 10 X 1 10 y 2 10 a 3 10 a 4 10 a If I modify an entry inside ['var1'] to a string, then the code it generates when df is imported is correct. Here is an example to illustrate it. I replaced one of the entries with 'hello' df2 = pd.DataFrame({'var1':['1_0','hello','1_0','1_0','1_0'], 'var2':['X','y','a','a','a']}) df2.to_json('df2.json') # import df df2 = pd.read_json('df2.json') df2 Generates this var1 var2 0 1_0 X 1 hello y 2 1_0 a 3 1_0 a 4 1_0 a Same problem is observed if file is saved in csv format and then imported. Has anyone encountered the same issue?
This is due to the fact that underscores are valid separators in python (often used as thousand separator: 1_000 is 1000). You could force the dtype upon import (or use dtype=False): df2 = pd.read_json('df2.json', dtype='str') If you want to keep dtype detection for the other columns: df2 = pd.read_json('df2.json', dtype={'var1': 'str'}) Output: var1 var2 0 1_0 X 1 1_0 y 2 1_0 a 3 1_0 a 4 1_0 a When you have a string in the json, there is no ambiguity that the values are not numbers and the conversion is not done.
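The same idea applies to the CSV round trip mentioned at the end of the question; forcing the column to string on read keeps the underscore intact (a small sketch, assuming the file was written with to_csv):
df2.to_csv('df2.csv', index=False)
df2 = pd.read_csv('df2.csv', dtype={'var1': str})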
3
4
78,955,408
2024-9-6
https://stackoverflow.com/questions/78955408/specify-attributes-in-constructor-in-python
I'm confused about the differences of the following codes: class car: def __init__(self, weight): self.weight = weight class car: def __init__(self, weight): self.weight = 0 class car: def __init__(self, weight=0): self.weight = weight class car: def __init__(self, weight=0): self.weight = 0 class car: def __init__(self, weight=0): self.weight = 1 class car: def __init__(self): self.weight = 0 My understanding it's that if I specify weight=0 in def __init__(self, weight=0), then the default attributes is 0. But it seems that I can also do the similar if I don't specify def __init__(self, weight) but add self.weight = 0 afterwards. What's the difference? What if we give two different values?
You are confusing three concepts: a function argument, a default value for that argument, and a member attribute.
Function argument
For def __init__(self, weight) you are expecting a value to be passed in when the constructor is called, and it is bound to the name "weight". It is a function argument.
Default value for function argument
For def __init__(self, weight=0) you are still expecting a value to be passed in, but it is fine if it's absent, in which case the default value 0 is simply bound to the name "weight". This is a default value for a function argument.
Member attribute
Now for the code below:
class car:
    def __init__(self, weight=0):
        self.weight = weight
It is effectively saying: pass me a value and I'll bind it to the name "weight"; otherwise I'll bind 0 to that name. Then I assign the value of "weight" to "self.weight". The two live in totally different scopes and hence are different names. It's not like C++ where you do memberwise initialization. "self.weight" is an attribute of instances of the class.
weight vs self.weight
Hopefully, with this last example, the logic is now clear enough:
class car:
    def __init__(self, weight=0):
        self.weight = 1
Here you still get a name "weight" bound to either the default or the passed-in value, but it is simply ignored, i.e. never referred to anywhere; instead, you assign 1 to the member attribute self.weight.
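A small illustrative script may make the three cases concrete (the second class name is purely for illustration):
class Car:
    def __init__(self, weight=0):
        self.weight = weight  # the argument (or its default) becomes the attribute

class StubbornCar:
    def __init__(self, weight=0):
        self.weight = 1  # the argument is accepted but never used

print(Car().weight)            # 0  -> default value used
print(Car(42).weight)          # 42 -> passed-in value wins
print(StubbornCar(42).weight)  # 1  -> attribute is hard-coded, argument ignored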
2
2
78,957,463
2024-9-6
https://stackoverflow.com/questions/78957463/convert-empty-lists-to-nulls
I have a polars DataFrame with two list columns. However one column contains empty lists and the other contains nulls. I would like consistency and convert empty lists to nulls. In [306]: df[["spcLink", "proprietors"]] Out[306]: shape: (254_654, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ spcLink ┆ proprietors β”‚ β”‚ --- ┆ --- β”‚ β”‚ list[str] ┆ list[str] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═════════════════════════════════║ β”‚ [] ┆ null β”‚ β”‚ [] ┆ null β”‚ β”‚ [] ┆ null β”‚ β”‚ [] ┆ null β”‚ β”‚ [] ┆ null β”‚ β”‚ … ┆ … β”‚ β”‚ [] ┆ ["The Steel Company of Canada … β”‚ β”‚ [] ┆ ["Philips' Gloeilampenfabrieke… β”‚ β”‚ [] ┆ ["AEG-Telefunken"] β”‚ β”‚ [] ┆ ["xxxx… β”‚ β”‚ [] ┆ ["yyyy… β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ I have attempted this: # Convert empty lists to None for col, dtype in df.schema.items(): if isinstance(dtype, pl.datatypes.List): print(col, dtype) df = df.with_columns( pl.when(pl.col(col).list.len() == 0).then(None).otherwise(pl.col(col)) ) But no change happens in the output; the empty lists remain as such and are not converted.
selectors.by_dtype to select all columns of type pl.List(pl.String). list.len() to determine if list is empty. df = pl.DataFrame({ "spcLink": [[],[]], "proprietors": [None,["xxx"]] }, schema={"spcLink": pl.List(pl.String), "proprietors": pl.List(pl.String)}) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ spcLink ┆ proprietors β”‚ β”‚ --- ┆ --- β”‚ β”‚ list[str] ┆ list[str] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═════════════║ β”‚ [] ┆ null β”‚ β”‚ [] ┆ ["xxx"] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ import polars.selectors as cs df.with_columns( pl.when( cs.by_dtype(pl.List(pl.String)).list.len() > 0 ).then( cs.by_dtype(pl.List(pl.String)) ) ) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ spcLink ┆ proprietors β”‚ β”‚ --- ┆ --- β”‚ β”‚ list[str] ┆ list[str] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═════════════║ β”‚ null ┆ null β”‚ β”‚ null ┆ ["xxx"] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
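One possible reason the loop in the question appeared to do nothing is that the unnamed when/then/otherwise expression may not have been written back under the original column name; adding an explicit alias rules that out (a hedged sketch keeping the question's per-column structure):
for col, dtype in df.schema.items():
    if isinstance(dtype, pl.List):
        df = df.with_columns(
            pl.when(pl.col(col).list.len() == 0)
            .then(None)
            .otherwise(pl.col(col))
            .alias(col)
        )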
3
2
78,953,307
2024-9-5
https://stackoverflow.com/questions/78953307/assertdataframeequal-doesnt-throw-error-with-none-dataframe-in-pyspark
When I try to assert a dataframe using the PySpark API, if a dataframe is none, I do not get the assertion error, but instead, the method returns false. Is it a bug, or should I handle my test verification differently? from pyspark.testing.utils import assertDataFrameEqual assertDataFrameEqual(spark.createDataFrame([("v1", "v3")]), None) # False from pyspark.testing.utils import assertDataFrameEqual assertDataFrameEqual(spark.createDataFrame([("v1", "v3")]), spark.createDataFrame([("v1", "v2")])) # PySparkAssertionError
When running the code you run, I do get an error for your code: [INVALID_TYPE_DF_EQUALITY_ARG] Expected type Union[DataFrame, ps.DataFrame, List[Row]] for `expected` but got type None. I do not see why this could be different for you: all versions containing this function start with the same check for None values, as found in the code: if actual is None and expected is None: return True elif actual is None: raise PySparkAssertionError( error_class="INVALID_TYPE_DF_EQUALITY_ARG", message_parameters={ "expected_type": "Union[DataFrame, ps.DataFrame, List[Row]]", "arg_name": "actual", "actual_type": None, }, ) elif expected is None: raise PySparkAssertionError( error_class="INVALID_TYPE_DF_EQUALITY_ARG", message_parameters={ "expected_type": "Union[DataFrame, ps.DataFrame, List[Row]]", "arg_name": "expected", "actual_type": None, }, ) This means that in your case it should return the error I got, getting raised by the last elif. Despite it being unclear what the reason is for your different behaviour, a valid comparison based on the types would be wrapping the None in a list. Edit: Note that the code was run on Azure Databricks DBR 14.3LTS, pyspark 3.5.0.
2
2
78,957,022
2024-9-6
https://stackoverflow.com/questions/78957022/apply-multiple-window-sizes-to-rolling-aggregation-functions-in-polars-dataframe
In a number of aggregation function, such as rolling_mean, rolling_max, rolling_min, etc, the input argument window_size is supposed to be of type int I am wondering how to efficiently compute results when having a list of window_size. Consider the following dataframe: import polars as pl pl.Config(tbl_rows=-1) df = pl.DataFrame( { "symbol": ["A", "A", "A", "A", "A", "B", "B", "B", "B"], "price": [100, 110, 105, 103, 107, 200, 190, 180, 185], } ) shape: (9, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ price β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═══════║ β”‚ A ┆ 100 β”‚ β”‚ A ┆ 110 β”‚ β”‚ A ┆ 105 β”‚ β”‚ A ┆ 103 β”‚ β”‚ A ┆ 107 β”‚ β”‚ B ┆ 200 β”‚ β”‚ B ┆ 190 β”‚ β”‚ B ┆ 180 β”‚ β”‚ B ┆ 185 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ Let's say I have a list with n elements, such as periods = [2, 3]. I am looking for a solution to compute the rolling means for all periods grouped by symbol in parallel. Speed and memory efficiency is of the essence. The result should be a tidy/long dataframe like this: shape: (18, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ price ┆ mean_period ┆ rolling_mean β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ u8 ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═══════β•ͺ═════════════β•ͺ══════════════║ β”‚ A ┆ 100 ┆ 2 ┆ null β”‚ β”‚ A ┆ 110 ┆ 2 ┆ 105.0 β”‚ β”‚ A ┆ 105 ┆ 2 ┆ 107.5 β”‚ β”‚ A ┆ 103 ┆ 2 ┆ 104.0 β”‚ β”‚ A ┆ 107 ┆ 2 ┆ 105.0 β”‚ β”‚ B ┆ 200 ┆ 2 ┆ null β”‚ β”‚ B ┆ 190 ┆ 2 ┆ 195.0 β”‚ β”‚ B ┆ 180 ┆ 2 ┆ 185.0 β”‚ β”‚ B ┆ 185 ┆ 2 ┆ 182.5 β”‚ β”‚ A ┆ 100 ┆ 3 ┆ null β”‚ β”‚ A ┆ 110 ┆ 3 ┆ null β”‚ β”‚ A ┆ 105 ┆ 3 ┆ 105.0 β”‚ β”‚ A ┆ 103 ┆ 3 ┆ 106.0 β”‚ β”‚ A ┆ 107 ┆ 3 ┆ 105.0 β”‚ β”‚ B ┆ 200 ┆ 3 ┆ null β”‚ β”‚ B ┆ 190 ┆ 3 ┆ null β”‚ β”‚ B ┆ 180 ┆ 3 ┆ 190.0 β”‚ β”‚ B ┆ 185 ┆ 3 ┆ 185.0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
You can use comprehension to generate a DataFrame for each value in periods list and then concat() DataFrames into single long DataFrame: periods = [2, 3] pl.concat( df.with_columns( mean_period = pl.lit(p), rolling_mean = pl.col.price.rolling_mean(p).over("symbol") ) for p in periods ) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ price ┆ mean_period ┆ rolling_mean β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i32 ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═══════β•ͺ═════════════β•ͺ══════════════║ β”‚ A ┆ 100 ┆ 2 ┆ null β”‚ β”‚ A ┆ 110 ┆ 2 ┆ 105.0 β”‚ β”‚ A ┆ 105 ┆ 2 ┆ 107.5 β”‚ β”‚ A ┆ 103 ┆ 2 ┆ 104.0 β”‚ β”‚ A ┆ 107 ┆ 2 ┆ 105.0 β”‚ β”‚ B ┆ 200 ┆ 2 ┆ null β”‚ β”‚ B ┆ 190 ┆ 2 ┆ 195.0 β”‚ β”‚ B ┆ 180 ┆ 2 ┆ 185.0 β”‚ β”‚ B ┆ 185 ┆ 2 ┆ 182.5 β”‚ β”‚ A ┆ 100 ┆ 3 ┆ null β”‚ β”‚ A ┆ 110 ┆ 3 ┆ null β”‚ β”‚ A ┆ 105 ┆ 3 ┆ 105.0 β”‚ β”‚ A ┆ 103 ┆ 3 ┆ 106.0 β”‚ β”‚ A ┆ 107 ┆ 3 ┆ 105.0 β”‚ β”‚ B ┆ 200 ┆ 3 ┆ null β”‚ β”‚ B ┆ 190 ┆ 3 ┆ null β”‚ β”‚ B ┆ 180 ┆ 3 ┆ 190.0 β”‚ β”‚ B ┆ 185 ┆ 3 ┆ 185.0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
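Since speed and memory matter here, the same comprehension can also be run lazily, so each branch is only materialized when the final frame is collected (behaviour is identical, only execution is deferred):
periods = [2, 3]
out = pl.concat(
    df.lazy().with_columns(
        mean_period = pl.lit(p),
        rolling_mean = pl.col("price").rolling_mean(p).over("symbol")
    )
    for p in periods
).collect()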
2
1
78,957,012
2024-9-6
https://stackoverflow.com/questions/78957012/itertools-product-in-dataframe
Inputs: arr1 = ["A","B"] arr2 = [[1,2],[3,4,5]] Expected output: short_list long_list 0 A 1 1 A 2 2 B 3 3 B 4 4 B 5 Current output: short_list long_list 0 A [1, 2] 1 A [3, 4, 5] 2 B [1, 2] 3 B [3, 4, 5] Current Code (using itertools): import pandas as pd from itertools import product def custom_product(arr1, arr2): expand_short_list = [[a1]*len(a2) for a1, a2 in zip(arr1,arr2)] return [[a1,a2] for a1, a2 in zip(sum(expand_short_list,[]),sum(arr2,[]))] arr1 = ["A","B"] arr2 = [[1,2],[3,4,5]] df2 = pd.DataFrame(data = product(arr1,arr2),columns=["short_list", "long_list"]) Alternative code using nested list comprehensions to get the desired output: import pandas as pd def custom_product(arr1, arr2): expand_short_list = [[a1]*len(a2) for a1, a2 in zip(arr1,arr2)] return [[a1,a2] for a1, a2 in zip(sum(expand_short_list,[]),sum(arr2,[]))] arr1 = ["A","B"] arr2 = [[1,2],[3,4,5]] df1 = pd.DataFrame(data = custom_product(arr1, arr2),columns=["short_list", "long_list"]) Question: I'm wondering how could I achieve the desired output using itertools?
IIUC use DataFrame contructor with DataFrame.explode: arr1 = ["A","B"] arr2 = [[1,2],[3,4,5]] df = (pd.DataFrame({'short_list':arr1, 'long_list':arr2}) .explode('long_list') .reset_index(drop=True)) print (df) short_list long_list 0 A 1 1 A 2 2 B 3 3 B 4 4 B 5 Another idea is use flattening zipped arrays to list of tuples and pass to DataFrame constructor: df = pd.DataFrame([(a, x) for a, b in zip(arr1, arr2) for x in b], columns=['short_list','long_list']) print (df) short_list long_list 0 A 1 1 A 2 2 B 3 3 B 4 4 B 5
2
2
78,956,588
2024-9-6
https://stackoverflow.com/questions/78956588/polars-selector-for-columns-of-dtype-pl-list
In the polars documentation regarding selectors, there are many examples for selecting columns based on their dtypes. I am missing pl.List How can I quickly select all columns of type pl.List within a pl.DataFrame?
At the moment it's not possible to select all list columns with selectors, but if your lists have a specific inner type, you can do it:
df.select(cs.by_dtype(pl.List(pl.Int64)))
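As a workaround when the inner types vary, the list columns can be collected from the schema and selected by name (a sketch; isinstance works here because list dtypes are instances of pl.List):
list_cols = [name for name, dtype in df.schema.items() if isinstance(dtype, pl.List)]
df.select(list_cols)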
5
3
78,955,998
2024-9-6
https://stackoverflow.com/questions/78955998/add-new-column-with-multiple-literal-values-to-polars-dataframe
Consider the following toy example: import polars as pl pl.Config(tbl_rows=-1) df = pl.DataFrame({"group": ["A", "A", "A", "B", "B"], "value": [1, 2, 3, 4, 5]}) print(df) shape: (5, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ group ┆ value β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•ͺ═══════║ β”‚ A ┆ 1 β”‚ β”‚ A ┆ 2 β”‚ β”‚ A ┆ 3 β”‚ β”‚ B ┆ 4 β”‚ β”‚ B ┆ 5 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ Further, I have a list of indicator values, such as vals=[10, 20, 30]. I am looking for an efficient way to insert each of these values in a new column called Γ¬ndicator using pl.lit() while expanding the dataframe vertically in a way all existing rows will be repeated for every new element in vals. My current solution is to insert a new column to df, append it to a list and subsequently do a pl.concat. lit_vals = [10, 20, 30] print(pl.concat([df.with_columns(indicator=pl.lit(val)) for val in lit_vals])) shape: (15, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ group ┆ value ┆ indicator β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i32 β”‚ β•žβ•β•β•β•β•β•β•β•ͺ═══════β•ͺ═══════════║ β”‚ A ┆ 1 ┆ 10 β”‚ β”‚ A ┆ 2 ┆ 10 β”‚ β”‚ A ┆ 3 ┆ 10 β”‚ β”‚ B ┆ 4 ┆ 10 β”‚ β”‚ B ┆ 5 ┆ 10 β”‚ β”‚ A ┆ 1 ┆ 20 β”‚ β”‚ A ┆ 2 ┆ 20 β”‚ β”‚ A ┆ 3 ┆ 20 β”‚ β”‚ B ┆ 4 ┆ 20 β”‚ β”‚ B ┆ 5 ┆ 20 β”‚ β”‚ A ┆ 1 ┆ 30 β”‚ β”‚ A ┆ 2 ┆ 30 β”‚ β”‚ A ┆ 3 ┆ 30 β”‚ β”‚ B ┆ 4 ┆ 30 β”‚ β”‚ B ┆ 5 ┆ 30 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ As df could potentially have quite a lot of rows and columns, I am wondering if my solution is efficient in terms of speed as well as memory allocation? Just for my understanding, if I append a new pl.DataFrame to the list, will this dataframe use additional memory or will just some new pointers be created that link to the chunks in memory which hold the data of the original df?
You could assign it as a column and .explode() df.with_columns(indicator=vals).explode("indicator") shape: (15, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ group ┆ value ┆ indicator β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•ͺ═══════β•ͺ═══════════║ β”‚ A ┆ 1 ┆ 10 β”‚ β”‚ A ┆ 1 ┆ 20 β”‚ β”‚ A ┆ 1 ┆ 30 β”‚ β”‚ A ┆ 2 ┆ 10 β”‚ β”‚ A ┆ 2 ┆ 20 β”‚ β”‚ … ┆ … ┆ … β”‚ β”‚ B ┆ 4 ┆ 20 β”‚ β”‚ B ┆ 4 ┆ 30 β”‚ β”‚ B ┆ 5 ┆ 10 β”‚ β”‚ B ┆ 5 ┆ 20 β”‚ β”‚ B ┆ 5 ┆ 30 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ To specify a dtype, you can use pl.lit() (df.with_columns(indicator=pl.lit(vals, dtype=pl.List(pl.UInt8))) .explode("indicator") ) shape: (15, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ group ┆ value ┆ indicator β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ u8 β”‚ β•žβ•β•β•β•β•β•β•β•ͺ═══════β•ͺ═══════════║ β”‚ A ┆ 1 ┆ 10 β”‚ β”‚ A ┆ 1 ┆ 20 β”‚ β”‚ A ┆ 1 ┆ 30 β”‚ β”‚ A ┆ 2 ┆ 10 β”‚ β”‚ A ┆ 2 ┆ 20 β”‚ β”‚ … ┆ … ┆ … β”‚ β”‚ B ┆ 4 ┆ 20 β”‚ β”‚ B ┆ 4 ┆ 30 β”‚ β”‚ B ┆ 5 ┆ 10 β”‚ β”‚ B ┆ 5 ┆ 20 β”‚ β”‚ B ┆ 5 ┆ 30 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
2
1
78,956,523
2024-9-6
https://stackoverflow.com/questions/78956523/valueerror-could-not-use-apoc-procedures-please-ensure-the-apoc-plugin-is-inst
I'm trying to use the Neo4jGraph class from the langchain_community.graphs module in my Python project to interact with a Neo4j database. My script here: from langchain.chains import GraphCypherQAChain from langchain_community.graphs import Neo4jGraph from langchain_openai import ChatOpenAI enhanced_graph = Neo4jGraph( url="bolt://localhost:7687", username="neo4j", password="password", enhanced_schema=True, ) print(enhanced_graph.schema) chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=enhanced_graph, verbose=True ) chain.invoke({"query": "Who is Bob?"}) Error here: ValueError: Could not use APOC procedures. Please ensure the APOC plugin is installed in Neo4j and that 'apoc.meta.data()' is allowed in Neo4j configuration neo4j.exceptions.ClientError: {code: Neo.ClientError.Procedure.ProcedureNotFound} {message: There is no procedure with the name `apoc.meta.data` registered for this database instance. Please ensure you've spelled the procedure name correctly and that the procedure is properly deployed.} How to solve the problem?
This is a known issue:
Copy the file 'apoc-5.14.0-core.jar' from /var/lib/neo4j/labs/ to /var/lib/neo4j/plugins.
Update the config file /var/lib/neo4j/conf/neo4j.conf with:
dbms.security.procedures.unrestricted=apoc.*
dbms.security.procedures.allowlist=apoc.*
Then restart Neo4j so the plugin and settings are picked up.
GitHub issue with solutions: https://github.com/langchain-ai/langchain/issues/12901
2
3
78,948,684
2024-9-4
https://stackoverflow.com/questions/78948684/why-does-the-size-of-a-python-struct-depend-on-the-endianess
Why does the size of a struct change if endianness is specified, notably even when the endianness matches the native endianness of the platform? Example: >>> import struct >>> struct.calcsize("BI") 8 >>> struct.calcsize(">BI") 5 >>> struct.calcsize("<BI") 5 >>> Why is the extra padding added?
If byte ordering is not specified then it's implicitly "@", in which case both the size and alignment are native. See the "Format Strings" section of the struct module documentation. You will note that "@" is the only format string prefix that forces alignment. The format character "I" denotes an unsigned integer, which has a size of 32 bits, and the format character "B" denotes an unsigned char (8 bits). Thus, in native mode the integer is aligned on a 32-bit boundary, which leads to the padding after the single "B" byte. When alignment does not occur, the packed bytes object is just the sum of the sizes of its constituent parts, which in this case is 1 + 4 == 5.
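A quick check of the prefixes ("=" keeps the native byte order but, like "<" and ">", uses standard sizes and no alignment):
import struct

print(struct.calcsize("@BI"))  # 8 -> native mode inserts 3 pad bytes after the B
print(struct.calcsize("=BI"))  # 5 -> no alignment, so 1 + 4
print(struct.calcsize("<BI"))  # 5
print(struct.calcsize(">BI"))  # 5
print(struct.calcsize("@IB"))  # 5 -> reordering the fields also avoids the interior padding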
3
1
78,949,086
2024-9-4
https://stackoverflow.com/questions/78949086/install-a-pre-release-version-of-python-on-m1-mac-using-conda
I would like to install python 3.13.0rc1 with conda on an M1 Mac. However, conda create fails with error message "python 3.13.0rc1** is not installable because it requires _python_rc, which does not exist (perhaps a missing channel)":
% conda search python
...
python 3.12.4 h99e199e_1 pkgs/main
python 3.12.5 h30c5eda_0_cpython conda-forge
python 3.13.0rc1 h17d3ab0_0_cp313t conda-forge
python 3.13.0rc1 h17d3ab0_1_cp313t conda-forge
python 3.13.0rc1 h17d3ab0_2_cp313t conda-forge
python 3.13.0rc1 h8754ccd_100_cp313 conda-forge
python 3.13.0rc1 h8754ccd_101_cp313 conda-forge
python 3.13.0rc1 h8754ccd_102_cp313 conda-forge
% conda create --name py python=3.13.0rc1 --channel conda-forge --override-channels
Channels:
- conda-forge
Platform: osx-arm64
Collecting package metadata (repodata.json): done
Solving environment: failed
LibMambaUnsatisfiableError: Encountered problems while solving:
- nothing provides _python_rc needed by python-3.13.0rc1-h17d3ab0_0_cp313t
Could not solve for environment specs
The following package could not be installed
└─ python 3.13.0rc1** is not installable because it requires
└─ _python_rc, which does not exist (perhaps a missing channel).
Note that installing the latest python without specifying the version (conda create --name py python) installs python 3.12.5.
See also: How to install the latest development version of Python with conda? Install python 3.12 using mamba on mac
_python_rc is published under the python_rc label on conda-forge (see here); you can specify it directly on the command line like this:
conda create --name py python=3.13.0rc1 conda-forge/label/python_rc::_python_rc --channel conda-forge --override-channels
2
3
78,955,088
2024-9-5
https://stackoverflow.com/questions/78955088/what-does-mean-to-numpy-apply-along-axis-and-how-does-it-differ-from-0
I was trying to get a good understanding of numpy apply along axis. Below is the code from the numpy documentation (https://numpy.org/doc/stable/reference/generated/numpy.apply_along_axis.html) import numpy as np def my_func(a): """Average first and last element of a 1-D array""" return (a[0] + a[-1]) * 0.5 b = np.array([[1,2,3], [4,5,6], [7,8,9]]) print(np.apply_along_axis(my_func, 0, b)) #array([4., 5., 6.]) print(np.apply_along_axis(my_func, 1, b)) #array([2., 5., 8.]) According to webpage, the above code has a similar functionality to the code below which I took from the webpage and modified it (played around with it) to understand it: arr = np.array([[1,2,3], [4,5,6], [7,8,9]]) axis = 0 def my_func(a): """Average first and last element of a 1-D array""" print(a, a[0], a[-1]) return (a[0] + a[-1]) * 0.5 out = np.empty(arr.shape[axis+1:]) Ni, Nk = arr.shape[:axis], arr.shape[axis+1:] print(Ni) for ii in np.ndindex(Ni): for kk in np.ndindex(Nk): f = my_func(arr[ii + np.s_[:,] + kk]) Nj = f.shape for jj in np.ndindex(Nj): out[ii + jj + kk] = f[jj] #The code below may help in understanding what I was trying to figure out. #print(np.shape(np.asarray(1))) #x = np.int32(1) #print(x, type(x), x.shape) I understand from the numpy documentation that scalars and arrays in numpy have the same attributes and methods. I am trying to understand the difference between '()' and 0. I understand that () is a tuple. See below. Example: In the code below, the first for-loop does not iterate but the second for-loop iterates once. I am trying to understand why. import numpy as np for i in np.ndindex(0): print(i) #does not run. for i in np.ndindex(()): print(i) #runs once In summary: Given the above context, what is the difference between () and 0?
In summary: Given the above context, what is the difference between () and 0?
The first one represents a zero dimensional array with one element. The second one represents a one dimensional array with zero elements.
A zero dimensional array always has a single element. Example:
>>> array = np.array(42)
>>> array
array(42)
Zero dimensional arrays have a shape of ().
>>> array.shape
()
Indexing into a zero dimensional array produces a scalar.
>>> array[()]
42
Zero dimensional arrays are kind of like scalars, in that both of them can only have a single element. However, they act differently in a few subtle ways. The differences between zero-dimensional arrays and scalars are out of scope of this post.
One dimensional arrays, unlike zero dimensional arrays, can contain any number of elements. For example, this one dimensional array contains zero elements:
>>> array = np.array([])
>>> array
array([], dtype=float64)
It has a shape of (0,).
>>> array.shape
(0,)
(When you provide a shape of 0 to np.ndindex(), this is implicitly converted to (0,). This is the same shape as your example.)
If you loop over each array with for i in np.ndindex(array.shape):, the loop over the array with one element will run once. The loop over the array with zero elements will run zero times.
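The behaviour observed in the question follows directly from those two shapes:
import numpy as np

print(np.array(42).shape)    # ()   -> zero dimensions, exactly one element
print(np.array([]).shape)    # (0,) -> one dimension, zero elements

print(list(np.ndindex(())))  # [()] -> one index, so the loop body runs once
print(list(np.ndindex(0)))   # []   -> no indices, so the loop body never runs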
3
4
78,954,564
2024-9-5
https://stackoverflow.com/questions/78954564/how-to-structure-python-package-to-allow-for-testing
I'm having trouble being able to get my code to be executable and testable. Here's my project's file structure: project/ β”œβ”€β”€ .pytest_cache/ β”‚ β”œβ”€β”€ src/ β”‚ β”œβ”€β”€ __init__.py | β”œβ”€β”€ .pytest_cache/ β”‚ β”œβ”€β”€ module1.py β”‚ └── module2.py β”‚ β”œβ”€β”€ tests/ | β”œβ”€β”€ .pytest_cache/ | β”œβ”€β”€ __init__.py | └── test_module.py | β”œβ”€β”€ venv/ where both __init__.py files are empty I'm using pytest and calling test_module.py by calling pytest in the cli at the root level of the project. This works when I use absolute imports in all of my modules in src: from src.module2.py import some_func, some_other_func but then executing module1.py from the cli no longer works: Traceback (most recent call last): File "C:\project\src\module1.py", line 3, in <module> from src.module2 import some_func, some_other_func, ModuleNotFoundError: No module named 'src' However, when I remove the src. from the import statements, I can execute the scripts just fine, but no longer test them. I've tried appending my src directory to the project path in the testfile: test_module.py: import sys import os sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '../src'))) Also tried this: test_module.py: import os, sys sys.path.append(os.path.join(os.getcwd(), os.path.pardir)) then calling my test in the root of my project: python -m tests.test_module and nothing happens. What can I do so executing tests & my scripts are possible? Are there any libs/packages to make this easier?
Before diving into testing, ensure your project is structured properly. Project Structure Your src/ directory should contain a subdirectory with the name of your module: super-cool-module/ β”œβ”€β”€ src/ β”‚ └── super_cool_module/ β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ submodule1.py β”‚ └── submodule2.py β”œβ”€β”€ tests/ β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ test_submodule1.py β”‚ └── test_submodule2.py β”œβ”€β”€ venv/ β”œβ”€β”€ setup.py # setuptools β”œβ”€β”€ README.md # Documentation └── .gitignore # Exclude venv, etc. Alternatively, if you prefer not to use a src/ directory: super-cool-module/ β”œβ”€β”€ super_cool_module/ β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ submodule1.py β”‚ └── submodule2.py β”œβ”€β”€ tests/ β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ test_submodule1.py β”‚ └── test_submodule2.py β”œβ”€β”€ venv/ β”œβ”€β”€ setup.py # setuptools β”œβ”€β”€ README.md # Documentation └── .gitignore # Exclude venv, etc. Testing Setup To run tests, you have a few options: Using PYTHONPATH: Run pytest with the path to your src directory: PYTHONPATH=src pytest Pytest Configuration: Alternatively, you can set up a pytest.ini configuration file: # pytest.ini [pytest] pythonpath = src Editable Mode Installation: If you’ve activated your virtual environment, install your module in editable mode: source venv/bin/activate pip install -e . Now you can run tests by simply calling: pytest This works because your module is installed in your local environment. Note: Remember to exclude the venv/ directory from version control by adding it to your .gitignore file.
2
2
78,953,917
2024-9-5
https://stackoverflow.com/questions/78953917/why-are-aes-256-cbc-results-in-php-and-python-different-when-using-the-same-keys
I'm trying to encrypt the same string in PHP and Python using AES-256-CBC with the same keys and IVs. However, the results of both languages ​​are different, even though I am using the same encryption method and the same data. In PHP, I am using openssl_encrypt, while in Python I am using pycryptodome with PKCS7 padding. Below are the two code snippets I'm using, and the results I'm getting. Here is my PHP code: <?php $plaintextstr = 'AAAAAAAAAAAAAAAA'; $encrypt_method = "AES-256-CBC"; $secret_key = "SSSSSSSSSSSS"; $secret_iv = "LLLLLLLLLLLL"; $key = substr(hash('sha256', $secret_key), 0, 32); $iv = substr(hash('sha256', $secret_iv), 0, 16); $encrypted_str = openssl_encrypt($plaintextstr, $encrypt_method, $key, 0, $iv); echo base64_encode($encrypted_str); ?> Here is my Python code: from Crypto.Cipher import AES from Crypto.Util.Padding import pad from hashlib import sha256 import base64 plaintextstr = 'AAAAAAAAAAAAAAAA' secret_key = "SSSSSSSSSSSS" secret_iv = "LLLLLLLLLLLL" # Generar key y iv key = sha256(secret_key.encode('utf-8')).digest()[:32] iv = sha256(secret_iv.encode('utf-8')).digest()[:16] # Padding padded_data = pad(plaintextstr.encode('utf-8'), AES.block_size) # Crear el cifrador cipher = AES.new(key, AES.MODE_CBC, iv) encrypted = cipher.encrypt(padded_data) # Convertir a base64 encrypted_base64 = base64.b64encode(encrypted).decode('utf-8') print(encrypted_base64) I have verified the following: The keys and the IV in both languages ​​are generated in the same way using SHA-256. I am using CBC mode and applying PKCS7 padding in both languages. Both results are Base64 encoded. Despite these steps, the results remain different. I'm not sure why this happens.
The php hash function returns a hexadecimal representation of the bytes, so cutting 16 characters of that with substr leads to "20139ebeee312271" for your IV (similar result for key). These are not true bytes, they're characters in the range [0-9a-f]. This is not what you intend to do. The Python sha256 .digest() function returns a byte string, not a hexadecimal represntation. Cutting 16 characters of that leads to 16 true bytes from the hash. This is probably what you intend to do. The results are different because you're encrypting with different keys and initialization vectors. From the documentation, PHP hash can take a third parameter, binary, which defaults to false: hash( string $algo, string $data, bool $binary = false, array $options = [] ): string binary When set to true, outputs raw binary data. false outputs lowercase hexits. Changing the PHP code to: <?php $plaintextstr = 'AAAAAAAAAAAAAAAA'; $encrypt_method = "AES-256-CBC"; $secret_key = "SSSSSSSSSSSS"; $secret_iv = "LLLLLLLLLLLL"; $key = substr(hash('sha256', $secret_key, true), 0, 32); $iv = substr(hash('sha256', $secret_iv, true), 0, 16); $encrypted_str = openssl_encrypt($plaintextstr, $encrypt_method, $key, 0, $iv); echo $encrypted_str; // Result: V7zSp9KHPke9QuPbWoUNvjCHVJ7giluD9YaOnX9E57k= yields the same result as the python code.
2
3
78,953,646
2024-9-5
https://stackoverflow.com/questions/78953646/creating-json-style-api-call-dict-from-pandas-df-data
Scenario: I have a dataframe which contains one row of data. Each column is an year and it has the relevant value. I am trying to use the data from this df to create a json style structure to pass to an API requests.post. Sample DF: +-------+-------+-------+-------+-------+-------+-------------+-------------+-------------+-------------+-------------+-------------+ | | 2020 | 2021 | 2022 | 2023 | 2024 | 2025 | 2026 | 2027 | 2028 | 2029 | 2030 | +-------+-------+-------+-------+-------+-------+-------------+-------------+-------------+-------------+-------------+-------------+ | Total | 23648 | 20062 | 20555 | 22037 | 26208 | 28224.88801 | 29975.87934 | 31049.01582 | 32170.68853 | 33190.35298 | 34031.93951 | +-------+-------+-------+-------+-------+-------+-------------+-------------+-------------+-------------+-------------+-------------+ Sample JSON style structure: parameters = { "first_Id":first_id, "version":2, "overrideData":[ { "period":2024, "TOTAL":101.64, }, { "period":2025, "TOTAL":104.20, } ] } Question: What would be the best approach to use the data from the Df to fill and expand the JSON style object? I tried the following, but this only separates two lines, one for total and one for period, which does not result in the pairings needs: parameters = {} parameters['first_Id'] = first_id parameters['version'] = 2 parameters['overrideData'] = { } parameters['overrideData']['total'] = test_input.iloc[0].tolist() parameters['overrideData']['period'] = list(test_input.columns) This results in: { "companyId": 11475, "version": 2, "overrideData": { "TOTAL": [ 23647.999999999996, 20061.999999999996, 20555, 22036.999999999996, 26207.999999999993, 28224.88800768, 29975.879336500002, 31049.015816740008, 32170.68852577, 33190.3529754, 34031.93951397 ], "period": [ 2020, 2021, 2022, 2023, 2024, 2025, 2026, 2027, 2028, 2029, 2030 ] } }
You could transpose, rename_axis, reset_index, convert to_dict as records: test_input.T.rename_axis('period').reset_index().to_dict('records') In your case: parameters['overrideData'] = (test_input.T.rename_axis('period') .reset_index().to_dict('records') ) Output: [{'period': '2020', 'Total': 23648.0}, {'period': '2021', 'Total': 20062.0}, {'period': '2022', 'Total': 20555.0}, {'period': '2023', 'Total': 22037.0}, {'period': '2024', 'Total': 26208.0}, {'period': '2025', 'Total': 28224.88801}, {'period': '2026', 'Total': 29975.87934}, {'period': '2027', 'Total': 31049.01582}, {'period': '2028', 'Total': 32170.68853}, {'period': '2029', 'Total': 33190.35298}, {'period': '2030', 'Total': 34031.93951}]
3
2
78,953,239
2024-9-5
https://stackoverflow.com/questions/78953239/minimum-periods-in-rolling-mean
Say I have: data = { 'id': ['a', 'a', 'a', 'b', 'b', 'b', 'b'], 'd': [1,2,3,0,1,2,3], 'sales': [5,1,3,4,1,2,3], } I would like to add a column with a rolling mean with window size 2, with min_periods=2, over 'id' In Polars, I can do: import polars as pl df = pl.DataFrame(data) df.with_columns(sales_rolling = pl.col('sales').rolling_mean(2).over('id')) shape: (7, 4) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ d ┆ sales ┆ sales_rolling β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═══════β•ͺ═══════════════║ β”‚ a ┆ 1 ┆ 5 ┆ null β”‚ β”‚ a ┆ 2 ┆ 1 ┆ 3.0 β”‚ β”‚ a ┆ 3 ┆ 3 ┆ 2.0 β”‚ β”‚ b ┆ 0 ┆ 4 ┆ null β”‚ β”‚ b ┆ 1 ┆ 1 ┆ 2.5 β”‚ β”‚ b ┆ 2 ┆ 2 ┆ 1.5 β”‚ β”‚ b ┆ 3 ┆ 3 ┆ 2.5 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ What's the DuckDB equivalent? I've tried import duckdb duckdb.sql(""" select *, mean(sales) over ( partition by id order by d range between 1 preceding and 0 following ) as sales_rolling from df """).sort('id', 'd') but get β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id β”‚ d β”‚ sales β”‚ sales_rolling β”‚ β”‚ varchar β”‚ int64 β”‚ int64 β”‚ double β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ a β”‚ 1 β”‚ 5 β”‚ 5.0 β”‚ β”‚ a β”‚ 2 β”‚ 1 β”‚ 3.0 β”‚ β”‚ a β”‚ 3 β”‚ 3 β”‚ 2.0 β”‚ β”‚ b β”‚ 0 β”‚ 4 β”‚ 4.0 β”‚ β”‚ b β”‚ 1 β”‚ 1 β”‚ 2.5 β”‚ β”‚ b β”‚ 2 β”‚ 2 β”‚ 1.5 β”‚ β”‚ b β”‚ 3 β”‚ 3 β”‚ 2.5 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ This is very close, but duckdb still calculates the rolling mean when there's only a single value in the window. How can I replicate the min_periods=2 (default) behaviour from Polars?
You can use case statement and count: duckdb.sql(""" from df select *, case when count(*) over rolling2 = 2 then mean(sales) over rolling2 end as sales_rolling window rolling2 as ( partition by id order by d rows between 1 preceding and current row ) """).sort('id', 'd') β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id β”‚ d β”‚ sales β”‚ sales_rolling β”‚ β”‚ varchar β”‚ int64 β”‚ int64 β”‚ double β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ a β”‚ 1 β”‚ 5 β”‚ NULL β”‚ β”‚ a β”‚ 2 β”‚ 1 β”‚ 3.0 β”‚ β”‚ a β”‚ 3 β”‚ 3 β”‚ 2.0 β”‚ β”‚ b β”‚ 0 β”‚ 4 β”‚ NULL β”‚ β”‚ b β”‚ 1 β”‚ 1 β”‚ 2.5 β”‚ β”‚ b β”‚ 2 β”‚ 2 β”‚ 1.5 β”‚ β”‚ b β”‚ 3 β”‚ 3 β”‚ 2.5 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Note I've used named window and row framing here.
4
4
78,950,277
2024-9-4
https://stackoverflow.com/questions/78950277/how-to-insert-to-a-table-with-auto-increment
I'm trying to insert new users to a table in MySQL when they register. I am using a FlaskApp on PythonAnywhere. Here is my query: INSERT INTO user_profile (email, user_name, first_foo) VALUES (%s, %s, 0); This is run from my flask_app code: def connect_db(query, params): db_connection= MySQLdb.connect("<username>.mysql.eu.pythonanywhere-services.com","<username>","<password","<db_name>", cursorclass=MySQLdb.cursors.DictCursor) cursor=db_connection.cursor() cursor.execute(query, params) result = cursor.fetchone() return result connect_db(query, (email, username,)) Here is my table structure: +--------------+-------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +--------------+-------------+------+-----+---------+----------------+ | user_id | int | NO | PRI | NULL | auto_increment | | email | varchar(50) | YES | UNI | NULL | | | user_name | varchar(15) | YES | UNI | NULL | | | first_foo | tinyint(1) | YES | | NULL | | +--------------+-------------+------+-----+---------+----------------+ Unfortunately, I keep getting: MySQLdb._exceptions.OperationalError: (1048, "Column 'user_id' cannot be null") I have tried several queries including: INSERT INTO user_profile (user_id, email, user_name, first_foo) VALUES (NULL, %s, %s, 0); INSERT INTO user_profile (user_id, email, user_name, first_foo) VALUES (DEFAULT, %s, %s, 0); INSERT INTO user_profile (user_id, email, user_name, first_foo) VALUES (0, %s, %s, 0); but all return the same error. If I run the first query in the MySQL console on Python Anywhere, the query is successful. Thanks in advance for any help.
From the OP's comment this worked: for data-modifying statements such as INSERT, UPDATE and DELETE, calling commit() after execute() is essential, otherwise the transaction is never persisted.
cursor = db_connection.cursor()
cursor.execute(query, params)
db_connection.commit()  # <- add this
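Putting it together, a hedged sketch of the helper from the question with the commit added (connection details elided as in the question; if you also want the new auto-increment user_id, cursor.lastrowid returns it after the INSERT):
def connect_db(query, params):
    db_connection = MySQLdb.connect(
        "<username>.mysql.eu.pythonanywhere-services.com", "<username>",
        "<password>", "<db_name>", cursorclass=MySQLdb.cursors.DictCursor)
    try:
        cursor = db_connection.cursor()
        cursor.execute(query, params)
        db_connection.commit()   # persist the INSERT
        return cursor.lastrowid  # auto-increment id of the new row
    finally:
        db_connection.close()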
3
1
78,951,465
2024-9-5
https://stackoverflow.com/questions/78951465/why-the-result-is-different-numpy-slicing-and-indexing
Basically I want to obtain a part of the variable "cubote". I tried two methods that should work in the same way, but it didn't. My code: import numpy as np # Create a 3x3x3 cube cubote = np.arange(27).reshape(3,3,3) # Compare the results result1 = cubote[0:2,0:2,0:2] result2 = cubote[0:2][0:2][0:2] print(result1) print("Shape of result1:", result1.shape) print(result2) print("Shape of result2:", result2.shape) OUTPUT: result1: [[[ 0 1] [ 3 4]] [[ 9 10] [12 13]]] Shape of result1: (2, 2, 2) result2: [[[ 0 1 2] [ 3 4 5] [ 6 7 8]] [[ 9 10 11] [12 13 14] [15 16 17]]] Shape of result2: (2, 3, 3) I expected the two results to be the same. I refer to the result 1 makes sense, but result 2 did not work as I expected. WHY?
The difference in the results between result1 and result2 comes down to how indexing works in NumPy. Explanation of result1: result1 = cubote[0:2, 0:2, 0:2] In this case, you are applying slicing across all three dimensions at once. This means you are extracting a sub-cube with indices in the ranges [0:2] in all three dimensions, which gives you a 2x2x2 cube. Explanation of result2: result2 = cubote[0:2][0:2][0:2] Here, you're doing something different. Let’s break it down: 1.cubote[0:2] extracts the first two "layers" (2D arrays) from the cube, so the result of this step is: array([[[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8]], [[ 9, 10, 11], [12, 13, 14], [15, 16, 17]]]) This is a 2x3x3 array. Then, [0:2] is applied to this result, which further extracts the first two rows (along the first axis) of the new array. But since the array is already 2x3x3, this doesn’t reduce the size, and you still have a 2x3x3 array. Finally, [0:2] is applied again, but since you're now operating on the first axis of the current 2D slices (each of shape 3x3), it doesn't behave as expected because you've already "peeled off" dimensions. Key Difference: Slicing across all dimensions at once (as in result1) extracts the correct sub-cube. Chained indexing (as in result2) performs indexing sequentially, and since each index slice operates on the current result rather than on all dimensions at once, you don't get the same shape or result. You should use slicing in all dimensions simultaneously, like in result1, to obtain the expected 2x2x2 sub-cube. cubote[0:2, 0:2, 0:2]: Extracts a sub-cube across all three dimensions. cubote[0:2][0:2][0:2]: Extracts slices sequentially, leading to an unexpected result. To get consistent results, always apply slicing across all dimensions at once if you are trying to extract a subarray.
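To make the chain concrete: each [0:2] in cubote[0:2][0:2][0:2] slices along axis 0 of whatever the previous step returned, so the last two axes are never touched:
step1 = cubote[0:2]  # shape (2, 3, 3) -> slices axis 0 of the (3, 3, 3) cube
step2 = step1[0:2]   # shape (2, 3, 3) -> axis 0 already has length 2, nothing changes
step3 = step2[0:2]   # shape (2, 3, 3) -> same again; compare with cubote[0:2, 0:2, 0:2]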
2
5
78,950,848
2024-9-4
https://stackoverflow.com/questions/78950848/get-size-of-png-from-bytes
I am trying to extract the size of an PNG image from a datastream Consider the starting data of the stream 137 80 78 71 13 10 26 10 0 0 0 13 73 72 68 82 0 0 2 84 0 0 3 74 8 2 0 0 0 195 81 71 33 0 0 0 ... ^ ^ ^ ^ ^ ^ which contains the following information signature: 137 80 78 71 13 10 26 10 IHDR chunk of: length 0 0 0 13 type 73 72 68 82 data 0 0 2 84 0 0 3 74 8 2 0 0 0 crc: 195 81 71 33 then a new chunk start. The information about the size of the image are encoded in the 8 bytes of the data chunk: width 0 0 2 84 or in bytes b'\x00\x00\x02T' height 0 0 3 74 or in bytes b'\x00\x00\x03J'. I know that the image has a width of 596 px and a height of 842 px but I cannot figure out how to compute the actual size of the image. PS the values are given in Python and the here the datastream in binary form b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x02T\x00\x00\x03J\x08\x02\x00\x00\x00\xc3QG!\x00\x00\x00\tpHY'
You can think of each byte as a base-256 digit of the respective dimension. So 0 * 256^3 + 0 * 256^2 + 2 * 256 + 84 = 596, and 0 * 256^3 + 0 * 256^2 + 3 * 256 + 74 = 842. The next two bytes are important as well, where 8 is the bit depth, and 2 is the color type. 8 means 8 bits per component, and 2 means three components per pixel: red, green, and blue.
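In code, both dimensions can be read straight out of the IHDR data (bytes 16 to 24 of the stream, two big-endian unsigned 32-bit integers), using the datastream from the question:
import struct

data = b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x02T\x00\x00\x03J\x08\x02\x00\x00\x00\xc3QG!\x00\x00\x00\tpHY'
width, height = struct.unpack(">II", data[16:24])
print(width, height)  # 596 842

# equivalently, without struct:
width = int.from_bytes(data[16:20], "big")
height = int.from_bytes(data[20:24], "big")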
2
1
78,950,899
2024-9-4
https://stackoverflow.com/questions/78950899/getting-the-group-key-when-using-group-by-applylist
This is the first time I'm working with Pandas so I'm completely new to this. I was able to group an instance list per account. Now while iterating into that list I would need the account number (group key) to be able to do something with it. This is an example of the csv file: enter image description here #Using Pandas df = pd.read_csv(os.path.join(__location__, 'instances.csv')) df_group = df.groupby('account')['instance-id'].apply(list) print(df_group.groups.keys()) for account in df_group: #Initialize Prettytable t = PrettyTable(['Instance ID', 'Instance Name', 'AMI-ID', 'AMI Name']) #Initialize the EC2 client to assume role sts = boto3.client('sts') assume_role = sts.get_caller_identity().get('Arn').split('/')[1] assumed_role=sts.assume_role( RoleArn="arn:aws:iam::" + str(account[0]) + ":role/" + assume_role, RoleSessionName= 'Temporary_Session' ) I've read about getting the key using groups.key() but it seems that when it has been already converted to a list is not possible to do it that way as I get the following error when trying. line 9, in <module> print(df_group.groups.keys()) line 6299, in __getattr__return object.__getattribute__(self, name) AttributeError: 'Series' object has no attribute 'groups'. Did you mean: 'groupby'? What I'm trying to do at the end is pulling the account number to be able to use sts to assume another role. Reproducible input: import pandas as pd data = { 'account': [ '111111111111111', '111111111111111', '111111111111111', '111111111111111', '111111111111111', '111111111111111', '222222222222222', '222222222222222', '222222222222222', '222222222222222', '222222222222222', '222222222222222' ], 'instance-id': [ 'i-124f1c3c401ijk3c4', 'i-124f1c3c401ijk3c4', 'i-124f1c3c401ijk3c4', 'i-124f1c3c401ijk3c7', 'i-124f1c3c401ijk3c8', 'i-124f1c3c401ijk3c9', 'i-124f1c3c401ijk3c176', 'i-124f1c3c401ijk3c177', 'i-124f1c3c401ijk3c178', 'i-124f1c3c401ijk3c179', 'i-124f1c3c401ijk3c180', 'i-124f1c3c401ijk3c182' ] } df = pd.DataFrame(data)
You should use Series.items: for group, account in df_group.items(): print(f'{group=}') print(account) Or, maybe better, don't aggregate and loop over the GroupBy object: for group, account in df.groupby('account')['instance-id']: print(f'{group=}') print(list(account)) Output: group='111111111111111' ['i-124f1c3c401ijk3c4', 'i-124f1c3c401ijk3c4', 'i-124f1c3c401ijk3c4', 'i-124f1c3c401ijk3c7', 'i-124f1c3c401ijk3c8', 'i-124f1c3c401ijk3c9'] group='222222222222222' ['i-124f1c3c401ijk3c176', 'i-124f1c3c401ijk3c177', 'i-124f1c3c401ijk3c178', 'i-124f1c3c401ijk3c179', 'i-124f1c3c401ijk3c180', 'i-124f1c3c401ijk3c182']
2
1
78,950,667
2024-9-4
https://stackoverflow.com/questions/78950667/group-elements-in-dataframe-and-show-them-in-chronological-order
Consider the following dataframe, where Date is in the format DD-MM-YYY: Date Time Table 01-10-2000 13:00:03 B 01-10-2000 13:00:04 A 01-10-2000 13:00:05 B 01-10-2000 13:00:06 A 01-10-2000 13:00:07 B 01-10-2000 13:00:08 A How can I 1) group the observations by Table, 2) sort the rows according to Date and Time within each group, 3) show the groups in chronological order according to Date and Time of their first observation? Date Time Table 01-10-2000 13:00:03 B 01-10-2000 13:00:05 B 01-10-2000 13:00:07 B 01-10-2000 13:00:04 A 01-10-2000 13:00:06 A 01-10-2000 13:00:08 A Input data: data = { 'Date': ['01-10-2000', '01-10-2000', '01-10-2000', '01-10-2000', '01-10-2000', '01-10-2000'], 'Time': ['13:00:03', '13:00:04', '13:00:05', '13:00:06', '13:00:07', '13:00:08'], 'Table': ['B', 'A', 'B', 'A', 'B', 'A'] } df = pd.DataFrame(data)
Use groupby.transform and numpy.lexsort: date = pd.to_datetime(df['Date']+' '+df['Time']) out = df.iloc[np.lexsort([ date, df['Table'], date.groupby(df['Table']).transform('min') ])] Alternatively, using an intermediate column: date = pd.to_datetime(df['Date']+' '+df['Time']) out = (df.assign(date=date, min_date=date.groupby(df['Table']).transform('min')) .sort_values(by=['min_date', 'Table', 'date']) .drop(columns=['date', 'min_date']) ) Output: Date Time Table 0 01-10-2000 13:00:03 B 2 01-10-2000 13:00:05 B 4 01-10-2000 13:00:07 B 1 01-10-2000 13:00:04 A 3 01-10-2000 13:00:06 A 5 01-10-2000 13:00:08 A
2
4
78,950,432
2024-9-4
https://stackoverflow.com/questions/78950432/is-there-a-scenario-where-foo-in-listbar-cannot-be-replaced-by-foo-in-bar
I'm digging into a codebase containing thousands of occurrences of foo in list(bar), e.g.: as a boolean expression: if foo in list(bar) or ...: ... in a for loop: for foo in list(bar): ... in a generator expression: ",".join(str(foo) for foo in list(bar)) Is there a scenario (like a given version of Python, a known behavior with a type checker, etc.) where foo in list(bar) is not just a memory-expensive version of foo in bar? What am I missing here?
I've sometimes done/seen that when bar got modified in the loop, e.g.: bar = {1, 2, 3} for foo in list(bar): bar.add(foo + 1) With your replacement, that raises RuntimeError: Set changed size during iteration. Attempt This Online! An example from Python's standard library for k in list(_config_vars): if k.startswith(_INITPRE): del _config_vars[k] Dozens more (many done for the above reason, though not all).
8
11
78,950,520
2024-9-4
https://stackoverflow.com/questions/78950520/use-format-specifier-to-convert-float-int-column-in-polars-dataframe-to-string
I have this code: import polars as pl df = pl.DataFrame({'size': [34.2399, 1232.22, -479.1]}) df.with_columns(pl.format('{:,.2f}', pl.col('size'))) But is fails: ValueError - Traceback, line 3 2 df = pl.DataFrame({'size': [34.2399, 1232.22, -479.1]}) ----> 3 df.with_columns(pl.format('{:,.2f}', pl.col('size'))) File polars\functions\as_datatype.py:718, in format(f_string, *args) 717 msg = "number of placeholders should equal the number of arguments" --> 718 raise ValueError(msg) ValueError: number of placeholders should equal the number of arguments How can I format a float or int column using a format specifier like '{:,.2f}'?
As outlined by @mozway, general format strings are not yet supported as part of pl.format. The corresponding feature request already contains a nice polars implementation of (the most common) C-style sprintf formatting. If efficiency is not too much of an issue (e.g. in exploratory data analysis), you can simply use pl.Expr.map_elements and fall back to the naive python solution. df.with_columns( pl.col("size").map_elements(lambda x: f"{x:,.2f}", return_dtype=pl.String) ) shape: (3, 1) ┌──────────┐ │ size │ │ --- │ │ str │ ╞══════════╡ │ 34.24 │ │ 1,232.22 │ │ -479.10 │ └──────────┘
3
3
78,950,364
2024-9-4
https://stackoverflow.com/questions/78950364/abstract-base-class-property-setter-absence-not-preventing-class-instantiation
I'm trying to get abstract properties to work, enforcing property getter & setter definitions in downstream classes. from abc import ABC, abstractmethod class BaseABC(ABC): @property @abstractmethod def x(self): pass @x.setter @abstractmethod def x(self, value): pass class MyClass(BaseABC): def __init__(self, value): self._x = value @property def x(self): return self._x # @x.setter # def x(self, val): # self._x = val obj = MyClass(10) print(obj.x) obj.x = 20 print(obj.x) Having read the documentation it seems to indicate the above should trigger a TypeError, when the class is being build, but it only triggers an AttributeError once the attribute is being set. Why does the absent setter, explicitly defined in BaseABC through an @abstractmethod, not trigger the expected TypeError? How does one ensure a setter is required in the daughter class?
TL;DR property doesn't just override the getter; it overrides the setter with None as well. In MyClass, property creates a brand new property with the given getter and no setter; it doesn't simply override the getter of the inherited property. The definition of MyClass.x is equivalent to def x_getter(self): return self._x x = property(x_getter, None) and that None as a setter is "good enough" as far as ABC is concerned with respect to overriding the abstract setter. To "inherit" the abstract setter as well, use class MyClass(BaseABC): @BaseABC.x.getter def x(self): return self._x This creates a new property not from scratch, but from the existing BaseABC.x property, using its (abstract) setter but a new getter (just like x.setter before created a new property using the old getter but a new setter). To make MyClass instantiable, you still need to provide a concrete setter using x.setter. Unfortunately, nothing forces you to use BaseABC.x.getter in place of property. ABC only cares that x gets set to something appropriate. This is a general problem with ABC, not properties in particular. There is only so much you can do at the library level; abstract base classes are not a feature of Python itself.
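Putting the pieces together, a minimal runnable sketch of the fix described in this answer (reusing the asker's BaseABC unchanged) could look like this:
from abc import ABC, abstractmethod

class BaseABC(ABC):
    @property
    @abstractmethod
    def x(self):
        pass

    @x.setter
    @abstractmethod
    def x(self, value):
        pass

class MyClass(BaseABC):
    def __init__(self, value):
        self._x = value

    @BaseABC.x.getter   # build on the inherited property, replacing only the getter
    def x(self):
        return self._x

    @x.setter           # and now supply the concrete setter as well
    def x(self, value):
        self._x = value

obj = MyClass(10)
obj.x = 20
print(obj.x)  # 20 -- leaving out the setter would keep MyClass abstract and raise TypeError on instantiation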
2
1
78,948,241
2024-9-4
https://stackoverflow.com/questions/78948241/os-environ-and-os-getenv-interact-strangely-in-a-unittest
I have a python class class EnvironmentParser: def __init__(self): self.A = os.getenv('A', 'a') + ".json" self.B = os.getenv('B', 'b') + ".json" The purpose of this class is to have some default file identifiers (eg. a.json and b.json) but if the need arises, this should be changeable at runtime by running the python script with some set environment variables (the actual keys are different but I don't want to write production code here). In another class, an instance of EnvironmentParser is passed as a constructor argument, and these file identifiers are read off from the instance variables. I have tried to unit test this as follows: os.environ['A'] = 'herp' os.environ['B'] = 'derp' path = Path("some path here") environment = EnvironmentParser() folder = AdviceFolder(path, environment) self.assertEqual(folder.file_ids['A'], 'herp.json') self.assertEqual(folder.file_ids['B'], 'derp.json') where folder.file_ids is a dictionary {'A': environment.A, 'B': environment.B} However the asserts fail, apparantly, folder.file_ids['A'] is 'a.json' as if the os.environ lines weren't there. I am surprised because as far as I am aware, os.getenv reads from os.environ, so the execution order should be os.environ['A'] and os.environ['B'] are set to 'herp' and 'derp' respectively; the EnvironmentParser class gets instantiated, so upon instantiation it asks the 'A' and 'B' keys from os.environ, hence these values should be 'herp' and 'derp' respectively. The AdviceFolder class is instantiated with the 'environment' variable pointing to the just instantiated object of EnvironmentParser, which should thus have environment.A == 'herp' and environment.B = 'derp'. The assert should succeed. But evidently, this goes wrong somewhere and I can't point out where. At any rate, if I want to have unit tests for both default values for getenv as well as manually set values, how can I do them at the same time? I could run the test again with externally set env vars, but then one of the two tests would always fail. Reproducible example: Create two python files: example.py ----------------------- import os class EnvironmentParser: def __init__(self): self.A = os.getenv('A', 'a') + ".json" self.B = os.getenv('B', 'b') + ".json" class Example: def __init__(self, environment: EnvironmentParser): self.map = {'A': environment.A, 'B': environment.B} test_example.py ----------------------- import unittest import os from example import EnvironmentParser, Example class TestExample(unittest.TestCase): def test_example_with_default_values(self): environment = EnvironmentParser() example = Example(environment) self.assertEqual(example.map['A'], 'a.json') self.assertEqual(example.map['B'], 'b.json') def test_example_with_custom_values(self): os.environ['A'] = 'herp' os.environ['B'] = 'derp' environment = EnvironmentParser() example = Example(environment) self.assertEqual(example.map['A'], 'herp.json') self.assertEqual(example.map['B'], 'derp.json') if __name__ == '__main__': unittest.main() Actually, I was wrong before. It is the first test method that fails because for some reason the values A = 'herp' and B = 'derp' are already set even in the first test method. Nonetheless, the problem exists that I can't seem to be able to simultaneously test default and nondefault values. I guess I can del from os.environ, but surely there is a better way?
What is going on here is that unit tests are run in lexicographic order. This means that even though test_example_with_custom_values() is defined after test_example_with_default_values(), it's run before it and the environment variables are set. One way to manage this would be to use the approaches suggested in the link above, e.g. rename the methods test_1() and test_2(), or change the unittest.TestLoader.sortTestMethodsUsing function to one that will sort meaningful names in your desired order. However, in this case, I think it is preferable to not depend on the order and instead not leave environment variables set after the method which changes them, by using the unittest.mock.patch() decorator: patch() acts as a function decorator, class decorator or a context manager. Inside the body of the function or with statement, the target is patched with a new object. When the function/with statement exits the patch is undone. So your tests become: import unittest import os from example import EnvironmentParser, Example from unittest.mock import patch class TestExample(unittest.TestCase): def test_example_with_default_values(self): environment = EnvironmentParser() example = Example(environment) self.assertEqual(example.map['A'], 'a.json') self.assertEqual(example.map['B'], 'b.json') @patch.dict(os.environ, {'A': 'herp', 'B': 'derp'}) def test_example_with_custom_values(self): environment = EnvironmentParser() example = Example(environment) self.assertEqual(example.map['A'], 'herp.json') self.assertEqual(example.map['B'], 'derp.json') if __name__ == '__main__': unittest.main() This should run without errors now: $ python test_example.py .. ---------------------------------------------------------------------- Ran 2 tests in 0.000s OK If you like you can also add the @patch.dict(os.environ, {}, clear=True) decorator to test_example_with_default_values() to ensure it is run in a context with all environment variables cleared, though this isn't necessary.
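For reference, the clear=True variant mentioned in the last sentence is just an extra decorator on the first test; a small sketch with the body unchanged from the test shown above:
@patch.dict(os.environ, {}, clear=True)
def test_example_with_default_values(self):
    environment = EnvironmentParser()
    example = Example(environment)
    self.assertEqual(example.map['A'], 'a.json')
    self.assertEqual(example.map['B'], 'b.json')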
2
4
78,950,176
2024-9-4
https://stackoverflow.com/questions/78950176/failed-to-produce-a-json-response-containing-a-phone-number-based-on-a-license-n
I've created a script to fetch a phone number based on a license number from this webpage, using Python with the requests module. The script is supposed to produce a JSON response containing the phone number I'm interested in. When I manually input this license number 354206 in the search box and hit the search button, it produces a result with a phone number in it. I'm trying to create the script so that it will produce the same result in response. However, I get the following response instead. 200 {'event': {'descriptor': 'markup://aura:invalidSession', 'attributes': {'values': {}}, 'eventDef': {'descriptor': 'markup://aura:invalidSession', 't': 'APPLICATION', 'xs': 'I', 'a': {'newToken': ['newToken', 'aura://String', 'I', False]}}}, 'exceptionMessage': 'Guest user access is not allowed', 'exceptionEvent': True} This is the script I'm using to try to get the desired results: import requests import json link = 'https://azroc.my.site.com/AZRoc/s/sfsites/aura' headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36', 'Referer': 'https://azroc.my.site.com/AZRoc/s/contractor-search', 'Origin': 'https://azroc.my.site.com', 'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate, br, zstd', 'Accept-Language': 'en-US,en;q=0.9', 'X-Sfdc-Page-Scope-Id': 'dd1f6a24-3e7b-41e9-8fcb-f0bda9979cc2', 'X-Sfdc-Request-Id': '13815039000012b044', } params = { 'r': '5', 'other.ARCP_ContractorSearch.getRecords': '1' } payload = { 'message': '{"actions":[{"id":"176;a","descriptor":"apex://ARCP_ContractorSearch/ACTION$getRecords","callingDescriptor":"markup://c:ARCP_ContractorSearch","params":{"searchKey":"354206","classification":null,"city":""}}]}', 'aura.context': '{"mode":"PROD","fwuid":"eGx3MHlRT1lEMUpQaWVxbGRUM1h0Z2hZX25NdHFVdGpDN3BnWlROY1ZGT3cyNTAuOC40LTYuNC41","app":"siteforce:communityApp","loaded":{"APPLICATION@markup://siteforce:communityApp":"wi0I2YUoyrm6Lo80fhxdzA","MODULE@markup://lightning:f6Controller":"5PtsAUCMnPdpZDcNTHXtbg","COMPONENT@markup://instrumentation:o11ySecondaryLoader":"1JitVv-ZC5qlK6HkuofJqQ"},"dn":[],"globals":{},"uad":false}', 'aura.pageURI': '/AZRoc/s/contractor-search', 'aura.token': 'null', } with requests.Session() as session: session.headers.update(headers) res = session.post(link,params=params,data=json.dumps(payload)) print(res.status_code) print(res.json()) How can I get the phone number from that webpage using the license number with the requests module?
You can try: import json import requests url = "https://azroc.my.site.com/AZRoc/s/sfsites/aura?r=5&other.ARCP_ContractorSearch.getRecords=1" message = { "actions": [ { "id": "176;a", "descriptor": "apex://ARCP_ContractorSearch/ACTION$getRecords", "callingDescriptor": "markup://c:ARCP_ContractorSearch", "params": {"searchKey": "<ID>", "classification": None, "city": ""}, } ] } data = { "message": None, "aura.context": r'{"mode":"PROD","fwuid":"eGx3MHlRT1lEMUpQaWVxbGRUM1h0Z2hZX25NdHFVdGpDN3BnWlROY1ZGT3cyNTAuOC40LTYuNC41","app":"siteforce:communityApp","loaded":{"APPLICATION@markup://siteforce:communityApp":"wi0I2YUoyrm6Lo80fhxdzA","MODULE@markup://lightning:f6Controller":"5PtsAUCMnPdpZDcNTHXtbg","COMPONENT@markup://instrumentation:o11ySecondaryLoader":"1JitVv-ZC5qlK6HkuofJqQ"},"dn":[],"globals":{},"uad":false}', "aura.pageURI": "/AZRoc/s/contractor-search", "aura.token": "null", } to_search = ["354206", "354207"] for s in to_search: message["actions"][0]["params"]["searchKey"] = s data["message"] = json.dumps(message) r = requests.post(url, data=data).json() print(s, r["actions"][0]["returnValue"][0]["phone"]) Prints (edited to not show full number): 354206 9165951XXX 354207 (520) 391-0XXX
2
2
78,950,075
2024-9-4
https://stackoverflow.com/questions/78950075/pd-pivot-table-lambda-function-to-join-column-values-with-exceptions-not-working
I am currently working with a dataframe looking at Kentucky oil wells with pandas and want to create a pivot table using an API identifier. Since there are various duplicates, I also wanted to join non-unique values. Below is an example of the dataframe: import pandas as pd df = pd.DataFrame({'API': ['16101030580000', '16101030580000', '16129057600000','16013006300000'], 'Date': ['0000/00/00','6/15/2007', '5/25/2020', '7/31/2014'], 'Annual_Oil':[300,'nan',150, 360], 'State':['KY','None', 'None', 'KY']}) Additionally, I created a list of values that I did not want to join. However, when running the code, I get some values in the dataframe that should not be there. list_none = ['none', 'nan', 'NAN','None', '0000/00/00','000'] df1 = pd.pivot_table(df, index = 'API', aggfunc = lambda x: (','.join(x.unique().astype(str)) if x not in list_none else x), sort = False) The output for this example dataframe looks like Date Annual_Oil State API 16101030580000 0000/00/00,6/15/2007 300,nan KY,None 16129057600000 5/25/2020 150 None 16013006300000 7/31/2014 360 KY Is there a way to restructure the lambda function within the pivot table or would I have to get rid of the unwanted joins manually?
You should filter the values within the join: list_none = ['none', 'nan', 'NAN', 'None', '0000/00/00', '000'] df1 = pd.pivot_table( df, index='API', aggfunc=lambda x: ','.join( i for i in x.unique().astype(str) if i not in list_none ), sort=False, ) Alternative using a custom function: def cust_join(x): x = x.dropna().astype(str) return ','.join(x[~x.isin(list_none)].unique()) df1 = pd.pivot_table( df, index='API', aggfunc=cust_join, sort=False, ) Output: Date Annual_Oil State API 16101030580000 6/15/2007 300 KY 16129057600000 5/25/2020 150 16013006300000 7/31/2014 360 KY
2
1
78,949,093
2024-9-4
https://stackoverflow.com/questions/78949093/how-to-resolve-attributeerror-module-fiona-has-no-attribute-path
I have a piece of code that was working fine until last week, but now it's failing with the following error: AttributeError: module 'fiona' has no attribute 'path' I’ve ensured that all the necessary libraries are installed and imported. Does anyone have any ideas on what might be going wrong or how I can resolve this issue? Thanks! pip install geopandas pip install fiona import geopandas as gpd import fiona countries = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres"))
TL;DR update to geopandas==0.14.4 OR pin fiona to version 1.9.6 -- It seems fiona recently upgraded to 1.10.0 (as of 2024-09-04 01:14 UTC) and that may have broken some older versions of geopandas, which only depend on fiona being higher than some version, not lower than. Upon closer look, geopandas up to version 0.14.3 still calls fiona.path, but in version 0.14.4 it no longer does. So upgrading geopandas to 0.14.4 should fix it. Alternatively, forcing fiona to stay on version 1.9.6 should also work. NOTE: upgrading geopandas to >=1.0 seems to remove fiona as a dependency altogether, so it will also solve this issue. But it opens up a whole new can of worms by removing geopandas.dataset. For details on that one, see How to get maps to geopandas after datasets are removed?
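In practice, either fix is a one-liner, using the version numbers given above:
pip install --upgrade "geopandas==0.14.4"
# or, if you need to stay on the older geopandas, pin fiona instead:
pip install "fiona==1.9.6"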
12
23
78,949,414
2024-9-4
https://stackoverflow.com/questions/78949414/consecutive-count-of-binary-column-by-group
I am attempting to create a 'counter' of consecutive binary values = 1, resetting when the binary value = 0, for each group. Example of data: data = {'city_id': [1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 6], 'week': [1, 2, 3, 4, 5, 6, 7, 1, 2, 3, 4, 5, 6, 7, 1, 2, 3, 4, 5, 6, 7, 1, 2, 3, 4, 5, 6, 7, 1, 2, 3, 4, 5, 6, 7, 1, 2, 3, 4, 5, 6, 7], 'binary': [0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1]} df = pd.DataFrame(data) For each id, the first binary = 1 should begin with a consecutive_count = 1 rather than 0. And this should reset each time binary = 0, along with each time we move on to a new id. I have already created a solution that does this. It looks like this: df['consecutive'] = 0 for city in df['city_id'].unique(): city_df = df[df['city_id'] == city] consecutive_count = 0 for i in range(len(city_df)): if city_df['binary'].iloc[i] == 1: consecutive_count += 1 else: consecutive_count = 0 df.loc[(df['city_id'] == city) & (df['week'] == city_df['week'].iloc[i]), 'consecutive'] = consecutive_count The main issue is that my solution is extremely inefficient for large data. I have a large set of ids, ~2.5M, and this solution either times out or runs for hours and hours, so I am struggling in making this more efficient. TIA.
The issue with your approach is that you're repeatedly slicing. You should use the builtin groupby functions for efficiency. You can form a custom group with groupby.cumsum to reset the groups on 0s, then use this to compute the consecutive counts: df['consecutive'] = df.groupby( ['city_id', df['binary'].eq(0).groupby(df['city_id']).cumsum()] )['binary'].cumsum() Output: city_id week binary consecutive 0 1 1 0 0 1 1 2 1 1 2 1 3 1 2 3 1 4 1 3 4 1 5 0 0 5 1 6 0 0 6 1 7 0 0 7 2 1 0 0 8 2 2 1 1 9 2 3 1 2 10 2 4 0 0 11 2 5 1 1 12 2 6 0 0 13 2 7 1 1 14 3 1 1 1 15 3 2 1 2 16 3 3 1 3 17 3 4 0 0 18 3 5 0 0 19 3 6 0 0 20 3 7 0 0 21 4 1 0 0 22 4 2 0 0 23 4 3 1 1 24 4 4 1 2 25 4 5 1 3 26 4 6 1 4 27 4 7 1 5 28 5 1 1 1 29 5 2 1 2 30 5 3 1 3 31 5 4 0 0 32 5 5 0 0 33 5 6 0 0 34 5 7 0 0 35 6 1 1 1 36 6 2 0 0 37 6 3 1 1 38 6 4 0 0 39 6 5 1 1 40 6 6 0 0 41 6 7 1 1
2
4
78,947,332
2024-9-4
https://stackoverflow.com/questions/78947332/how-to-install-torch-without-nvidia
While trying to reduce the size of a Docker image, I noticed pip install torch adds a few GB. A big chunk of this comes from [...]/site-packages/nvidia. Since I'm not using a GPU, I'd like to not install the nvidia things. Here is a minimal example: FROM python:3.12.5 RUN pip install torch (Ignoring -slim base images, since this is not the point here.) Resulting size: FROM python:3.12.5 -> 1.02GB After RUN pip install torch -> 8.98GB With RUN pip install torch && pip freeze | grep nvidia | xargs pip uninstall -y instead -> 6.19GB. While the last point reduces the final size, all the nvidia stuff is still downloaded and installed, which costs time and bandwidth. So, how can I install torch without nvidia directly? Using --no-deps is not a convenient solution, because of the other transitive dependencies, that I would like to install. Of course, I could explicitly list every single one, but looking at this list of packages installed with torch mpmath typing-extensions sympy nvidia-nvtx-cu12 nvidia-nvjitlink-cu12 nvidia-nccl-cu12 nvidia-curand-cu12 nvidia-cufft-cu12 nvidia-cuda-runtime-cu12 nvidia-cuda-nvrtc-cu12 nvidia-cuda-cupti-cu12 nvidia-cublas-cu12 networkx MarkupSafe fsspec filelock triton nvidia-cusparse-cu12 nvidia-cudnn-cu12 jinja2 nvidia-cusolver-cu12 torch I'd like to avoid manually maintaining this list since it would change with future versions of torch.
As (roundaboutly) documented on pytorch.org's getting started page, Torch on PyPI is Nvidia enabled; use the download.pytorch.org index for CPU-only wheels: RUN pip install torch --index-url https://download.pytorch.org/whl/cpu Also please remember to specify a somewhat locked version of Torch, e.g. RUN pip install torch~=2.4.0 --index-url https://download.pytorch.org/whl/cpu
3
6
78,925,963
2024-8-29
https://stackoverflow.com/questions/78925963/unexpected-value-passed-to-langchain-tool-argument
I'm trying to create a simple example tool that creates new user accounts in a hypothetical application when instructed to do so via a user prompt. The llm being used is llama3.1:8b via Ollama. So far what I've written works, but it's very unreliable. The reason why it's unreliable is because when LangChain calls on my tool, it provides unexpected/inconsistent values to the user creation tool's single username argument. Sometime the argument will be a proper username and other times it will be a username with the value "username=" prefixed to the username (eg: "username=jDoe" rather than simply "jdoe"). Also, if I ask for multiple users to be created, sometimes langchain will correctly invoke the tool multiple times while other times, it will invoke the tool once with a string in the format of an array (eg: "['jDoe','jSmith']") My questions are: Is the issue I'm encountering due to the limitations of LangChain or the Llama3.1:8b model that I'm using? Or is the issue something else? Is there a way to get LangChain to more reliably call my user creation tool with a correctly formatted username? Are there are other useful tips/recommendations that you can provide for a beginner like me? Below is my code: from dotenv import load_dotenv from langchain.agents import AgentExecutor, create_react_agent from langchain.tools import Tool from langchain_core.prompts import PromptTemplate from langchain_ollama.chat_models import ChatOllama load_dotenv() # Define the tool to create a user account mock_user_db = ["jDoe", "jRogers", "jsmith"] def create_user_tool(username: str): print("USERNAME PROVIDED FOR CREATION: " + username) if username in mock_user_db: return f"User {username} already exists." mock_user_db.append(username) return f"User {username} created successfully." # Define the tool to delete a user account def delete_user_tool(username: str): print("USERNAME PROVIDED FOR DELETION: " + username) if username not in mock_user_db: return f"User {username} does not exist." mock_user_db.remove(username) return f"User {username} deleted successfully." def list_users_tool(ignore) -> list: return mock_user_db # Wrap these functions as LangChain Tools create_user = Tool( name="Create User", func=create_user_tool, description="Creates a new user account in the company HR system." ) delete_user = Tool( name="Delete User", func=delete_user_tool, description="Deletes an existing user account in company HR system." ) list_users = Tool( name="List Users", func=list_users_tool, description="Lists all user accounts in company HR system." ) # Initialize the language model llm = ChatOllama(model="llama3.1:latest", temperature=0) # Create the agent using the tools tools = [create_user, delete_user, list_users] # Get the prompt to use #prompt = hub.pull("hwchase17/react") # Does not work with ollama/llama3:8b prompt = hub.pull("hwchase17/react-chat") # Kinda works with ollama/llama3:8b agent = create_react_agent(llm, tools, prompt) # Create an agent executor by passing in the agent and tools agent_executor = AgentExecutor(agent=agent, tools=tools, handle_parsing_errors=True) print(agent_executor.invoke({"input": "Please introduce yourself."})['output']) while True: user_prompt = input("PROMPT: ") agent_response = agent_executor.invoke({"input": user_prompt}) print(agent_response['output'])
Prompt engineering (what you are attempting here) is far from an exact science. However, there are ways you can clarify the schema of the tool. One example (from their docs) is getting it to parse your docstrings: @tool(parse_docstring=True) def create_user(username: str): """Creates a user Args: username: username of the user to be created. The exact string of the username, no longer than 20 characters long """ ... # Rest of your code here See docs here But even more reliable would be to create your schema with Pydantic (great tool in general), again, from their docs: class create_user(BaseModel): """Creates a user""" username: str = Field(..., description="username of the user to be created. The exact string of the username, no longer than 20 characters long") In general, the more detail you provide regarding the shape and nature of the tools and the data, the better results you can expect. You may also want to consider setting your temperature to 0, so you get repeatable responses for any given prompt, which should help with debugging, but you need to test with a higher range of prompts to ensure reliable behaviour
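Applied to the asker's code, the docstring approach could look roughly like this (a sketch, not the only way to wire it up — the decorated function replaces the manual Tool(...) wrapper and is passed straight to create_react_agent):
from langchain_core.tools import tool

@tool(parse_docstring=True)
def create_user(username: str) -> str:
    """Creates a new user account in the company HR system.

    Args:
        username: The bare username string to create, e.g. "jDoe". A single name only, no "username=" prefix and no list.
    """
    if username in mock_user_db:
        return f"User {username} already exists."
    mock_user_db.append(username)
    return f"User {username} created successfully."

tools = [create_user]  # plus the delete/list tools rewritten the same way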
4
2
78,923,480
2024-8-28
https://stackoverflow.com/questions/78923480/how-to-use-doc-in-blacksheep-api
Use blacksheep create with the following options to create an example API: ✨ Project name: soquestion πŸš€ Project template: api πŸ€– Use controllers? Yes πŸ“œ Use OpenAPI Documentation? Yes πŸ”§ Library to read settings essentials-configuration πŸ”© App settings format YAML This will generate a simple API based on BlackSheep, with the endpoints defined in app/controllers/examples.py: """ Example API implemented using a controller. """ from typing import List, Optional from blacksheep.server.controllers import Controller, get, post class ExamplesController(Controller): @classmethod def route(cls) -> Optional[str]: return "/api/examples" @classmethod def class_name(cls) -> str: return "Examples" @get() async def get_examples(self) -> List[str]: """ Gets a list of examples. Lorem Ipsum Dolor Sit amet """ return list(f"example {i}" for i in range(3)) @post() async def add_example(self, example: str): """ Adds an example. """ When you start the API (don't forget to create and activate a virtual environment before you do the pip install ...) with python dev.py and navigate to http://localhost:44777/docs you can see the OpenAPI documentation. According to the documentation you can use the docstring to specify the endpoint description. Is it somehow possible to also add documentation for the responses? According to the documentation you can use the @docs decorator, but that only works in a simple file where @docs is defined beforehand. In the generated API @docs is defined in app/docs/__init.py__, but I can't find a way to use this inside the example.py. The generated app/docs/__init.py__ looks like this: """ This module contains OpenAPI Documentation definition for the API. It exposes a docs object that can be used to decorate request handlers with additional information, used to generate OpenAPI documentation. """ from blacksheep import Application from blacksheep.server.openapi.v3 import OpenAPIHandler from openapidocs.v3 import Info from app.docs.binders import set_binders_docs from app.settings import Settings def configure_docs(app: Application, settings: Settings): docs = OpenAPIHandler( info=Info(title=settings.info.title, version=settings.info.version), anonymous_access=True, ) # include only endpoints whose path starts with "/api/" docs.include = lambda path, _: path.startswith("/api/") set_binders_docs(docs) docs.bind_app(app)
Given how documentation handler is defined in BlackSheep example, you cannot easily use that particular instance. The reason being docs is local to configure_docs function, therefore it cannot be used outside of it. However, that documentation decorator is simply an instance of OpenAPIHandler class, so you can move its definition outside of that function, and use it throughout your project freely. Here's a "patch" for the example project to show the naΓ―ve approach (sorry, but SO doesn't support patch/diff syntax highlight) with docs renamed to docs_handler to better distinguish it from app.docs module: diff --git a/app/controllers/examples.py b/app/controllers/examples.py index 4bb984d..41989e5 100644 --- a/app/controllers/examples.py +++ b/app/controllers/examples.py @@ -5,6 +5,8 @@ from typing import List, Optional from blacksheep.server.controllers import Controller, get, post +from app.docs import docs_handler + class ExamplesController(Controller): @classmethod @@ -15,6 +17,7 @@ class ExamplesController(Controller): def class_name(cls) -> str: return "Examples" + @docs_handler(responses={200: "OK response", 404: "No example found"}) @get() async def get_examples(self) -> List[str]: """ diff --git a/app/docs/__init__.py b/app/docs/__init__.py index 0b3d0f1..f478864 100644 --- a/app/docs/__init__.py +++ b/app/docs/__init__.py @@ -9,18 +9,21 @@ from blacksheep.server.openapi.v3 import OpenAPIHandler from openapidocs.v3 import Info from app.docs.binders import set_binders_docs -from app.settings import Settings +from app.settings import load_settings, Settings +settings = load_settings() + +docs_handler = OpenAPIHandler( + info=Info(title=settings.info.title, version=settings.info.version), + anonymous_access=True, +) + def configure_docs(app: Application, settings: Settings): - docs = OpenAPIHandler( - info=Info(title=settings.info.title, version=settings.info.version), - anonymous_access=True, - ) # include only endpoints whose path starts with "/api/" - docs.include = lambda path, _: path.startswith("/api/") + docs_handler.include = lambda path, _: path.startswith("/api/") - set_binders_docs(docs) + set_binders_docs(docs_handler) - docs.bind_app(app) + docs_handler.bind_app(app) I am not particularly happy about project structure here, to be honest, I even thought about moving that handler instance to a separate module and making it a singleton class. But it should get you going, and you can adapt it to you real use-case as you see fit.
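The separate-module idea mentioned at the end could look roughly like this (the module name app/docs/handler.py is just an illustration; the imports mirror the patch above):
# app/docs/handler.py (hypothetical module)
from blacksheep.server.openapi.v3 import OpenAPIHandler
from openapidocs.v3 import Info

from app.settings import load_settings

settings = load_settings()

docs_handler = OpenAPIHandler(
    info=Info(title=settings.info.title, version=settings.info.version),
    anonymous_access=True,
)
Controllers would then do "from app.docs.handler import docs_handler" and decorate handlers with it, while configure_docs only keeps the include filter, set_binders_docs(docs_handler) and docs_handler.bind_app(app).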
4
1
78,922,047
2024-8-28
https://stackoverflow.com/questions/78922047/non-equi-join-in-polars
If you come from the future, hopefully this PR has already been merged. If you don't come from the future, hopefully this answer solves your problem. I want to solve my problem only with polars (which I am no expert, but I can follow what is going on), before just copy-pasting the DuckDB integration suggested above and compare the results in my real data. I have a list of events (name and timestamp), and a list of time windows. I want to count how many of each event occur in each time window. I feel like I am close to getting something that works correctly, but I have been stuck for a couple of hours now: import polars as pl events = { "name": ["a", "b", "a", "b", "a", "c", "b", "a", "b", "a", "b", "a", "b", "a", "b", "a", "b", "a", "b"], "time": [0.0, 1.0, 1.5, 2.0, 2.25, 2.26, 2.45, 2.5, 3.0, 3.4, 3.5, 3.6, 3.65, 3.7, 3.8, 4.0, 4.5, 5.0, 6.0], } windows = { "start_time": [1.0, 2.0, 3.0, 4.0], "stop_time": [3.5, 2.5, 3.7, 5.0], } events_df = pl.DataFrame(events).sort("time").with_row_index() windows_df = ( pl.DataFrame(windows) .sort("start_time") .join_asof(events_df, left_on="start_time", right_on="time", strategy="forward") .drop("name", "time") .rename({"index": "first_index"}) .sort("stop_time") .join_asof(events_df, left_on="stop_time", right_on="time", strategy="backward") .drop("name", "time") .rename({"index": "last_index"}) ) print(windows_df) """ shape: (4, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ start_time ┆ stop_time ┆ first_index ┆ last_index β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ f64 ┆ f64 ┆ u32 ┆ u32 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════β•ͺ═════════════β•ͺ════════════║ β”‚ 2.0 ┆ 2.5 ┆ 3 ┆ 7 β”‚ β”‚ 1.0 ┆ 3.5 ┆ 1 ┆ 10 β”‚ β”‚ 3.0 ┆ 3.7 ┆ 8 ┆ 13 β”‚ β”‚ 4.0 ┆ 5.0 ┆ 15 ┆ 17 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ """ So far, for each time window, I can get the index of the first and last events that I care about. Now I "just" need to count how many of these are of each type. Can I get some help on how to do this? The output I am looking for should look like: shape: (4, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ start_time ┆ stop_time ┆ a ┆ b ┆ c β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ f64 ┆ f64 ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════β•ͺ═════β•ͺ═════β•ͺ═════║ β”‚ 1.0 ┆ 3.5 ┆ 4 ┆ 5 ┆ 1 β”‚ β”‚ 2.0 ┆ 2.5 ┆ 2 ┆ 2 ┆ 1 β”‚ β”‚ 3.0 ┆ 3.7 ┆ 3 ┆ 3 ┆ 0 β”‚ β”‚ 4.0 ┆ 5.0 ┆ 2 ┆ 1 ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ I feel like using something like int_ranges(), gather(), and explode() can get me a dataframe with each time window and all it's corresponding events. Finally, something like group_by(), count(), and pivot() can get me to the dataframe I want. But I have been struggling with this for a while.
update join_where() was released in version 1.7.0: ( windows_df .join_where( events_df, pl.col.time >= pl.col.start_time, pl.col.time <= pl.col.stop_time, ) .sort("name", "start_time") .pivot(on="name", index=["start_time","stop_time"], aggregate_function="len", values="time") .fill_null(0) ) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ start_time ┆ stop_time ┆ a ┆ b ┆ c β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ f64 ┆ f64 ┆ u32 ┆ u32 ┆ u32 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════β•ͺ═════β•ͺ═════β•ͺ═════║ β”‚ 1.0 ┆ 3.5 ┆ 4 ┆ 5 ┆ 1 β”‚ β”‚ 2.0 ┆ 2.5 ┆ 2 ┆ 2 ┆ 1 β”‚ β”‚ 3.0 ┆ 3.7 ┆ 3 ┆ 3 ┆ 0 β”‚ β”‚ 4.0 ┆ 5.0 ┆ 2 ┆ 1 ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ previous I was looking at your double join_asof and thought that maybe you could also use another approach, which would not require an explode(). The thing is, you don't really need the data from events_df, only counts. So if we do join_asof for every possible value in name then we can calculate counts by simple arithmetic. First, let's prepare our DataFrames. events_df = pl.DataFrame(events) windows_df = ( pl.DataFrame(windows) .join( events_df.select(pl.col.name.unique()), how="cross" ) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ start_time ┆ stop_time ┆ name β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ f64 ┆ f64 ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════β•ͺ══════║ β”‚ 1.0 ┆ 3.5 ┆ c β”‚ β”‚ 1.0 ┆ 3.5 ┆ a β”‚ β”‚ 1.0 ┆ 3.5 ┆ b β”‚ β”‚ 2.0 ┆ 2.5 ┆ c β”‚ β”‚ 2.0 ┆ 2.5 ┆ a β”‚ β”‚ … ┆ … ┆ … β”‚ β”‚ 3.0 ┆ 3.7 ┆ a β”‚ β”‚ 3.0 ┆ 3.7 ┆ b β”‚ β”‚ 4.0 ┆ 5.0 ┆ c β”‚ β”‚ 4.0 ┆ 5.0 ┆ a β”‚ β”‚ 4.0 ┆ 5.0 ┆ b β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ events_df= ( events_df .with_columns(index = pl.int_range(pl.len()).over("name")) ) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ name ┆ time ┆ index β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ f64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•ͺ══════β•ͺ═══════║ β”‚ a ┆ 0.0 ┆ 0 β”‚ β”‚ b ┆ 1.0 ┆ 0 β”‚ β”‚ a ┆ 1.5 ┆ 1 β”‚ β”‚ b ┆ 2.0 ┆ 1 β”‚ β”‚ a ┆ 2.25 ┆ 2 β”‚ β”‚ … ┆ … ┆ … β”‚ β”‚ b ┆ 3.8 ┆ 6 β”‚ β”‚ a ┆ 4.0 ┆ 7 β”‚ β”‚ b ┆ 4.5 ┆ 7 β”‚ β”‚ a ┆ 5.0 ┆ 8 β”‚ β”‚ b ┆ 6.0 ┆ 8 β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ Now we can do the same join you did, but add by parameter, so we do it within name column: result_df = ( windows_df .sort("name", "start_time") .join_asof(events_df, left_on="start_time", right_on="time", strategy="forward", by="name") .drop("time") .rename({"index": "first_index"}) .sort("name", "stop_time") .join_asof(events_df, left_on="stop_time", right_on="time", strategy="backward", by="name") .drop("time") .rename({"index": "last_index"}) ) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ start_time ┆ stop_time ┆ name ┆ first_index ┆ last_index β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ f64 ┆ f64 ┆ str ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════β•ͺ══════β•ͺ═════════════β•ͺ════════════║ β”‚ 2.0 ┆ 2.5 ┆ a ┆ 2 ┆ 3 β”‚ β”‚ 1.0 ┆ 3.5 ┆ a ┆ 1 ┆ 4 β”‚ β”‚ 3.0 ┆ 3.7 ┆ a ┆ 4 ┆ 6 β”‚ β”‚ 4.0 ┆ 5.0 ┆ a ┆ 7 ┆ 8 β”‚ β”‚ 2.0 ┆ 2.5 ┆ b ┆ 1 ┆ 2 β”‚ β”‚ … ┆ … ┆ … ┆ … ┆ … 
β”‚ β”‚ 4.0 ┆ 5.0 ┆ b ┆ 7 ┆ 7 β”‚ β”‚ 2.0 ┆ 2.5 ┆ c ┆ 0 ┆ 0 β”‚ β”‚ 1.0 ┆ 3.5 ┆ c ┆ 0 ┆ 0 β”‚ β”‚ 3.0 ┆ 3.7 ┆ c ┆ null ┆ 0 β”‚ β”‚ 4.0 ┆ 5.0 ┆ c ┆ null ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ And now you can calculate the result by simple last_index - first_index + 1: ( result_df .with_columns(index = pl.col.last_index - pl.col.first_index + 1) .pivot(on="name", index=["start_time","stop_time"], values="index") .fill_null(0) .sort("start_time", "stop_time") ) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ start_time ┆ stop_time ┆ a ┆ b ┆ c β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ f64 ┆ f64 ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════β•ͺ═════β•ͺ═════β•ͺ═════║ β”‚ 1.0 ┆ 3.5 ┆ 4 ┆ 5 ┆ 1 β”‚ β”‚ 2.0 ┆ 2.5 ┆ 2 ┆ 2 ┆ 1 β”‚ β”‚ 3.0 ┆ 3.7 ┆ 3 ┆ 3 ┆ 0 β”‚ β”‚ 4.0 ┆ 5.0 ┆ 2 ┆ 1 ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
5
2
78,925,696
2024-8-29
https://stackoverflow.com/questions/78925696/when-should-i-include-the-score-benefit-of-a-local-decision-when-using-minimax
In the Stone Game problem, Alice and Bob take turns picking a pile of stones from the start or the end. The goal is to maximize Alice's total def play(turn, left, right): if left > right: return 0 end = piles[right] + play(1 - turn, left, right - 1) start = piles[left] + play(1 - turn, left + 1, right) return max(start, end) if turn == 0 else min(start, end) alice = play(0, 0, n - 1) This follows the classic minimax algorithm. Let's now take a look at Stone Game II. In this problem, Alice and Bob can pick the next 1 <= x <= 2m piles of stones, where m is the maximum x somebody has used. To my surprise, classic minimax would return the same number of stones whether it is Alice or Bob's turn, giving us an incorrect final answer # DOESN'T WORK def play(left, m, turn): if left == n-1: return 0 total = 0 ans = inf if turn else -inf for pos in range(left+1, min(n, left+2*m+1)): total += piles[pos] value = total + play(pos, max(m, pos - left), 1 - turn) if turn == 0: ans = max(ans, value) else: ans = min(ans, value) return ans alice = play(-1, 1, 0) However, if we only include total in Alice's calculation, it suddenly works: # WORKS def play(left, m, turn): if left == n-1: return 0 total = 0 ans = inf if turn else -inf for pos in range(left+1, min(n, left+2*m+1)): total += piles[pos] value = play(pos, max(m, pos - left), 1 - turn) if turn == 0: ans = max(ans, total + value) else: ans = min(ans, value) return ans alice = play(-1, 1, 0) Could someone explain why we're not supposed to add the local total for the minimizer in the second example? Here's a discrepancy I noticed that may have something to do with the answer: value is always the same recursive call when we take min/max in the second problem, but in the first problem, end and start are different recursive calls.
The function we're trying to maximize is f := amount of stones Alice gets. That's why we only add stones for Alice; the function we're maximizing doesn't include the amount of stones Bob gets. So then why does the first algorithm work? Turns out it's not generally correct, and only works because this specific problem constrains len(piles) to be even, making it so Alice always wins. If we actually look at the values from the first function, it returns the same answer for Bob and Alice too (this was just obfuscated because alice > sum(piles) - alice is always true, so the bug didn't produce wrong answers). An actually correct minimax implementation for game 1 would look like this: def play(turn, left): right_used = turn - left right = n - 1 - right_used if left > right: return 0 end = play(turn + 1, left) start = play(turn + 1, left + 1) return min(start, end) if turn % 2 else max(piles[left] + start, piles[right] + end)
3
0
78,927,692
2024-8-29
https://stackoverflow.com/questions/78927692/how-to-get-all-styling-parameter-configurable-by-ttk-style-configure-for-a
I have been searching the answer for this question from a long time but with no success had to ask it here. I am able to get the styling parameter for from the tcl documentation, but my question is how can I achieve the same result programmatically. For example in Tkinter, we can use widget.configure() with no parameters to get all valid parameters for that widgets, since all design parameter must be changed using Style() only in themed tkinter, how can achieve the same functionality? Edit Consider this example: import tkinter as tk root =tk.Tk() a = tk.Label(root) print(a.configure()) #Output {'activebackground': ('activebackground', 'activeBackground', 'Foreground', <string object: 'SystemButtonFace'>, 'SystemButtonFace'), 'activeforeground': ('activeforeground', 'activeForeground', 'Background', <string object: 'SystemButtonText'>, 'SystemButtonText'), 'anchor': ('anchor', 'anchor', 'Anchor', <string object: 'center'>, 'center'), 'background': ('background', 'background', 'Background', <string object: 'SystemButtonFace'>, 'SystemButtonFace'), 'bd': ('bd', '-borderwidth'), 'bg': ('bg', '-background'), 'bitmap': ('bitmap', 'bitmap', 'Bitmap', '', ''), 'borderwidth': ('borderwidth', 'borderWidth', 'BorderWidth', <string object: '2'>, <string object: '2'>), 'compound': ('compound', 'compound', 'Compound', <string object: 'none'>, 'none'), 'cursor': ('cursor', 'cursor', 'Cursor', '', ''), 'disabledforeground': ('disabledforeground', 'disabledForeground', 'DisabledForeground', <string object: 'SystemDisabledText'>, 'SystemDisabledText'), 'fg': ('fg', '-foreground'), 'font': ('font', 'font', 'Font', <string object: 'TkDefaultFont'>, 'TkDefaultFont'), 'foreground': ('foreground', 'foreground', 'Foreground', <string object: 'SystemButtonText'>, 'SystemButtonText'), 'height': ('height', 'height', 'Height', 0, 0), 'highlightbackground': ('highlightbackground', 'highlightBackground', 'HighlightBackground', <string object: 'SystemButtonFace'>, 'SystemButtonFace'), 'highlightcolor': ('highlightcolor', 'highlightColor', 'HighlightColor', <string object: 'SystemWindowFrame'>, 'SystemWindowFrame'), 'highlightthickness': ('highlightthickness', 'highlightThickness', 'HighlightThickness', <string object: '0'>, <string object: '0'>), 'image': ('image', 'image', 'Image', '', ''), 'justify': ('justify', 'justify', 'Justify', <string object: 'center'>, 'center'), 'padx': ('padx', 'padX', 'Pad', <string object: '1'>, <string object: '1'>), 'pady': ('pady', 'padY', 'Pad', <string object: '1'>, <string object: '1'>), 'relief': ('relief', 'relief', 'Relief', <string object: 'flat'>, 'flat'), 'state': ('state', 'state', 'State', <string object: 'normal'>, 'normal'), 'takefocus': ('takefocus', 'takeFocus', 'TakeFocus', '0', '0'), 'text': ('text', 'text', 'Text', '', ''), 'textvariable': ('textvariable', 'textVariable', 'Variable', '', ''), 'underline': ('underline', 'underline', 'Underline', -1, -1), 'width': ('width', 'width', 'Width', 0, 0), 'wraplength': ('wraplength', 'wrapLength', 'WrapLength', <string object: '0'>, <string object: '0'>)} but import tkinter.ttk as ttk b= ttk.Label(root, text="hello") print(ttk.Style(root).configure("TLabel") #Output: //Nothing Image from Tcl Official Docs Hence, I would like to get all the styling options configurable with ttk::style for a particular widget via python program. Note: This question differs from similar question asked previously on StackOverflow such as How to know all style options of a ttk widget? 
in the sense that it asks about the options of all elements within a widget, but this question is specifically about the options which, if passed into style.configure("Widget", options), are valid and have an effect on the appearance of the widget.
After reading the source code I think I got some ideas. The problem why you can't get the rowheight and indent options for your treeview is that these options are not attached to any elements at all. They are only stored in some special so-called option table (which I think is a "hashtable"). But in order to actually store them in this table, tkinter or tk/tcl commands have to be called first (e.g. style.configure(rowheight=30)). Before that, they simply don't exist anywhere (I am quite sure about that, but not 100%). So unfortunately you can't access them programmatically in the "normal" way. This is also true for element options, they are also quite often not stored at the beginning of the program in an "option table" (that is why style.configure({Theme}) quite often returns an empty dictionary, all option tables are empty at the beginning of the program). But you can get the options for each element because the developers actually implemented special methods and stored the options inside the theme files in arrays so you can just get them. But they didn't make the same methods for these special widget options. I think it might be some kind of "bug". But most likely they just felt there was no need to do this functionality, since all these "widget-specific" options can be found in their documentation. These options, such as "rowheight", are also theme-independent, so they can always be called, regardless of the theme. But if it really needs to be obtained by programming, the only way I can see to get them is to actually search through the ttk "C" widget files (e.g. for treeview the file is "ttkTreeview.c") and look for the functions that are used to get the options from the option data table. For example, in the "ttkTreeview.c" file there is code like this: if ((objPtr = Ttk_QueryOption(treeLayout, "-rowheight", 0))) { (void)Tcl_GetIntFromObj(NULL, objPtr, &tv->tree.rowHeight); tv->tree.rowHeight = MAX(tv->tree.rowHeight, 1); } if ((objPtr = Ttk_QueryOption(treeLayout, "-indent", 0))) { (void)Tcl_GetIntFromObj(NULL, objPtr, &tv->tree.indent); } The "C" method Ttk_QueryOption is used to get an option from the options data table (I think it is roughly equivalent to the tkinter style.lookup() method). So what you can do is search the file for the Ttk_QueryOption method and get these additional options that you can't get from the element_options() method. But obviously this is not a good approach, just a workaround. And it is impossible to be sure that you can perform this search. First, Ttk_QueryOption may not be the only method used to get the option from the options tables. Second, it may be wrapped around another method, like this (roughly equivalent in Python): def some_random_name(style, option, state, default): return ttk.Style().lookup(style, option, state, default) Conclusion (and some additional information): There is no built-in method (even at C level) to get these special widget options. And probably the only way to get them is to look for them in the widget "C" files. Also, something I have not mentioned yet. There is this thing called "sublayouts". What these are is sort of additional sub "Layouts" (styles) for ttk widgets. For example, the main style name for the ttk.Treeview widget is "Treeview", but it also has sublayouts (which are mentioned in the documentation) "Heading", "Item", "Cell", and "Row" (which for some reason isn't mentioned). So there are also options for Treeview that exist for these sublayouts. 
For example, for the "clam" theme, the layout and parameters of "Treeview.Item" are as follows: {'Treeitem.padding': ('padding', 'relief', 'shiftrelief'), 'Treeitem.indicator': ('foreground', 'indicatorsize', 'indicatormargins'), 'Treeitem.image': ('image', 'stipple', 'background'), 'Treeitem.text': 'text', 'font', 'foreground', 'underline', 'width', 'anchor', 'justify', 'wraplength', 'embossed')} And… you can't get sublayers either. So if you really want to, play with the tk source files. The "ttkTreeview.c" file also mentions sublayers: treeLayout && GetSublayout(interp, themePtr, treeLayout, ".Item", tv->tree.tagOptionTable, &tv->tree.itemLayout) && GetSublayout(interp, themePtr, treeLayout, ".Cell", tv->tree.tagOptionTable, &tv->tree.cellLayout) && GetSublayout(interp, themePtr, treeLayout, ".Heading", tv->tree.headingOptionTable, &tv->tree.headingLayout) && GetSublayout(interp, themePtr, treeLayout, ".Row", tv->tree.tagOptionTable, &tv->tree.rowLayout) But honestly, it would probably be much easier to do it yourself, without coding, just by looking at the widget source files to get all this information. As for the element options, you are more than welcome to use the element_options() command. Note: I would also recommend reading this documentation. It covers some pretty advanced information about styles towards the end.
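A short Python illustration of the behaviour described above — per this explanation, a widget-specific option like rowheight may only show up via lookup after something has actually configured it:
import tkinter as tk
import tkinter.ttk as ttk

root = tk.Tk()
style = ttk.Style(root)

print(repr(style.lookup("Treeview", "rowheight")))  # often '' before anything has stored the option
style.configure("Treeview", rowheight=30)           # this call is what puts it into the option table
print(repr(style.lookup("Treeview", "rowheight")))  # now reports the configured value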
4
2
78,945,268
2024-9-3
https://stackoverflow.com/questions/78945268/efficient-conversion-of-timezone-aware-timestamps-to-datetime64m-in-pandas
I have the following code that creates a DataFrame representing the data I have in my system: import pandas as pd data = { "date": [ "2021-03-12 19:50:00-05:00", "2021-03-12 19:51:00-05:00", "2021-03-12 19:52:00-05:00", "2021-03-12 19:53:00-05:00", "2021-03-12 19:54:00-05:00", "2021-03-12 19:55:00-05:00", "2021-03-12 19:56:00-05:00", "2021-03-12 19:57:00-05:00", "2021-03-12 19:58:00-05:00", "2021-03-12 19:59:00-05:00", "2021-03-15 04:00:00-04:00", "2021-03-15 04:01:00-04:00", "2021-03-15 04:02:00-04:00", "2021-03-15 04:03:00-04:00", "2021-03-15 04:04:00-04:00", "2021-03-15 04:05:00-04:00", "2021-03-15 04:06:00-04:00", "2021-03-15 04:07:00-04:00", "2021-03-15 04:08:00-04:00", "2021-03-15 04:09:00-04:00" ], "open": [81.15, 81.14, 81.15, 81.15, 81.15, 81.17, 81.19, 81.19, 81.20, 81.23, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05], "high": [81.15, 81.14, 81.15, 81.15, 81.17, 81.17, 81.19, 81.19, 81.20, 81.23, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05], "low": [81.14, 81.14, 81.14, 81.15, 81.15, 81.17, 81.19, 81.19, 81.20, 81.23, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05], "close": [81.14, 81.14, 81.15, 81.15, 81.17, 81.17, 81.19, 81.19, 81.20, 81.23, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05], "volume": [300.0, 100.0, 1684.0, 0.0, 1680.0, 150.0, 448.0, 0.0, 1500.0, 380.0, 162.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], } df = pd.DataFrame(data) print(df.info()) The output is: <class 'pandas.core.frame.DataFrame'> RangeIndex: 20 entries, 0 to 19 Data columns (total 6 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 date 20 non-null object 1 open 20 non-null float64 2 high 20 non-null float64 3 low 20 non-null float64 4 close 20 non-null float64 5 volume 20 non-null float64 dtypes: float64(5), object(1) memory usage: 1.1+ KB The data type of the date column is object - it is timezone aware timestamp. The timestamps contain timezone information that I need to remove then convert the date column to datetime64[m] (minute precision), but after applying the following conversion code: df['date'] = df['date'].apply(lambda ts: pd.Timestamp(ts).tz_localize(None).to_numpy().astype('datetime64[m]')) print(df.info()) The output shows that the date column has a data type of datetime64[ns] instead of datetime64[m]: <class 'pandas.core.frame.DataFrame'> RangeIndex: 20 entries, 0 to 19 Data columns (total 6 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 date 20 non-null datetime64[ns] 1 open 20 non-null float64 2 high 20 non-null float64 3 low 20 non-null float64 4 close 20 non-null float64 5 volume 20 non-null float64 dtypes: datetime64 , float64(5) memory usage: 1.1 KB How can I correctly convert the date column with timezone data to datetime64[m] in the most memory-efficient way?
Unfortunately there is no "minute" unit for pandas datetimes. You can choose from "D,s,ms,us,n" for day, second, millisecond, microsecond, or nanosecond respectively. This listing can be found under the "unit" argument in the docs of pandas.to_datetime That said, you can still parse this data and convert it to a seconds unit. The key here is understanding that pandas cannot handle having distinct timezone information (-05:00 and -04:00) in a single series (column). It can support a timezone + differing daylight savings info (as I suspect the case is here), but since it is ambiguous as to whether this is the case we’re going to need to take a trip through UTC and then let the conversion to our timezone handle whether something is in daylight savings or not. import pandas as pd data = { 'raw': [ '2021-03-12 19:50:00-05:00', '2021-03-12 19:51:00-05:00', '2021-03-12 19:52:00-05:00', '2021-03-12 19:53:00-05:00', '2021-03-12 19:54:00-05:00', '2021-03-12 19:55:00-05:00', '2021-03-12 19:56:00-05:00', '2021-03-12 19:57:00-05:00', '2021-03-12 19:58:00-05:00', '2021-03-12 19:59:00-05:00', '2021-03-15 04:00:00-04:00', '2021-03-15 04:01:00-04:00', '2021-03-15 04:02:00-04:00', '2021-03-15 04:03:00-04:00', '2021-03-15 04:04:00-04:00', '2021-03-15 04:05:00-04:00', '2021-03-15 04:06:00-04:00', '2021-03-15 04:07:00-04:00', '2021-03-15 04:08:00-04:00', '2021-03-15 04:09:00-04:00' ], } df = pd.DataFrame(data) df['parsed_w_timezone'] = ( pd.to_datetime(df['raw'], format='%Y-%m-%d %H:%M:%S%z', utc=True) # 1. parse into utc .dt.tz_convert('US/Eastern') # 2. convert to US/Eastern .astype('datetime64[s, US/Eastern]') # 3. convert nanoseconds β†’ seconds unit ) df['parsed_wo_timezone'] = df['parsed_w_timezone'].dt.tz_localize(None) print(df.dtypes) # raw object # parsed_w_timezone datetime64[s, US/Eastern] # parsed_wo_timezone datetime64[s] # dtype: object print(df.to_string(col_space=30, index=False, justify='left')) # raw parsed_w_timezone parsed_wo_timezone # 2021-03-12 19:50:00-05:00 2021-03-12 19:50:00-05:00 2021-03-12 19:50:00 # 2021-03-12 19:51:00-05:00 2021-03-12 19:51:00-05:00 2021-03-12 19:51:00 # 2021-03-12 19:52:00-05:00 2021-03-12 19:52:00-05:00 2021-03-12 19:52:00 # 2021-03-12 19:53:00-05:00 2021-03-12 19:53:00-05:00 2021-03-12 19:53:00 # 2021-03-12 19:54:00-05:00 2021-03-12 19:54:00-05:00 2021-03-12 19:54:00 # 2021-03-12 19:55:00-05:00 2021-03-12 19:55:00-05:00 2021-03-12 19:55:00 # 2021-03-12 19:56:00-05:00 2021-03-12 19:56:00-05:00 2021-03-12 19:56:00 # 2021-03-12 19:57:00-05:00 2021-03-12 19:57:00-05:00 2021-03-12 19:57:00 # 2021-03-12 19:58:00-05:00 2021-03-12 19:58:00-05:00 2021-03-12 19:58:00 # 2021-03-12 19:59:00-05:00 2021-03-12 19:59:00-05:00 2021-03-12 19:59:00 # 2021-03-15 04:00:00-04:00 2021-03-15 04:00:00-04:00 2021-03-15 04:00:00 # 2021-03-15 04:01:00-04:00 2021-03-15 04:01:00-04:00 2021-03-15 04:01:00 # 2021-03-15 04:02:00-04:00 2021-03-15 04:02:00-04:00 2021-03-15 04:02:00 # 2021-03-15 04:03:00-04:00 2021-03-15 04:03:00-04:00 2021-03-15 04:03:00 # 2021-03-15 04:04:00-04:00 2021-03-15 04:04:00-04:00 2021-03-15 04:04:00 # 2021-03-15 04:05:00-04:00 2021-03-15 04:05:00-04:00 2021-03-15 04:05:00 # 2021-03-15 04:06:00-04:00 2021-03-15 04:06:00-04:00 2021-03-15 04:06:00 # 2021-03-15 04:07:00-04:00 2021-03-15 04:07:00-04:00 2021-03-15 04:07:00 # 2021-03-15 04:08:00-04:00 2021-03-15 04:08:00-04:00 2021-03-15 04:08:00 # 2021-03-15 04:09:00-04:00 2021-03-15 04:09:00-04:00 2021-03-15 04:09:00
2
3
78,923,112
2024-8-28
https://stackoverflow.com/questions/78923112/how-to-read-joystick-input-from-logitech-extreme-3d-pro-in-python
I'm working on a Python program to read inputs from a Logitech Extreme 3D Pro joystick. I am able to receive raw data from the joystick, but I'm struggling to correctly interpret the X and Y values from this data. Problem: I have raw data as shown in the attached image. From the array, each value represents a different key or axis, but I'm having trouble identifying which values correspond to X and Y axis movements. Specifically, I suspect that array[1] and array[2] are the X and Y values, but the readings are inconsistent and don't seem to correlate with the joystick movements. Code: import pywinusb.hid as hid from time import sleep from msvcrt import kbhit def sample_handler(data): print("Raw data: {}".format(data)) # Known Logitech vendor_id vendor_id = 0x046D # Logitech # Find Logitech HID devices devices = hid.HidDeviceFilter(vendor_id=vendor_id).get_devices() if not devices: print("No Logitech device found.") exit() # Select the device device = devices[0] if len(devices) == 1 else None if not device: print("Multiple Logitech devices found. Please select one:") for i, dev in enumerate(devices): print(f"{i}: {dev.product_name} (Product ID: {dev.product_id:04X})") selection = int(input("\nEnter the number of the device to select (0-{}): ".format(len(devices) - 1))) device = devices[selection] else: print(f"Automatically selected device: {device.product_name}") # Open the device and set the data handler try: device.open() device.set_raw_data_handler(sample_handler) print("Waiting for data... Press any key to stop.") while not kbhit() and device.is_plugged(): sleep(0.5) finally: device.close() Questions: How can I correctly interpret the X and Y axis values from the raw data I receive? Are there alternative methods for reading joystick input in Python besides using the inputs library? I've heard about using HID descriptors to understand joystick control values. How can I use HID descriptors to achieve this? HID Descriptors: USB Input Device Connection Status Device connected Current Configuration 1 Speed Full Device Address 3 Number Of Open Pipes 1 Device Descriptor Extreme 3D pro Offset Field Size Value Description 0 bLength 1 12h 1 bDescriptorType 1 01h Device 2 bcdUSB 2 0200h USB Spec 2.0 4 bDeviceClass 1 00h Class info in Ifc Descriptors 5 bDeviceSubClass 1 00h 6 bDeviceProtocol 1 00h 7 bMaxPacketSize0 1 08h 8 bytes 8 idVendor 2 046Dh Logitech, Inc. 10 idProduct 2 C215h 12 bcdDevice 2 5711h 57.11 14 iManufacturer 1 01h "Logitech" 15 iProduct 1 02h "Extreme 3D pro" 16 iSerialNumber 1 03h "00000000002A" 17 bNumConfigurations 1 01h Configuration Descriptor 1 Bus Powered, 100 mA Offset Field Size Value Description 0 bLength 1 09h 1 bDescriptorType 1 02h Configuration 2 wTotalLength 2 0022h 4 bNumInterfaces 1 01h 5 bConfigurationValue 1 01h 6 iConfiguration 1 04h "HID Config" 7 bmAttributes 1 80h Bus Powered 4..0: Reserved ...00000 5: Remote Wakeup ..0..... No 6: Self Powered .0...... No, Bus Powered 7: Reserved (set to one) (bus-powered for 1.0) 1....... 
8 bMaxPower 1 32h 100 mA Interface Descriptor 0/0 HID, 1 Endpoint Offset Field Size Value Description 0 bLength 1 09h 1 bDescriptorType 1 04h Interface 2 bInterfaceNumber 1 00h 3 bAlternateSetting 1 00h 4 bNumEndpoints 1 01h 5 bInterfaceClass 1 03h HID 6 bInterfaceSubClass 1 01h Boot Interface 7 bInterfaceProtocol 1 00h 8 iInterface 1 00h HID Descriptor Offset Field Size Value Description 0 bLength 1 09h 1 bDescriptorType 1 21h HID 2 bcdHID 2 0111h 1.11 4 bCountryCode 1 00h 5 bNumDescriptors 1 01h 6 bDescriptorType 1 22h Report 7 wDescriptorLength 2 007Ah 122 bytes Endpoint Descriptor 81 1 In, Interrupt, 1 ms Offset Field Size Value Description 0 bLength 1 07h 1 bDescriptorType 1 05h Endpoint 2 bEndpointAddress 1 81h 1 In 3 bmAttributes 1 03h Interrupt 1..0: Transfer Type ......11 Interrupt 7..2: Reserved 000000.. 4 wMaxPacketSize 2 0007h 7 bytes 6 bInterval 1 01h 1 ms Interface 0 HID Report Descriptor Joystick Item Tag (Value) Raw Data Usage Page (Generic Desktop) 05 01 Usage (Joystick) 09 04 Collection (Application) A1 01 Collection (Logical) A1 02 Report Count (2) 95 02 Report Size (10) 75 0A Logical Minimum (0) 15 00 Logical Maximum (1023) 26 FF 03 Physical Minimum (0) 35 00 Physical Maximum (1023) 46 FF 03 Usage (X) 09 30 Usage (Y) 09 31 Input (Data,Var,Abs,NWrp,Lin,Pref,NNul,Bit) 81 02 Report Size (4) 75 04 Report Count (1) 95 01 Logical Maximum (7) 25 07 Physical Maximum (315) 46 3B 01 Unit (Eng Rot: Degree) 66 14 00 Usage (Hat Switch) 09 39 Input (Data,Var,Abs,NWrp,Lin,Pref,Null,Bit) 81 42 Unit (None) 65 00 Report Size (8) 75 08 Logical Maximum (255) 26 FF 00 Physical Maximum (255) 46 FF 00 Usage (Rz) 09 35 Input (Data,Var,Abs,NWrp,Lin,Pref,NNul,Bit) 81 02 Push A4 Report Count (8) 95 08 Report Size (1) 75 01 Logical Maximum (1) 25 01 Physical Maximum (1) 45 01 Usage Page (Button) 05 09 Usage Minimum (Button 1) 19 01 Usage Maximum (Button 8) 29 08 Input (Data,Var,Abs,NWrp,Lin,Pref,NNul,Bit) 81 02 Pop B4 Usage (Slider) 09 36 Input (Data,Var,Abs,NWrp,Lin,Pref,NNul,Bit) 81 02 Report Count (4) 95 04 Report Size (1) 75 01 Logical Maximum (1) 25 01 Physical Maximum (1) 45 01 Usage Page (Button) 05 09 Usage Minimum (Button 9) 19 09 Usage Maximum (Button 12) 29 0C Input (Data,Var,Abs,NWrp,Lin,Pref,NNul,Bit) 81 02 Report Count (4) 95 04 Input (Cnst,Ary,Abs) 81 01 End Collection C0 Collection (Logical) A1 02 Report Count (4) 95 04 Report Size (8) 75 08 Logical Maximum (255) 26 FF 00 Physical Maximum (255) 46 FF 00 Usage Page (Vendor-Defined 1) 06 00 FF Usage (Vendor-Defined 1) 09 01 Feature (Data,Var,Abs,NWrp,Lin,Pref,NNul,NVol,Bit) B1 02 End Collection C0 End Collection Additional Information: I know libraries like pywinusb.hid, but I want to explore other options if possible. Any guidance on creating a program to parse and interpret joystick values using HID descriptors automatically would also be appreciated.
I have found a package called Pygame that makes it easy to read joystick/controller input. It automatically reads the HID descriptor to determine the number of buttons and analogue controls, so you don't have to parse the report descriptor yourself. Using the Pygame package is a good solution for this issue.
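For reference, here is a minimal polling sketch of how that can look. The axis indices for twist and throttle are assumptions and can differ per driver; axes 0 and 1 are conventionally X and Y, and Pygame reports them as floats in roughly the -1.0 to 1.0 range rather than the raw 0–1023 counts from the HID report.

import pygame

pygame.init()
pygame.joystick.init()

if pygame.joystick.get_count() == 0:
    raise SystemExit("No joystick found.")

js = pygame.joystick.Joystick(0)
js.init()  # needed on older pygame versions; harmless on pygame 2
print(f"Detected: {js.get_name()} "
      f"({js.get_numaxes()} axes, {js.get_numbuttons()} buttons, {js.get_numhats()} hats)")

clock = pygame.time.Clock()
while True:
    pygame.event.pump()        # let pygame update device state internally
    x = js.get_axis(0)         # stick X
    y = js.get_axis(1)         # stick Y
    twist = js.get_axis(2)     # Rz (stick twist) -- assumed index
    throttle = js.get_axis(3)  # slider/throttle -- assumed index
    buttons = [js.get_button(i) for i in range(js.get_numbuttons())]
    print(f"x={x:+.3f} y={y:+.3f} twist={twist:+.3f} throttle={throttle:+.3f} buttons={buttons}")
    clock.tick(20)             # poll ~20 times per second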
2
1
78,945,659
2024-9-3
https://stackoverflow.com/questions/78945659/check-if-series-has-values-in-range
I have a Pandas dataframe that has user information and also has a column for their permissions:

UserName    Permissions
John Doe    02
John Doe    11
Example     09
Example     08
User3       11

I am trying to create a new column called User Class that is based on their Permissions (looking at all of the user's permissions). If a user has all permissions <10, they are considered Admin. If a user has all permissions >=10, they are considered User. However, if they have permissions that are both <10 and >=10, then they will be coded as Admin/User. So my resulting output would be:

UserName    Permissions    User Class
John Doe    02             Admin/User
John Doe    11             Admin/User
Example     09             Admin
Example     08             Admin
User3       11             User

What would be the best way to do this? My original idea was to do:

for UserName, User_df in df.groupby(by='UserName'):
    LT10 = (User_df['Permissions'] < 10).any()
    GTE10 = (User_df['Permissions'] >= 10).any()
    if (LT10 & GTE10):
        UserClass = 'Admin/User'
    elif LT10:
        UserClass = 'Admin'
    elif GTE10:
        UserClass = 'User'
    df.at[User_df.index, 'User Class'] = UserClass

However, this seems very inefficient because df has ~800K records.
Another possible solution: df['User Class'] = ( df.groupby('UserName')['Permissions'] .transform(lambda x: 'Admin' if (x < 10).all() else 'User' if (x >= 10).all() else 'Admin/User')) Output: UserName Permissions User Class 0 John Doe 2 Admin/User 1 John Doe 11 Admin/User 2 Example 9 Admin 3 Example 8 Admin 4 User3 11 User
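If the per-group lambda turns out to be slow on ~800K rows, here is a sketch of a variant that computes the per-user flags with a single groupby aggregation and maps them back. The DataFrame below just recreates the sample data from the question; profile both approaches on your real data before committing to either.

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'UserName': ['John Doe', 'John Doe', 'Example', 'Example', 'User3'],
    'Permissions': [2, 11, 9, 8, 11],
})

# per-user flags: does any / do all of the user's permissions fall below 10?
flags = (df.assign(is_admin=df['Permissions'] < 10)
           .groupby('UserName')['is_admin']
           .agg(['any', 'all']))

# all < 10 -> Admin, none < 10 -> User, mixed -> Admin/User
user_class = pd.Series(
    np.select([flags['all'], ~flags['any']], ['Admin', 'User'], default='Admin/User'),
    index=flags.index,
)

df['User Class'] = df['UserName'].map(user_class)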
7
6
78,944,602
2024-9-3
https://stackoverflow.com/questions/78944602/how-can-i-override-the-default-behavior-of-listmyenum
I have a custom enum, MyEnum, with some elements that have different names but the same value. from enum import Enum class MyEnum(Enum): A = 1 B = 2 C = 3 D = 1 # Same value as A Consequently, list(MyEnum) returns only the names of some of the members (the first name for each value): >>>list(MyEnum) [<MyEnum.A: 1>, <MyEnum.B: 2>, <MyEnum.C: 3>] Apparently, list(MyEnum.__members__) returns all the names: >>>list(MyEnum.__members__) ['A', 'B', 'C', 'D'] However, if I try to override the __iter__() method for my enum, the override seems to fail: class MyEnum(Enum): A = 1 B = 2 C = 3 D = 1 # Same value as A @classmethod # an attempt to override list(MyEnum) that doesn't change anything def __iter__(cls): return iter(list(cls.__members__)) Apparently list(MyEnum) doesn't ever hit the custom __iter__() (as indicated by, say, adding a print() before returning in our custom __iter__()). Why is that? How can I override the default behavior of list(MyEnum) so that I get all the distinct names?
A class method is not the same as an instance method on a metaclass, which is what __iter__ for Enum is. You need to define a new metaclass, which you can use to define a new subclass of Enum that does what you are looking for. A caveat: I make no claim that replacing the current behavior of EnumType.__iter__ with your suggestion will be compatible with Enum's current semantics, only that this will make your definition available. from enum import EnumType, Enum class MyEnumType(EnumType): def __iter__(self): # Must return an Iterator, something with a __next__ method, # not an Iterable. return iter(list(self.__members__)) class MyEnumBase(Enum, metaclass=MyEnumType): pass class MyEnum(MyEnumBase): A = 1 B = 2 C = 3 D = 1 assert list(MyEnum) == ['A', 'B', 'C', 'D']
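One assumption to flag: the EnumType name is only available from the enum module since Python 3.11; on earlier versions the same metaclass is exposed as EnumMeta. If you need to support both, a small compatibility import like this should work:

try:
    from enum import EnumType                # Python 3.11+
except ImportError:
    from enum import EnumMeta as EnumType    # older versions expose the same metaclass as EnumMeta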
3
3