question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---|
78,762,002 | 2024-7-18 | https://stackoverflow.com/questions/78762002/contradictory-error-when-using-polars-read-csv-with-multiple-files-for-csv-gz | I'm trying to read multiple csv.gz files into a dataframe but it's not working as I expect. When I use this globbing pattern: pl.read_csv('folder_1\*.csv.gz') It returns this error: ComputeError: cannot scan compressed csv; use read_csv for compressed data This error occurred with the following context stack: >[1] 'csv scan' failed [2] 'select' input failed to resolve Which is strange considering I'm using the very function they suggest. However, passing this globbing pattern for csv works completely fine: pl.read_csv('folder_1\*.csv') How can I get around this? I'm currently just using glob.glob() and iterating through the list but I thought it'll look neater without it. | When I pass a glob string blah/blah/blah/*.csv.gz to pl.read_csv, it passes this to pl.scan_csv because it is a glob string. See polars.io.csv.functions line 514 et seq in version 1.1.0. There are two separate questions here: How do you read multiple CSVs? You can read multiple CSVs by passing a glob string to pl.scan_csv. It returns a lazy data frame that you can then evaluate with .collect(). How do you read a compressed CSV? You can read certain types of compressed CSVs with pl.read_csv (notable exception being csv.xzs which do not work). But put the two questions together and it turns out pl.scan_csv does not support compressed files at all. This is an open issue. If you want a one liner for reading your CSVs, you will have to fall back on something like a list comprehension with eager execution: from glob import glob l = [pl.read_csv(i) for i in glob('*.csv.gz')] Then do what you will with the list of CSVs (eg pl.concat). | 2 | 2 |
78,763,470 | 2024-7-18 | https://stackoverflow.com/questions/78763470/find-if-any-number-appears-more-than-n-4-times-in-a-sorted-array | I was asked the following in an interview: Given a sorted array with n numbers (where n is a multiple of 4), return whether any number appears more than n/4 times. My initial thought was to iterate over the array and keep a counter: limit = len(nums) // 4 counter = 1 for i in range(1, len(nums)): if nums[i] == nums[i-1]: counter += 1 if counter > limit: return True else: counter = 1 return False However the interviewer asked if I could do better. Sorted array, better than linear complexity, I immediately thought "binary search"... I explored this path but was clueless. After a few minutes, he gave a hint: "if any i exists where nums[i] == nums[i + len(nums) / 4]", does that mean we should return true? I was trying to apply this to my binary search and was not thinking straight so the hint went over my head... Retrospectively it is trivial, however I only see how to apply this with a sliding window: limit = len(nums) // 4 for i in range(limit, len(nums)): if nums[i] == nums[i-limit]: return True return False Is it what the interviewer meant? Or is there a better, non-linear solution? | A number that appears more than n/4 times in nums must be one of the elements of nums[::len(nums)//4]. For each candidate number, we perform a binary search for the first position where it appears, and check whether the same number appears len(nums)//4 spots later: import bisect def check(nums): n = len(nums) assert n % 4 == 0 for i in nums[::n//4]: first_appearance = bisect.bisect_left(nums, i) if first_appearance + n//4 < len(nums) and nums[first_appearance + n//4] == i: return True return False We perform at most 4 binary searches, and at most a constant amount of other work, so the time complexity is O(log n). | 4 | 3 |
78,763,118 | 2024-7-18 | https://stackoverflow.com/questions/78763118/create-discrete-colorbar-from-colormap-in-python | I want to use a given colormap (let's say viridis) and create plots and colorbars with discrete colors from that colormap. I used to use mpl.cm.get_cmap("viridis", 7) for 7 different colors, but this function is deprecated and will be removed. The recommendation is to use matplotlib.colormaps[name] or matplotlib.colormaps.get_cmap(obj) instead, but neither of these allow to specify a number of discrete colors. So far I have only found complicated workarounds online, is anyone aware of a simple, straightforward way as I had originally? Thank you! Here is a sample code with simplified data. It took me a while to get the colorbar axis right how I wanted it, so I would prefer to stick with what I already have instead of a different plt.colorbar() solution. import os.path as op import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt mpl.rcParams.update({'font.size': 30}) from mpl_toolkits.axes_grid1 import make_axes_locatable PLOT = '/tmp/' def main(): data = np.random.random((20,20)) data[5,:] = np.nan fig, ax = plt.subplots(figsize=(8.3,12)) divider = make_axes_locatable(ax) cm = mpl.cm.get_cmap('viridis', 7) cm.set_bad('darkgrey', alpha=1) plt.pcolormesh(data, cmap=cm, vmin=0,vmax=1) ax.axis('off') cax = divider.append_axes("right", size="5%", pad=0.2) cb = plt.colorbar(cax=cax) plt.savefig(op.join(PLOT, 'test.png'), bbox_inches='tight', dpi=300) plt.clf() plt.close() if __name__ == "__main__": main() | A simple option to replace matplotlib.cm.get_cmap (deprecated in 3.7.0) is to use matplotlib.pyplot.get_cmap (that was preserved in the API for backward compatibility). Another one woud be to make a resampled colormap (used under the hood) : resampled(lutsize) [source] Return a new colormap with lutsize entries. Note that the lut is a parameter in get_cmap(name=None, lut=None). cm = mpl.cm.get_cmap("viridis", 7) # (-) cm = plt.get_cmap("viridis", 7) # (+) | 1 | 6 |
78,762,850 | 2024-7-18 | https://stackoverflow.com/questions/78762850/why-the-addion-of-float32-array-and-float64-scalar-is-float32-array-in-numpy | If one add float32 and float64 scalars, the result is float64: float32 is promoted to float64. However, I find that, adding a float32 array and a float64 scalar, the result is a float32 array, rather than one might expect to be a float64 array. I have written some code repreduce the problem. My question is why the dtype of np.add(np.array([a]), b) is float32? import numpy as np import sys print(sys.version_info) # sys.version_info(major=3, minor=10, micro=9, releaselevel='final', serial=0) print(np.__version__) # '1.24.1' a = np.float32(1.1) b = np.float64(2.2) print((a+b).dtype) # float64 print(np.add(a, np.array([b])).dtype) # float64 print(np.add(np.array([a]), np.array([b])).dtype) # float64 print(np.add(np.array([a]), b).dtype) # float32 ?? expcting float64 This contradicts the doc of np.add (https://numpy.org/doc/stable/reference/generated/numpy.add.html), which says in the Notes Equivalent to x1 + x2 in terms of array broadcasting. x, y = numpy.broadcast_arrays(np.array([a]), b) print(np.add(x, y).dtype) ## float64 | why the dtype of np.add(np.array([a]), b) is float32? Because NumPy's promotion rules involving scalars pre-NEP 50 were confusing*. Using the NEP 50 rules in your version of NumPy, the behavior is as you expect: import numpy as np import sys # Use this to turn on the NEP 50 promotion rules np._set_promotion_state("weak") print(sys.version_info) # sys.version_info(major=3, minor=10, micro=9, releaselevel='final', serial=0) print(np.__version__) # '1.24.0' a = np.float32(1.1) b = np.float64(2.2) print((a+b).dtype) # float64 print(np.add(a, np.array([b])).dtype) # float64 print(np.add(np.array([a]), np.array([b])).dtype) # float64 print(np.add(np.array([a]), b).dtype) # float64 These rules are the default in NumPy 2.0. *: More specifically, the old rules depend on the values. If you set a = b = np.finfo(np.float32).max, the result would overflow if kept in float32, so you would get float64 even with the old rules. | 3 | 3 |
78,760,761 | 2024-7-17 | https://stackoverflow.com/questions/78760761/django-database-routers-allow-migrate-model-throws-the-following-typeerror | the "allow_migrate_model" function within my database routers keeps throw the following error when I try to run python manage.py makemigrations: ... File "C:\Users\...\lib\site-packages\django\db\utils.py", line 262, in allow_migrate allow = method(db, app_label, **hints) TypeError: allow_migrate() missing 1 required positional argument: 'app_label' My Routers look like this (routers.py): class BaseRouter: route_app_labels = {} db_name = "" def db_for_read(self, model, **hints) -> Union[str, None]: if model._meta.app_label in self.route_app_labels: return self.db_name return None def db_for_write(self, model, **hints) -> Union[str, None]: if model._meta.app_label in self.route_app_labels: return self.db_name return None def allow_relation(self, obj1, obj2, **hints) -> Union[bool, None]: if ( obj1._meta.app_label in self.route_app_labels or obj2._meta.app_label in self.route_app_labels ): return True return None def allow_migrate( self, db, app_label, model_name=None, **hints ) -> Union[bool, None]: if app_label in self.route_app_labels: return db == self.db_name return None class DefaultRouter(BaseRouter): route_app_labels = {"auth", "contenttypes", "sessions", "admin", "myapp"} db_name = "default" class LibraryRouter(BaseRouter): """ separate router for backend stuff """ route_app_labels = {"library"} db_name = "library" In settings.py DATABASE_ROUTERS = [ myproject.routers.DefaultRouter, myproject.routers.LibraryRouter, ] The above is basically a slightly modified version found on the Django Docs. If I comment out the allow_migrate the makemigrations works, however, I would like to modify the allow_migrate further so that's not a convincing option rn. Ah also for some reason if a rewrite it into a staticmethod (like mentioned here) it works. But then I lose my variables stored in self so also not optimal... I'm on Django 4.2.14 and Python 3.9.10 | Ah also for some reason if a rewrite it into a staticmethod. You likely registered the router as: # settings.py # import of DefaultRouter and LibraryRouter # β¦ DATABASE_ROUTERS = [DefaultRouter, LibraryRouter] Then it is not an instance: it targets it as a class, that is the problem: the method(β¦) is called as BaseRouter.allow_migrate(db, app_label, **hints), not through an instance. 
We can solve this by constructing the instances, like: # settings.py # import of DefaultRouter and LibraryRouter # β¦ DATABASE_ROUTERS = [DefaultRouter(), LibraryRouter()] Another option is to move the logic one level up: by making it a @classmethod [python-doc]: class BaseRouter: route_app_labels = {} db_name = '' @classmethod def db_for_read(cls, model, **hints) -> Union[str, None]: if model._meta.app_label in cls.route_app_labels: return cls.db_name return None @classmethod def db_for_write(cls, model, **hints) -> Union[str, None]: if model._meta.app_label in cls.route_app_labels: return cls.db_name return None @classmethod def allow_relation(cls, obj1, obj2, **hints) -> Union[bool, None]: if ( obj1._meta.app_label in cls.route_app_labels or obj2._meta.app_label in cls.route_app_labels ): return True return None @classmethod def allow_migrate( cls, db, app_label, model_name=None, **hints ) -> Union[bool, None]: if app_label in cls.route_app_labels: return db == cls.db_name return None The DATABASE_ROUTERS should actually work with a string, but I agree that is perhaps not the best way from a software design perspective: if you rename the class, it is possible that the string does not change, and so it refers to a non-existing item that does not per se raises an exception, but the way it was designed is: # settings.py # β¦ DATABASE_ROUTERS = [ 'module_name.DefaultRouter', 'module_name.LibraryRouter' ] | 2 | 2 |
78,760,293 | 2024-7-17 | https://stackoverflow.com/questions/78760293/find-value-in-column-which-contains-list-and-take-another-value-from-next-colum | I have two tables df1 = pd.DataFrame([{'a': 1}, {'a': 2}, {'a': 8}]) df1['b'] = "" df2 = pd.DataFrame([{'e': [1,2,3], 'f': 1},{'e': [4,5,6], 'f': 2},{'e': [7,8,9], 'f': 3}]) e f I would like to insert a value into df1['b'] from df2['f'] based on the condition that value in df['a'] is somewhere in df2['e'] Mostly I would use apply/map df1['a'].apply(lambda x: function) I can get the specific value I need df2['f'].loc[df2['e'].apply(lambda y: value_to_look_for in y)].item() but when I put it together df1['a'].apply(lambda x: (df2['f'].loc[df2['e'].map(lambda y:x in y)].item())) I'm getting an error "ValueError: can only convert an array of size 1 to a Python scalar" Can I ask for a solution, what am I missing? | I would explode, create a mapping Series, and map: df1['b'] = df1['a'].map(df2.explode('e').set_index('e')['f']) Output: a b 0 1 1 1 2 1 2 8 3 Intermediate mapping Series: df2.explode('e').set_index('e')['f'] e 1 1 2 1 3 1 4 2 5 2 6 2 7 3 8 3 9 3 Name: f, dtype: int64 | 2 | 1 |
78,759,687 | 2024-7-17 | https://stackoverflow.com/questions/78759687/numpy-sum-n-successive-array-elements-where-n-comes-from-a-list | Here is code I am trying to optimize: import numpy as np rng = np.random.default_rng() num = np.fromiter((rng.choice([0, 1], size=n, p=[1-qEff, qEff]).sum() for n in num), dtype='int') What it does is it takes array num, where each element represents a bucket with given number of items. For each item it rolls a dice (Bernoulli trial with probability qEff) and if successful, it keeps the item in the bucket, if not it removes it. To avoid the loop, I came with idea to do all the rolling first and then sum number of trials corresponding to particular element in num array. So it would go something like this: rol = rng.choice([0, 1], size=num.sum(), p=[1-qEff, qEff] num = some_smart_sum(rol, num) So now I need to figure out how to do the some_smart_sum() without a loop. So far it looks like np.add.reduceat() should do the trick, but I am having difficulties building the indices parameter. | To add buckets you can get the indices for reduceat from a call to cumsum, but you will have to add a 0 in front and get rid of the last entry, which is equal to the total number of items and is added automatically by reduceat. Something like this should work: indices = np.concatenate(([0], np.cumsum(num)[:-1])) num = np.add.reduceat(roll, indices) A less obvious approach, but probably faster for large inputs is to not do the sums at all... Your sums are the number of successes of n independent Bernoulli trials with success probability p, so they follow a binomial distribution. So you can do that directly instead: rng = np.random.default_rng() num = rng.binomial(num, qEff) | 2 | 2 |
78,759,751 | 2024-7-17 | https://stackoverflow.com/questions/78759751/dropping-rows-of-a-dataframe-where-selected-columns-has-na-values | I have below code df = pd.DataFrame(dict( age=[5, 6, np.nan], born=[pd.NaT, pd.Timestamp('1939-05-27'), pd.Timestamp('1940-04-25')], name=['Alfred', 'Batman', np.nan], toy=[np.nan, 'Batmobile', 'Joker'])) Now I want to drop those rows where columns name OR toy have NaN/empty string values. I tried with below code df[~df[['name', 'toy']].isna()] I was expecting only 2nd row would return. Could you please help where I went wrong? | what is wrong df[~df[['name', 'toy']].isna()] returns a DataFrame (2D), thus you will apply a mask on the whole DataFrame, keeping values that are truthy (the shape remains unchanged). You're essentially doing: df.where(~df[['name', 'toy']].isna()) Which will mask the non target columns with NaN and the NaN values from name/toy with NaN (= no change) how to fix it What you want to do is boolean indexing, you must pass a Series (and thus aggregate with any). Also, since '' is not considered a NaN, you first have to replace them to None: out = df[~df[['name', 'toy']].replace('', None).isna().any(axis=1)] Note that this is equivalent to using notna+all without the ~: out = df[df[['name', 'toy']].replace('', None).notna().all(axis=1)] Output: age born name toy 1 6.0 1939-05-27 Batman Batmobile | 2 | 1 |
78,759,219 | 2024-7-17 | https://stackoverflow.com/questions/78759219/how-to-test-django-models-against-a-db-without-manage-py-in-a-standalone-package | Context: I am developing a standalone Django package that is intended to be added to existing Django projects. This package includes Django models with methods that interact directly with a database. I want to write tests for these methods using pytest and pytest-django. I need to run these tests in a CI/CD environment, specifically using GitHub Actions. Problems Encountered: Database Setup: Since this is a standalone package, it does not include a manage.py file. This makes it challenging to run typical Django management commands like migrate to set up the database schema for testing. Table Creation: I encounter errors indicating that the table for my model does not exist because the necessary migrations are not applied. This is problematic because my tests need to interact with the database tables directly. Fixtures and Setup: I need to ensure the database tables are created and populated with initial data before the tests run. My attempts to use pytest fixtures and manually create database tables within the tests have not been successful. What I've Tried: Using pytest fixtures to set up the initial database state. Manually creating database tables within the test setup using direct SQL execution. Using the mixer library to populate the database with test data. Desired Outcome: A reliable method to ensure the necessary database tables for my models are created before running the tests. Proper setup and teardown of test data within the standalone package. Integration with CI/CD pipelines like GitHub Actions without relying on manage.py commands. Additional Information: I am using an in-memory SQLite database for faster test execution. The test environment should be isolated and not rely on external databases or services. I am looking for guidance on how to approach this problem or best practices for testing Django models in a standalone package using pytest and pytest-django. Any advice or solutions would be greatly appreciated. | You don't need manage.py to run migrate. The same commands are available via the django-admin command that Django installs, but you'll need to set the DJANGO_SETTINGS_MODULE variable that manage.py would ordinarily set. IOW, if you'd generally run python manage.py and your project settings are myproject.settings, you can just as well run env DJANGO_SETTINGS_MODULE=myproject.settings django-admin migrate. That said, that's not your issue here β pytest-django will deal with setting up and tearing down a testing database for you. Generally, when developing a Django package, you'd have a testing project and app next to it to host the package for development use. An example (of a standalone package, tested with GitHub Actions) is here (repository maintained by yours truly); lippukala is the reusable package, and lippukala_test_app is the package that hosts the settings for pytest-django to use; there's a stanza [tool.pytest.ini_options] DJANGO_SETTINGS_MODULE = "lippukala_test_app.settings" in the pyproject.toml file that makes that work. | 2 | 0 |
78,757,696 | 2024-7-17 | https://stackoverflow.com/questions/78757696/python-check-if-the-last-value-in-a-sequence-is-relatively-higher-than-the-res | For a list of percentage data, I need to check if the last value (90.2) is somehow higher and somewhat "abnormal" than the rest of the data. Clearly it is in this sequence. delivery_pct = [59.45, 55.2, 54.16, 66.57, 68.62, 64.19, 60.57, 44.12, 71.52, 90.2] But for the below sequnece the last value is not so: delivery_pct = [ 63.6, 62.64, 60.36, 72.8, 70.86, 40.51, 52.06, 61.47, 51.55, 74.03 ] How do I check if the last value is abnormally higher than the rest? About Data: The data point has the range between 0-100%. But since this is percentage of delivery taken for a stock for last 10 days, so it is usually range bound based on nature of stock (highly traded vs less frequently traded), unless something good happens about the stock and there is higher delivery of that stock on that day in anticipation of good news. | Once you've determined a threshold (deviation from mean) you could do this: import statistics t = 2 # this is the crucial value pct = [59.45, 55.2, 54.16, 66.57, 68.62, 64.19, 60.57, 44.12, 71.52, 90.2] mean = statistics.mean(pct) tsd = statistics.pstdev(pct) * t lo = mean - tsd hi = mean + tsd print(*[x for x in pct if x < lo or x > hi], sep="\n") Output: 90.2 It's the threshold value that (effectively) determines what's "abnormal" The interquartile range (IQR) method produces the same result: import statistics pct = [59.45, 55.2, 54.16, 66.57, 68.62, 64.19, 60.57, 44.12, 71.52, 90.2] spct = sorted(pct) m = len(spct) // 2 Q1 = statistics.median(spct[:m]) m += len(spct) % 2 # increment m if list length is odd Q3 = statistics.median(spct[m:]) IQR = Q3 - Q1 lo = Q1 - 1.5 * IQR hi = Q3 + 1.5 * IQR print(*[x for x in pct if x < lo or x > hi], sep="\n") Output: 90.2 | 2 | 3 |
78,729,842 | 2024-7-10 | https://stackoverflow.com/questions/78729842/symbol-not-found-in-flat-namespace-bcp-batch | I'm using pyenv to manage my python versions. When i use python 3.12.4 or python3.9^, i get this error: File "src/pymssql/_pymssql.pyx", line 1, in init pymssql._pymssql ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/pymssql/_mssql.cpython-312-darwin.so, 0x0002): symbol not found in flat namespace '_bcp_batch' my script: from sqlalchemy import create_engine, text import bi_pass v_server = bi_pass.v_server v_user = bi_pass.v_user v_password = bi_pass.v_password v_database = bi_pass.v_database source_engine = create_engine(f"mssql+pymssql://{v_user}:{v_password}@{v_server}:1433/{v_database}") source_conn = source_engine.connect() query = text( """ SELECT * from test""") result = source_conn.execute(query) rows = result.fetchall() source_conn.close() source_engine.dispose() I have a Mac m3 chip, how can I fix it? Thanks | also another way that works: pip install --no-binary :all: pymssql --no-cache --force | 3 | 1 |
78,735,592 | 2024-7-11 | https://stackoverflow.com/questions/78735592/how-to-compute-a-column-in-polars-dataframe-using-np-linspace | Consider the following pl.DataFrame: df = pl.DataFrame( data={ "np_linspace_start": [0, 0, 0], "np_linspace_stop": [8, 6, 7], "np_linspace_num": [5, 4, 4] } ) shape: (3, 3) βββββββββββββββββββββ¬βββββββββββββββββββ¬ββββββββββββββββββ β np_linspace_start β np_linspace_stop β np_linspace_num β β --- β --- β --- β β i64 β i64 β i64 β βββββββββββββββββββββͺβββββββββββββββββββͺββββββββββββββββββ‘ β 0 β 8 β 5 β β 0 β 6 β 4 β β 0 β 7 β 4 β βββββββββββββββββββββ΄βββββββββββββββββββ΄ββββββββββββββββββ How can I create a new column ls, that is the result of the np.linspace function? This column will hold an np.array. I was looking for something along those lines: df.with_columns( ls=np.linspace( start=pl.col("np_linspace_start"), stop=pl.col("np_linspace_stop"), num=pl.col("np_linspace_num") ) ) Is there a polars equivalent to np.linspace? | As mentioned in the comments, adding an np.linspace-style function to polars is an open feature request. Until this is implemented a simple implementation using polars' native expression API could look as follows. Update. Modern polars supports broadcasting of operations between scalar and list columns. This can be used to shift and scale an integer list column created using pl.int_ranges and improve on the initial implementation outlined below. def pl_linspace(start: str | pl.Expr, stop: str | pl.Expr, num: str | pl.Expr) -> pl.Expr: start = pl.col(start) if isinstance(start, str) else start stop = pl.col(stop) if isinstance(stop, str) else stop num = pl.col(num) if isinstance(num, str) else num grid = pl.int_ranges(num) _scale = (stop - start) / (num - 1) _offset = start return grid * _scale + _offset df.with_columns( pl_linspace( "np_linspace_start", "np_linspace_stop", "np_linspace_num", ).alias("pl_linspace") ) shape: (3, 4) βββββββββββββββββββββ¬βββββββββββββββββββ¬ββββββββββββββββββ¬βββββββββββββββββββββββββββββββββ β np_linspace_start β np_linspace_stop β np_linspace_num β pl_linspace β β --- β --- β --- β --- β β i64 β i64 β i64 β list[f64] β βββββββββββββββββββββͺβββββββββββββββββββͺββββββββββββββββββͺβββββββββββββββββββββββββββββββββ‘ β 0 β 8 β 5 β [0.0, 2.0, 4.0, 6.0, 8.0] β β 0 β 6 β 4 β [0.0, 2.0, 4.0, 6.0] β β 0 β 7 β 4 β [0.0, 2.333333, 4.666667, 7.0] β βββββββββββββββββββββ΄βββββββββββββββββββ΄ββββββββββββββββββ΄βββββββββββββββββββββββββββββββββ Note. If num is 1, the division when computing _scale will result in infinite values. This can be avoided by adding the following to pl_linspace. _scale = pl.when(_scale.is_infinite()).then(pl.lit(0)).otherwise(_scale) Outdated (but relevant for older versions of polars). First, we use pl.int_range (thanks to @Dean MacGregor) to create a range of integers from 0 to num (exclusive). Next, we rescale and shift the range according to start, stop, and num. Finally, we implode the column with pl.Expr.implode to obtain a column with the range as list for each row. def pl_linspace(start: pl.Expr, stop: pl.Expr, num: pl.Expr) -> pl.Expr: grid = pl.int_range(num) _scale = (stop - start) / (num - 1) _offset = start return (grid * _scale + _offset).implode().over(pl.int_range(pl.len())) df.with_columns( pl_linspace( start=pl.col("np_linspace_start"), stop=pl.col("np_linspace_stop"), num=pl.col("np_linspace_num"), ).alias("pl_linspace") ) | 4 | 6 |
78,756,903 | 2024-7-16 | https://stackoverflow.com/questions/78756903/ps-camera-taking-too-much-light-on-opencv | I installed a third-party driver and firmware to use my PS 4 Camera on PC and I need it to do a object detection project. Everything works fine on camera on Windows (the screen resolution looks buggy but this is simply how Windows read Sony data format). Here is how it looks (I have a red light in the room): -- When I open the same cam on OpenCV the image was looking very bright, or it looks very low quality. Here is how the image looks in OpenCV when saved with imwrite(): In the first image you can clearly read ALIENWARE under the monitor, in the second image is clearly granulated. I wrote the simplest code just to read and show the cam: import cv2 #read webcam = cv2.VideoCapture(0) #visualize while True: ret, frame = webcam.read() cv2.imshow('webcam', frame) if cv2.waitKey(40) & 0xFF == ord('q'): break webcam.release() cv2.destroyAllWindows() How can I fix this? | I solved the problem! With ffmpeg/ffplay I figured out that this cam can only support certain frame/resolution combinations: [dshow @ 000002e52f473900] pixel_format=yuyv422 min s=3448x808 fps=8 max s=3448x808 fps=60.0002 (tv, bt470bg/bt709/unknown, topleft) [dshow @ 000002e52f473900] pixel_format=yuyv422 min s=1748x408 fps=8 max s=1748x408 fps=120 (tv, bt470bg/bt709/unknown, topleft) [dshow @ 000002e52f473900] pixel_format=yuyv422 min s=898x200 fps=30 max s=898x200 fps=240.004 (tv, bt470bg/bt709/unknown, topleft) it appears that if you try to lower/increase the fps while keeping the same resolution it adjust the resolution itself to the nearest supported resolution but keeping the said framerate (if between min-max interval). That happens on OpenCV as well. Now, with width = 3448 height = 808 cap.set(cv2.CAP_PROP_FRAME_WIDTH, width) cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height) fps = 60 cap.set(cv2.CAP_PROP_FPS, fps) it outputs this image: much better image Thank you all for your help! | 2 | 1 |
78,734,751 | 2024-7-11 | https://stackoverflow.com/questions/78734751/how-do-i-persist-faiss-indexes | In the langchain wiki of FAISS, https://python.langchain.com/v0.2/docs/integrations/vectorstores/faiss/, it only talks about saving indexes to files. db.save_local("faiss_index") new_db = FAISS.load_local("faiss_index", embeddings) docs = new_db.similarity_search(query) How can I save the indexes to databases, such that we can organize and concurrently access multiple indexes? Searched online but could not get much info on this. Can FAISS be used with any kind of distributed databases? | In fact, FAISS is considered as an in-memory database itself in order to vector search based on similarity that you can serialize and deserialize the indexes using functions like write_index and read_index within the FAISS interface directly or using save_local and load_local within the LangChain integration which typically uses the pickle for serialization. If you need to store serialized files, you could manually save them in a NoSQL database like MongoDB as binary data, and then deserialize and retrieve them when needed, however, it is not the best practice! If you are looking for a vector database that is not in-memory and capable in a scalable system, you might want to consider using Milvus which is designed for this purpose. | 2 | 2 |
78,754,451 | 2024-7-16 | https://stackoverflow.com/questions/78754451/defining-previous-timestep-dependent-ramping-constraints-in-python-gekko | I am building an MPC optimization (IMODE = 6) in Python GEKKO with multiple state (SV) and manipulated (MV) variables. Inside my problem, I would like to define ramping constraints of the following kind: |x(tk) - x(tk-1)| <= Cx for the SV, where Cx is a constant, tk is the current time step and tk-1 is the previous 0.9 <= u(tk) / u(tk-1) <= 1.1 for the MV I tried using the build-in tuning parameter DCOST for the MV, but as far as I understand it suggests using a constant value, which is not my case. Alternatively I tried solving the model iteratively with 1 time step per iteration and taking the values of the previous iteration as starting values for the next one, but it doesn't seem to work. Any clue of how to implement this or an example simple model implementation with such constraints would be of great help. | For the constraint |x(tk) - x(tk-1)| <= Cx for a state variable, create a new variable dx that is the defined as the derivative of the variable x and set a constraint on the ramp rate. # State Variable x = m.SV(value=0,ub=2) dx = m.SV(value=0,lb=-0.5,ub=0.5) # Process model m.Equation(5*x.dt() == -x + 3*u) m.Equation(dx == x.dt()) For the constraint 0.9 <= u(tk) / u(tk-1) <= 1.1 for the MV, set up a delay variable and constrain the value of u to be no more than a 10% change from the prior value. u = m.MV(value=0.8, lb=0.1, ub=10) # avoid divide-by-zero u.STATUS = 1 # allow optimizer to change ud = m.Var(value=0.8) m.delay(u,ud,1) # delay of 1 m.Equation(u<=1.1*ud) m.Equation(u>=0.9*ud) If the MV value u ever reaches zero, then the value won't be allowed to change further. It is generally safer to use something like DMAX or DMAXHI/DMAXLO to configure the rate of change of the MV. Below is a complete and minimal example that demonstrates the two constraint methods. from gekko import GEKKO import numpy as np import matplotlib.pyplot as plt m = GEKKO() m.time = np.linspace(0,10,21) # Manipulated variable u = m.MV(value=0.8, lb=0.1, ub=10) # avoid divide-by-zero u.STATUS = 1 # allow optimizer to change #u.DCOST = 0.1 # smooth out MV movement #u.DMAX = 0.5 # limit MV rate of change ud = m.Var(value=0.8) m.delay(u,ud,1) # delay of 1 m.Equation(u<=1.1*ud) m.Equation(u>=0.9*ud) # State Variable x = m.SV(value=0,ub=2) dx = m.SV(value=0,lb=-0.5,ub=0.5) # Process model m.Equation(5*x.dt() == -x + 3*u) m.Equation(dx == x.dt()) # Objective m.Maximize(x) m.options.IMODE = 6 # control m.solve(disp=False) plt.figure(figsize=(6,3.5)) plt.subplot(2,1,1) plt.step(m.time,u.value,'b-',label='MV Optimized') plt.legend(); plt.ylabel('Input') plt.subplot(2,1,2) plt.plot(m.time,x.value,'r--',label='SV Response') plt.ylabel('State'); plt.xlabel('Time') plt.legend(); plt.tight_layout() plt.show() | 2 | 0 |
78,752,644 | 2024-7-16 | https://stackoverflow.com/questions/78752644/pytest-fixtures-not-found-when-running-tests-from-pycharm-ide | I am having trouble with pytest fixtures in my project. I have a root conftest.py file with some general-use fixtures and isolated conftest.py files for specific tests. The folder structure is as follows: product-testing/ βββ conftest.py # Root conftest.py βββ tests/ β βββ grpc_tests/ β βββ collections/ β βββ test_collections.py βββ fixtures/ βββ collections/ βββ conftest.py # Used by test_collections.py specifically When I try to run the tests from the IDE (PyCharm) using the "run button" near the test function, pytest can't initialize fixtures from the root conftest.py. The test code is something like this: import datetime import allure from faker import Faker from fixtures.collections.conftest import collection fake = Faker() @allure.title("Get list of collections") def test_get_collections_list(collection, postgres): with allure.step("Send request to get the collection"): response = collection.collection_list( limit=1, offset=0, # ...And the rest of the code ) here is a contest from fixtures/collection/ import datetime import pytest from faker import Faker from path.to.file import pim_collections_pb2 fake = Faker() @pytest.fixture(scope="session") def collection(grpc_pages): def create_collection(collection_id=None, store_name=None, items_ids=None, **kwargs): default_params = { "id": collection_id, "store_name": store_name, "item_ids": items_ids, "is_active": True, "description_eng": fake.text(), # rest of the code and this is the root conftest.py file import pytest from faker import Faker from pages.manager import DBManager, GrpcPages, RestPages fake = Faker() @pytest.fixture(scope="session") def grpc_pages(): return GrpcPages() @pytest.fixture(scope="session") def rest_pages(): return RestPages() @pytest.fixture(scope="session") def postgres(): return DBManager() #some other code The error message I get is: test setup failed file .../tests/grpc_tests/collections/test_collections.py, line 9 @allure.title("Create a multi-collection") def test_create_multicollection(collection, postgres): file .../fixtures/collections/conftest.py, line 10 @pytest.fixture(scope="session") def collection(grpc_pages): E fixture 'grpc_pages' not found > available fixtures: *session*faker, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, collection, doctest_namespace, factoryboy_request, faker, monkeypatch, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory > use 'pytest --fixtures [testpath]' for help on them. However, when I run the tests through a terminal with pytest ., there is no error. It seems like pytest can see the root conftest.py in one case but not in another. I have tried to Explicitly importing the missing fixtures directly from the root conftest.py to resolve the issue. This works for single test runs but causes errors when running all tests using CLI commands like pytest ., with the following error: ``` ImportError while importing test module '/path/to/test_collections.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: .../importlib/init.py:127: in import_module return bootstrap.gcd_import(name[level:], package, level) tests/.../test_collections.py:6: in <module> from conftest import grpc_pages E ImportError: cannot import name 'grpc_pages' from 'conftest' (/path/to/fixtures/users/conftest.py) ``` It feels like pytest is trying to find grpc_pages in the isolated conftest.py instead of the root file. Also, running isolated tests through a command like pytest -k "test_create_multicollection" ./tests/grpc_tests/collections/ pytest path/to/test_collections.py::test_create_multicollection works when the import is not explicit. However, I need to make the run button in the IDE work for less experienced, non-technical team members. So my questions are: Has anyone encountered this issue where PyCharm can't find fixtures from the root conftest.py when running individual tests, but pytest can when run from the command line? How can I fix this so that both PyCharm's run button and command-line pytest work consistently? Additional Context: Pytest version: pytest 7.4.4 PyCharm version: 2023.2.1 Python version: 3.9.6 Thank you in advance for your help! | After hours of debugging and calls I found the solution, IDK maybe one day it will help someone. First of all, putting contest files in a separate directory from pytest is not a good idea, pytest can't see the files, so the files should be put in the same directory or at least on the path to the test file. So I have moved the files from the fixtures folder to the tests folder. product-testing/ βββ conftest.py # Root conftest.py βββ tests/ β βββ grpc_tests/ β βββ collections/ β βββ test_collections.py β βββ conftest.py # Used by test_collections.py specifically Also, I got rid of explicitly importing all conftest files and functions from the file as pytest should handle it by itself. The code is like this. import datetime import allure from faker import Faker #from fixtures.collections.conftest import collection -> get rid of this file fake = Faker() @allure.title("Get list of collections") def test_get_collections_list(collection, postgres): with allure.step("Send request to get the collection"): response = collection.collection_list( limit=1, offset=0, # ...And the rest of the code ) And last but not least, I have changed the root testing directory to the root folder of the project by right-clicking on the project folder and choosing Mark directory As > Test Source Root These changes were made based on the official documentation of pytest and they solved my issues. | 5 | 2
78,756,165 | 2024-7-16 | https://stackoverflow.com/questions/78756165/how-to-contract-nodes-and-apply-functions-to-node-attributes | I have a simple graph with the attributes height and area import networkx as nx nodes_list = [("A", {"height":10, "area":100}), ("B", {"height":12, "area":200}), ("C", {"height":8, "area":150}), ("D", {"height":9, "area":120})] G = nx.Graph() G.add_nodes_from(nodes_list) edges_list = [("A","B"), ("B","C"), ("C","D")] G.add_edges_from(edges_list) nx.draw(G, with_labels=True, node_color="red") I want to contract nodes and update the attributes with the average of height and the sum of area of the contracted nodes. There must be some easier way than: H = nx.contracted_nodes(G, "B","C") #print(H.nodes["B"]) #{'height': 12, 'area': 200, 'contraction': {'C': {'height': 8, 'area': 150}}} #Calc the average of node B and C's heights new_height = (H.nodes["B"]["height"] + H.nodes["B"]["contraction"]["C"]["height"])/2 #10 #Calc the sum of of node B and C's areas new_area = H.nodes["B"]["area"] + H.nodes["B"]["contraction"]["C"]["area"] #Drop old attributes del(H.nodes["B"]["height"], H.nodes["B"]["area"], H.nodes["B"]["contraction"]) nx.set_node_attributes(H, {"B":{"height":new_height, "area":new_area}}) print(H.nodes["B"]) #{'height': 10.0, 'area': 350} Can I get networkx to average height and sum area, every time I contract multiple nodes? | I don't think there is a builtin way, especially since you have custom aggregation functions. Here is an alternative way, using a custom function: def contract_merge(G, n1, n2): H = nx.contracted_nodes(G, n1, n2) d_n1 = H.nodes[n1] d_n2 = d_n1.pop('contraction')[n2] d_n1['height'] = (d_n1['height']+d_n2['height'])/2 d_n1['area'] = d_n1['area']+d_n2['area'] nx.set_node_attributes(H, {n1: d_n1}) return H H = contract_merge(G, 'B', 'C') print(H.nodes(data=True)) Output: NodeDataView({'A': {'height': 10, 'area': 100}, 'B': {'height': 10.0, 'area': 350}, 'D': {'height': 9, 'area': 120}}) | 2 | 1 |
78,755,449 | 2024-7-16 | https://stackoverflow.com/questions/78755449/type-of-union-or-union-of-types | In Python, is Type[Union[A, B, C]] the same as Union[Type[A], Type[B], Type[C]]? I think they are equivalent, and the interpreter seems to agree. ChatGPT (which, from my past experience, tends to be wrong with this type of questions) disagrees so I was wondering which one is more correct. To be clear: I want a union of the types of A, B or C, not a union of instances of A, B or C. The first option is shorter and IMHO more readable. In other words, given these definitions: class A: pass class B: pass class C: pass def foo(my_class: Type[Union[A, B, C]]): pass I expect this usage to be correct: foo(A) # pass the class A itself But not this one: foo(A()) # pass an instance of A | They are the same. You can use reveal_type and check. class A: ... class B: ... class C: ... def something(a: type[A | B | C]) -> None: reveal_type(a) def something_else(b: type[A] | type[B]| type[C]) -> None: reveal_type(b) mypy main.py main.py:6: note: Revealed type is "Union[type[main.A], type[main.B], type[main.C]]" main.py:9: note: Revealed type is "Union[type[main.A], type[main.B], type[main.C]]" pyright main.py main.py:6:17 - information: Type of "a" is "type[A] | type[B] | type[C]" main.py:9:17 - information: Type of "b" is "type[A] | type[B] | type[C]" Edit: Older syntax from typing import Union, Type class A: ... class B: ... class C: ... def something(a: type[A | B | C]) -> None: reveal_type(a) def something_else(b: type[A] | type[B]| type[C]) -> None: reveal_type(b) def old_something(c: Type[Union[A, B, C]]) -> None: reveal_type(c) def old_something_else(d: Union[Type[A], Type[B], Type[C]]) -> None: reveal_type(d) mypy main.py main.py:7: note: Revealed type is "Union[type[main.A], type[main.B], type[main.C]]" main.py:10: note: Revealed type is "Union[type[main.A], type[main.B], type[main.C]]" main.py:13: note: Revealed type is "Union[type[main.A], type[main.B], type[main.C]]" main.py:16: note: Revealed type is "Union[type[main.A], type[main.B], type[main.C]]" pyright main.py main.py:7:17 - information: Type of "a" is "type[A] | type[B] | type[C]" main.py:10:17 - information: Type of "b" is "type[A] | type[B] | type[C]" main.py:13:17 - information: Type of "c" is "type[A] | type[B] | type[C]" main.py:16:17 - information: Type of "d" is "type[A] | type[B] | type[C]" | 3 | 6 |
78,752,520 | 2024-7-16 | https://stackoverflow.com/questions/78752520/how-can-i-get-the-subarray-indicies-of-a-binary-array-using-numpy | I have an array that looks like this r = np.array([1, 0, 0, 1, 1, 1, 0, 1, 1, 1]) and I want an output of [(0, 0), (3, 5), (7, 9)] right now I am able to accomplish this with the following function def get_indicies(array): indicies = [] xstart = None for x, col in enumerate(array): if col == 0 and xstart is not None: indicies.append((xstart, x - 1)) xstart = None elif col == 1 and xstart is None: xstart = x if xstart is not None: indicies.append((xstart, x)) return indicies However for arrays with 2 million elements this method is slow (~8 seconds). Is there a way using numpy's "built-ins" (.argwhere, .split, etc) to make this faster? This thread is the closest thing I've found, however I can't seem to get the right combination to solve my problem. | The solution I came up with is to separately find the indices where the first occurrence of 1 is and the indices where the last occurrence of 1 is. def get_indices2(arr): value_diff = np.diff(arr, prepend=0, append=0) start_idx = np.nonzero(value_diff == 1)[0] end_idx = np.nonzero(value_diff == -1)[0] - 1 # -1 to include endpoint idx = np.stack((start_idx, end_idx), axis=-1) return idx Note that the result is not a list of tuples, but a 2D numpy array like the one below. array([[0, 0], [3, 5], [7, 9]]) Here is benchmark: import timeit import numpy as np def get_indices(array): indices = [] xstart = None for x, col in enumerate(array): if col == 0 and xstart is not None: indices.append((xstart, x - 1)) xstart = None elif col == 1 and xstart is None: xstart = x if xstart is not None: indices.append((xstart, x)) return indices def get_indices2(arr): value_diff = np.diff(arr, prepend=0, append=0) start_idx = np.nonzero(value_diff == 1)[0] end_idx = np.nonzero(value_diff == -1)[0] - 1 # -1 to include endpoint idx = np.stack((start_idx, end_idx), axis=-1) return idx def benchmark(): rng = np.random.default_rng(0) arr = rng.integers(0, 1, endpoint=True, size=20_000_000) expected = np.asarray(get_indices(arr)) for f in [get_indices, get_indices2]: t = np.asarray(f(arr)) assert expected.shape == t.shape and np.array_equal(expected, t), f.__name__ elapsed = min(timeit.repeat(lambda: f(arr), repeat=10, number=1)) print(f"{f.__name__:25}: {elapsed}") benchmark() Result: get_indices : 4.708652864210308 get_indices2 : 0.21052680909633636 One thing that concerns me is that on my PC, your function takes less than 5 seconds to process 20 million elements, while you mention that it takes 8 seconds to process 2 million elements. So I may be missing something. Update Matt provided an elegant solution using reshape in his answer. However, if performance is important, I would suggest optimizing the np.diff part first. def custom_int8_diff(arr): out = np.empty(len(arr) + 1, dtype=np.int8) out[0] = arr[0] out[-1] = -arr[-1] np.subtract(arr[1:], arr[:-1], out=out[1:-1]) return out def get_indices2_custom_diff(arr): mask = custom_int8_diff(arr) # Use custom diff. Others unchanged. start_idx = np.nonzero(mask == 1)[0] end_idx = np.nonzero(mask == -1)[0] - 1 return np.stack((start_idx, end_idx), axis=-1) For Matt's reshape solution, we can use logical_xor, which is even faster. def custom_bool_diff(arr): out = np.empty(len(arr) + 1, dtype=np.bool_) out[0] = arr[0] out[-1] = arr[-1] np.logical_xor(arr[1:], arr[:-1], out=out[1:-1]) return out def get_indices3_custom_diff(arr): value_diff = custom_bool_diff(arr) # Use custom diff. Others unchanged. idx = np.nonzero(value_diff)[0] idx[1::2] -= 1 return idx.reshape(-1, 2) Benchmark (2 million elements): get_indices : 0.463582425378263 get_indices2 : 0.01675519533455372 get_indices3 : 0.01814895775169134 get_indices2_custom_diff : 0.010258681140840054 get_indices3_custom_diff : 0.006368924863636494 Benchmark (20 million elements): get_indices : 4.708652864210308 get_indices2 : 0.21052680909633636 get_indices3 : 0.19463363010436296 get_indices2_custom_diff : 0.14093663357198238 get_indices3_custom_diff : 0.08207075204700232 | 3 | 5
78,750,943 | 2024-7-15 | https://stackoverflow.com/questions/78750943/split-a-pandas-dataframe-column-into-multiple-based-on-text-values | I have a pandas dataframe with a column. id text_col 1 Was it Accurate?: Yes\n\nReasoning: This is a sample : text 2 Was it Accurate?: Yes\n\nReasoning: This is a :sample 2 text 3 Was it Accurate?: No\n\nReasoning: This is a sample: 1. text I have to break the text_col into two columms "Was it accurate?" and "Reasoning" The final dataframe should look like: id Was it Accurate? Reasoning 1 Yes This is a sample : text 2 Yes This is a :sample 2 text 3 No This is a sample: 1. text The text values can have multiple : "colons" in it I tried splitting the text_col using "\n\nReasoning:" but did'nt get desired result.It is leaving out the text after second colon (:) df[['Was it Accurate?', 'Reasoning']] = df['text_col'].str.extract(r'Was it Accurate\?: (Yes|No)\n\nReasoning: (.*)') Edit: I applied the function on the LLM_response column of my sample_100 dataframe. and printed the first row. if you see closely the sample_100.iloc[0]['Reasoning'] has stripped off all the text after : Temp dict obj to test on: {'id_no': [8736215], 'Notes': [' Temp Notes Sample xxxxxxxxxxxxx [4/21/23, 2:10 PM] Work started -work complete-'], 'ProblemDescription': ['Sample problem description xxxxxxxxxxxxxxxxxxxxxxxx'], 'LLM_response': ['Accurate & Understandable: Yes\n\nReasoning: The Technician notes are accurate and understandable as:\n1) The technician provided detailed steps on how they addressed the mold issue by removing materials, treating surfaces, priming, and painting them.\n2) Additionally, even though there was non-repair related information (toilet repairs), the main issue of mold growth was addressed.\n3) The process described logically follows the process for remedying a mold issue, which aligns with the problem description.'], 'Accurate & Understandable': ['Yes'], 'Reasoning': ['The Technician notes are accurate and understandable as:']} | The issue is not due to colons, but to newlines in your sample text. Those are not matched by . by default. You should add the re.DOTALL flag. Example: import re import pandas as pd df = pd.DataFrame({'id': [1, 2, 3], 'text_col': ['Was it Accurate?: Yes\n\nReasoning: This is a sample: text', 'Was it Accurate?: Yes\n\nReasoning: This is a sample:\n with newline', 'Was it Accurate?: No\n\nReasoning: This is a sample text']}) df[['Was it Accurate?', 'Reasoning']] = (df['text_col'] .str.extract(r'Was it Accurate\?: (Yes|No)\n\nReasoning: (.*)', flags=re.DOTALL) ) Output: id text_col Was it Accurate? Reasoning 0 1 Was it Accurate?: Yes\n\nReasoning: This is a sample: text Yes This is a sample: text 1 2 Was it Accurate?: Yes\n\nReasoning: This is a sample:\n with newline Yes This is a sample:\n with newline 2 3 Was it Accurate?: No\n\nReasoning: This is a sample text No This is a sample text | 2 | 0 |
78,753,705 | 2024-7-16 | https://stackoverflow.com/questions/78753705/resolving-runtime-nzec-error-in-python-code | I was solving a problem on HackerEarth which is as follows: You are provided an array A of size N that contains non-negative integers. Your task is to determine whether the number that is formed by selecting the last digit of all the N numbers is divisible by 10. Print "Yes" if possible and "No" if not. Here's what I did: size = int(input()) integers = input() integers = [i for i in integers.split(' ')] last_digit = [i[-1] for i in integers] number = int(''.join(last_digit)) if number % 10 == 0: print("Yes") else: print("No") The code seems correct to me, but the problem is that the online HackerEarth platform throws a runtime error NZEC, about which I have no idea. So, I want to know why(and what kind of) error occured here and what is the problem with my code. | https://help.hackerearth.com/hc/en-us/articles/360002673433-types-of-errors Hackerearth says For interpreted languages like Python, NZEC will usually mean Usually means that your program has either crashed or raised an uncaught exception Many runtime errors Usage of an external library that is not used by the judge For your case: number = int(''.join(last_digit)) number is becoming a very very big number which is the reason you are getting NZEC error (they have limited memory and time to execute the problem) little modified code: size = int(input()) integers = input() integers = [i for i in integers.split(' ')] last_digit = [i[-1] for i in integers] number = ''.join(last_digit) # keeping the number as string if number[-1] == '0': # checking the last digit of the string to be 0 print("Yes") else: print("No") | 2 | 0 |
78,753,820 | 2024-7-16 | https://stackoverflow.com/questions/78753820/using-polars-with-python-and-being-thrown-the-following-exception-attributeerro | I am trying to apply a function to a Dataframe column (series) that retrieves the day of the week based on the timestamps in the column. However, I am being thrown the following exception, even though the Polars docs include documentation for polars.Expr.apply. AttributeError: 'Expr' object has no attribute 'apply'. My goal is to create a new column of day names using the following code where the alertTime column is of dtype datetime64: def get_day(dt_obj): days_of_week = ['Monday','Tuesday','Wednesday','Thursday','Friday','Saturday','Sunday'] return days_of_week[dt_obj.weekday()] # Get the day of the week from the timestamp df = df.with_columns( pl.col('alertTime').apply(get_day, return_dtype=pl.Utf8).alias('day_of_week') ) Could anyone help with where I might be going wrong? | apply was renamed to .map_elements() some time ago. Previous versions printed a deprecation warning, but it was eventually removed after a grace period. You're likely looking at the docs for an older version of Polars, but there is a "version switcher" on the docs site: As for the actual task, you can also do it natively using .dt.to_string() import datetime import polars as pl pl.select( pl.lit(str(datetime.datetime.now())) .str.to_datetime() .dt.to_string("%A") ) shape: (1, 1) βββββββββββ β literal β β --- β β str β βββββββββββ‘ β Tuesday β βββββββββββ | 7 | 6 |
78,743,465 | 2024-7-13 | https://stackoverflow.com/questions/78743465/python-3-10-4-scikit-learn-import-hangs-when-executing-via-cpp | Python 3.10.4 is embedded into cpp application. I'm trying to import sklearn library which is installed at custom location using pip --target. sklearn custom path (--target path) is appended to sys.path. Below is a function from the script which just prints the version information. Execution using Command Line works well as shown below. python3.10 -c 'from try_sklearn import *; createandload()' Output [INFO ] [try_sklearn.py:23] 3.10.4 (main, Aug 4 2023, 01:24:50) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] [INFO ] [try_sklearn.py:24] sklearn /users/xxxx/temp/python/scikit-learn/sklearn/__init__.py Version = 1.5.1 The same script when called using CPP, hangs at import sklearn Other libraries like pandas, numpy etc works without any issues. | https://github.com/scipy/scipy/issues/21189 Looks like Scipy and Numpy do not support Embedded python | 5 | 2 |
78,753,451 | 2024-7-16 | https://stackoverflow.com/questions/78753451/should-i-and-how-should-i-align-the-types-hints-my-abstract-class-and-concrete | I'm having an abstract class in Python like this class Sinker(ABC): @abstractmethod def batch_write(self, data) -> None: pass The concrete class is expected to write data to some cloud databases. My intention of writing abstract class this way was to make sure the concrete classes always implement the batch_write() method. However, I'm not sure what type hints I should put in for data because one of my concrete class is expecting List[str] while another concrete class is expecting List[dict]. Here are just some options popped up in my head so far. Any List[Any] List[str|dict] What would be a better way to do so? Is there a style guide for this situation? I tried out all three options and since they're just type hints, it won't cause me any trouble. But I just want to make sure I'm writing codes that are aligned with corresponding best practices in OOP if there's any. | You can make Sinker a generic class so the type of items in the list passed to batch_write can be parameterized: from abc import ABC, abstractmethod class Sinker[T](ABC): @abstractmethod def batch_write(self, data: list[T]) -> None: ... class StrSinker(Sinker[str]): def batch_write(self, data: list[str]) -> None: pass class DictSinker(Sinker[dict]): def batch_write(self, data: list[dict]) -> None: pass Demo with Pyright This way, the type checker would be able to spot an incorrect type hint for you. For example, with: class StrSinker(Sinker[str]): def batch_write(self, data: list[int]) -> None: pass Pyright would produce the following complaint: Method "batch_write" overrides class "Sinker" in an incompatible manner Parameter 2 type mismatch: base parameter is type "list[str]", override parameter is type "list[int]" "list[str]" is incompatible with "list[int]" Type parameter "_T@list" is invariant, but "str" is not the same as "int" Consider switching from "list" to "Sequence" which is covariant (reportIncompatibleMethodOverride) Demo with Pyright | 2 | 2 |
78,749,739 | 2024-7-15 | https://stackoverflow.com/questions/78749739/why-does-iterating-break-up-my-text-file-lines-while-a-generator-doesnt | For each line of a text file I want to do heavy calculations. The amount of lines can be millions so I'm using multiprocessing: num_workers = 1 with open(my_file, 'r') as f: with multiprocessing.pool.ThreadPool(num_workers) as pool: for data in pool.imap(my_func, f, 100): print(data) I'm testing interactively hence ThreadPool() (will be replaced in final version). For map or imap documentation says: This method chops the iterable into a number of chunks which it submits to the process pool as separate tasks Since my opened file is an iterable (each iteration is a line) I expect this to work but it breaks up lines in the middle. I made a generator that returns lines as expected, but I want to understand why the generator is needed at all. Why isn't the chunking happening on line boundaries? UPDATE: to clarify if I use this generator: def chunked_reader(file, chunk_size=100): with open(file, 'r') as f: chunk = [] i = 0 for line in f: if i == chunk_size: yield chunk chunk = [] i = 0 i += 1 chunk.append(line) # yield final chunk yield chunk the calculation works, returns expected values. If I use the file object directly, I get errors somehow indicating the chunking is not splitting on line boundaries. UPDATE 2: it's chunking the line into each individual character and not line by line. I can also take another environment with python 3.9 and that has the same behavior. So it has been like this for some time and seems to be "as designed" but not very intuitive. UPDATE 3: To clarify on the marked solution, my misunderstanding was that chunk_size will send a list of data to process to my_func. Internally my_func iterates over the passed in data. But since my assumption wrong and each line gets send separately to my_func regardless of chunk_size, the internal iteration was iterating over the string, not as I expected list of strings. | imap chunking is an implementation detail, your function will be called with a single element of the iterable regardless of the chunk size. for line in file: result = my_func(line) Becomes for result in pool.imap(my_func, file, some_chunk_size): pass It is syntactically the same as builtins.map without the chunk size, but people don't really use it either. The chunking size controls how many elements are packed in a list before being pickled and sent over a pipe, the chunking process is transparent to your code, your function wil always be called with a single line regardless of the chunk size, you should play around with the chunk size and see if it improves performance in your case. Note that imap interally has a for loop, so it should work exactly like a simple for loop on the iterated object. | 3 | 2 |
78,751,680 | 2024-7-15 | https://stackoverflow.com/questions/78751680/nested-named-regex-groups-how-to-maintain-the-nested-structure-in-match-result | A small example: import re pattern = re.compile( r"(?P<hello>(?P<nested>hello)?(?P<other>cat)?)?(?P<world>world)?" ) result = pattern.match("hellocat world") print(result.groups()) print(result.groupdict() if result else "NO RESULT") produces: ('hellocat', 'hello', 'cat', None) {'hello': 'hellocat', 'nested': 'hello', 'other': 'cat', 'world': None} The regex match result returns a flat dictionary, rather than a dictionary of dictionaries that would correspond with the nested structure of the regex pattern. By this I mean: {'hello': {'nested': 'hello', 'other': 'cat'}, 'world': None} Is there a "built-in" (i.e. something involving details of what is provided by the re module) way to access the match result that does preserve the nesting structure of the regex? By this I mean that the following are not solutions in the context of this question: parsing the regex pattern myself to determine nested groups using a data structure that represents a regex pattern as a nested structure, and then implementing logic for that data structure to match against a string as if it were a "flat" regex pattern. | Since you don't mind using implementation details of the re module (which are subject to undocumented future changes), what you want is then possible by overriding the hooks that are called when the parser enters and leaves a capture group. Reading the source code of the Python implementation of re's parser we can find that it calls the opengroup method of the re._parser.State object when entering a capture group, and calls the closegroup method when leaving. We can therefore patch State with an additional attribute of a stack of dicts representing the sub-tree of the current group, override opengroup and closegroup to build the sub-trees when entering and leaving groups, and provide a method nestedgroupdict to fill the leaves (which have empty sub-trees) with actual matching values from the output of the groupdict method of a match: import re class State(re._parser.State): def __init__(self): super().__init__() self.treestack = [{}] def opengroup(self, name=None): self.treestack[-1][name] = subtree = {} self.treestack.append(subtree) return super().opengroup(name) def closegroup(self, gid, p): self.treestack.pop() super().closegroup(gid, p) def nestedgroupdict(self, groupdict, _tree=None): if _tree is None: _tree, = self.treestack result = {} for name, subtree in _tree.items(): if subtree: result[name] = self.nestedgroupdict(groupdict, subtree) else: result[name] = groupdict[name] return result re._parser.State = State so that the parser will produce a state with treestack containing a structure of the named groups: parsed = re._parser.parse( r"(?P<hello>(?P<nested>hello)?(?P<other>cat)?)?(?P<world>world)?" ) print(parsed.state.treestack) which outputs: [{'hello': {'nested': {}, 'other': {}}, 'world': {}}] We can then compile the parsed pattern to match it against a string and call groupdict to get the group-value mapping to feed into our nestedgroupdict method of the state to produce the desired nested structure: groupdict = re._compiler.compile(parsed).match("hellocat world").groupdict() print(parsed.state.nestedgroupdict(groupdict)) which outputs: {'hello': {'nested': 'hello', 'other': 'cat'}, 'world': None} Demo here | 3 | 5 |
78,752,268 | 2024-7-15 | https://stackoverflow.com/questions/78752268/least-common-multiple-of-natural-numbers-up-to-a-limit-say-10000000 | I'm working on a small Python program for myself and I need an algorithm for fast multiplication of a huge array with prime powers (over 660 000 numbers, each is 7 digits). The result number is over 4 millions digits. Currently I'm using math.prod, which calculates it in ~10 minutes. But that's too slow, especially if I want to increase amount of numbers. I checked some algorithms for faster multiplications, for example the SchΓΆnhageβStrassen algorithm and ToomβCook multiplication, but I didn't understand how they work or how to implement them. I tried some versions that I've found on the internet, but they're not working too well and are even slower. I wonder if someone knows how to multiply these amounts of numbers faster, or could explain how to use some math to do this? | There are two keys to making this fast. First, using the fastest mult implementation you can get. For "sufficiently large" multiplicands, Python's Karatsuba mult is O(n^1.585). The decimal module's much fancier NTT mult is more like O(n log n). But fastest of all is to install the gmpy2 extension package, which wraps GNU's GMP library, whose chief goal is peak speed. That has essentially the same asymptotics as decimal mult, but with a smaller constant factor. Second, the advanced mult algorithms work best when multiplying two large ints of about the same size (number of bits). You can leave that to luck, or, as below, you can force it by using a priority queue and, at each step, multiplying the "two smallest" partial products remaining. from gmpy2 import mpz from heapq import heapreplace, heappop, heapify # Assuming your input ints are in `xs`. mpzs = list(map(mpz, xs)) heapify(mpzs) for _ in range(len(mpzs) - 1): heapreplace(mpzs, heappop(mpzs) * mpzs[0]) assert len(mpzs) == 1 # the result is mpzs[0] That's the code I'd use. Note that the cost of recursion (which this doesn't use) is trivial compared to the cost of huge-int arithmetic. Heap operations are more expensive than recursion, but still relatively cheap, and can waaaaay more than repay their cost if the input is in an order such that the "by luck" methods aren't lucky enough. | 7 | 8 |
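A hedged illustration of the "multiply the two smallest remaining values first" idea using plain Python ints only (no gmpy2), with made-up 7-digit inputs; absolute timings will vary by machine, but the balanced pairing is typically much faster than left-to-right accumulation at this size.

```python
import math
import random
import time
from heapq import heapify, heappop, heapreplace

xs = [random.randrange(10**6, 10**7) for _ in range(20_000)]

t = time.perf_counter()
p1 = math.prod(xs)                      # left-to-right: big * small at every step
print("math.prod:   ", time.perf_counter() - t)

t = time.perf_counter()
heap = list(xs)
heapify(heap)
for _ in range(len(heap) - 1):
    heapreplace(heap, heappop(heap) * heap[0])   # always multiply the two smallest
p2 = heap[0]
print("heap pairing:", time.perf_counter() - t)

assert p1 == p2
```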
78,747,712 | 2024-7-15 | https://stackoverflow.com/questions/78747712/find-number-of-redundant-edges-in-components-of-a-graph | I'm trying to solve problem 1319 on Leetcode, which is as follows: There are n computers numbered from 0 to n - 1 connected by ethernet cables connections forming a network where connections[i] = [ai, bi] represents a connection between computers ai and bi. Any computer can reach any other computer directly or indirectly through the network. You are given an initial computer network connections. You can extract certain cables between two directly connected computers, and place them between any pair of disconnected computers to make them directly connected. Return the minimum number of times you need to do this in order to make all the computers connected. If it is not possible, return -1. Thinking on this a little, I came up with the following non-working approach and associated code: First, convert the edge list into an adjacency list of connections. Go to the first computer and see how many computers are accessible from that one (using e.g DFS). Additionally, keep track of the number of connections that repeatedly try to access a visited node, indicating that there's a wire we can get rid of. This represents a connected component. Find the next non-visited node and repeat the same process. At the end, determine if the number of wires we counted is >= the number of connected components - 1 from typing import DefaultDict, List, Set from collections import defaultdict class Solution: def makeConnected(self, n: int, connections: List[List[int]]) -> int: def dfs( adj_list: DefaultDict[int, List[int]], computer: int, visited: Set[int] ) -> int: """Returns the number of removable wires from this connected component""" num_removable_wires = 0 stack = [computer] while len(stack) > 0: current = stack.pop() # Already been here, so can remove this wire if current in visited: num_removable_wires += 1 continue visited.add(current) if current in adj_list: for neighbor in adj_list[current]: stack.append(neighbor) return num_removable_wires adj_list = defaultdict(list) for connection in connections: adj_list[connection[0]].append(connection[1]) # adj_list[connection[1]].append(connection[0]) total_removable_wires = 0 num_components = 0 visited = set() for computer in adj_list.keys(): if computer in visited: continue num_components += 1 total_removable_wires += dfs(adj_list, computer, visited) # Add computers that are completely isolated num_components += n - len(visited) return ( num_components - 1 if total_removable_wires >= num_components - 1 else -1 ) if __name__ == "__main__": print(Solution().makeConnected(6, [[0, 1], [0, 2], [0, 3], [1, 2]])) print( Solution().makeConnected( 11, [ [1, 4], [0, 3], [1, 3], [3, 7], [2, 7], [0, 1], [2, 4], [3, 6], [5, 6], [6, 7], [4, 7], [0, 7], [5, 7], ], ) ) For the first test case, this code works as expected. For the second, I realized that for certain vertices, e.g 1, the only vertices accessible, directly or indirectly, are 4, 3, 7, and 6 since the edges are only placed in one direction in the adjacency list. The code then incorrectly determines that vertex 0 is part of a new component. To fix this, I tried to adjust the following, uncommenting the second line of code when constructing the adjacency list, to add both sides of the same edge: for connection in connections: adj_list[connection[0]].append(connection[1]) adj_list[connection[1]].append(connection[0]) However, while this fixes the second test case, this now breaks the first. 
Now, when the code reaches e.g 3 from 0 and sees that 0 is a neighbor already visited, it incorrectly states that edge is redundant even though it was just traversed on the way to 3. How can I correctly count the number of redundant edges (or removable wires) in the context of this problem? Note that I realize there are better approaches in the Leetcode solutions tab that I could implement, but I was wondering what I am doing wrong for my solution attempt and whether it is possible to correct this existing approach. | When you do: adj_list = defaultdict(list) for connection in connections: adj_list[connection[0]].append(connection[1]) # adj_list[connection[1]].append(connection[0]) You will only create dictionary keys for one-directional edges. You need to add the edges in the reverse direction (that you have commented out) and then, when you are performing the DFS, you need to count half-edges (and adjust for the tree edges that are not removable): from typing import DefaultDict, List, Set from collections import defaultdict def dfs( adj_list: DefaultDict[int, List[int]], computer: int, visited: Set[int] ) -> int: """Returns the number of removable wires from this connected component""" stack = [computer] # The root vertex of the DFS is only visited once so add an extra # half-edge to adjust. removable_half_edges = 1 while len(stack) > 0: current = stack.pop() # count each half-edge as it is visited. removable_half_edges += 1 if current not in visited: # Tree edges are not removable so do not count both half-edges. removable_half_edges -= 2 visited.add(current) stack.extend(adj_list[current]) # return the number of bi-directional edges return removable_half_edges // 2 class Solution: def makeConnected(self, n: int, connections: List[List[int]]) -> int: adj_list = defaultdict(list) for connection in connections: adj_list[connection[0]].append(connection[1]) adj_list[connection[1]].append(connection[0]) total_removable_wires = 0 num_components = 0 visited = set() for computer in adj_list.keys(): if computer in visited: continue num_components += 1 total_removable_wires += dfs(adj_list, computer, visited) # Add computers that are completely isolated num_components += n - len(visited) return ( num_components - 1 if total_removable_wires >= num_components - 1 else -1 ) if __name__ == "__main__": tests = ( (4, [[0,1],[0,2],[1,2]], 1), (6, [[0,1],[0,2],[0,3],[1,2],[1,3]], 2), (10, [[0,1],[0,2],[0,3],[0,4],[1,2],[1,3],[1,4],[2,3],[2,4],[3,4]], 5), (10, [[0,1],[0,2],[0,3],[0,4],[1,2],[1,3],[1,4],[2,3],[2,4]], 5), (10, [[0,1],[0,2],[0,3],[0,4],[1,2],[1,3],[1,4],[2,3]], -1), (6, [[0,1],[0,2],[0,3],[1,2]], -1), (11, [[1, 4], [0, 3], [1, 3], [3, 7], [2, 7], [0, 1], [2, 4], [3, 6], [5, 6], [6, 7], [4, 7], [0, 7], [5, 7]], 3), ) solution = Solution() for n, connections, expected in tests: output = solution.makeConnected(n, connections) if output == expected: print("PASS") else: print(f"FAIL: {n}, {connections} -> {output} expected {expected}.") You could also simplify the data-structures by creating a Computer class to store whether the computer has been visited and the adjacency list and then iterating over those to find the connected components: from __future__ import annotations class Computer: connections: list[Computer] dfs_visited: bool def __init__(self) -> None: self.connections = [] self.dfs_visited = False class Solution: def makeConnected( self, n: int, connections: list[list[int]], ) -> int: if len(connections) < n - 1: return -1 removable_cables: int = 0 connected_networks: int = 0 
computers = [Computer() for _ in range(n)] for f, t in connections: computers[f].connections.append(computers[t]) computers[t].connections.append(computers[f]) for computer in computers: if computer.dfs_visited: continue connected_networks += 1 removable_half_edges = 1 stack: list[Computer] = [computer] while stack: comp = stack.pop() removable_half_edges += 1 if not comp.dfs_visited: removable_half_edges -= 2 comp.dfs_visited = True stack.extend(comp.connections) removable_cables += removable_half_edges / 2 return ( (connected_networks - 1) if removable_cables >= (connected_networks - 1) else -1 ) if __name__ == "__main__": tests = ( (4, [[0,1],[0,2],[1,2]], 1), (6, [[0,1],[0,2],[0,3],[1,2],[1,3]], 2), (10, [[0,1],[0,2],[0,3],[0,4],[1,2],[1,3],[1,4],[2,3],[2,4],[3,4]], 5), (10, [[0,1],[0,2],[0,3],[0,4],[1,2],[1,3],[1,4],[2,3],[2,4]], 5), (10, [[0,1],[0,2],[0,3],[0,4],[1,2],[1,3],[1,4],[2,3]], -1), (6, [[0,1],[0,2],[0,3],[1,2]], -1), (11, [[1, 4], [0, 3], [1, 3], [3, 7], [2, 7], [0, 1], [2, 4], [3, 6], [5, 6], [6, 7], [4, 7], [0, 7], [5, 7]], 3), ) solution = Solution() for n, connections, expected in tests: output = solution.makeConnected(n, connections) if output == expected: print("PASS") else: print(f"FAIL: {n}, {connections} -> {output} expected {expected}.") | 4 | 1 |
78,752,121 | 2024-7-15 | https://stackoverflow.com/questions/78752121/how-does-numpy-polyfit-return-a-slope-and-y-intercept-when-its-documentation-say | I have seen examples where slope, yintercept = numpy.polyfit(x,y,1) is used to return slope and y-intercept, but the documentation does not mention "slope" or "intercept" anywhere. The documentation also states three different return options: p: ndarray, shape (deg + 1,) or (deg + 1, K) Polynomial coefficients, highest power first. If y was 2-D, the coefficients for k-th data set are in p[:,k], or residuals, rank, singular_values, rcond These values are only returned if full == True residuals: sum of squared residuals of the least squares fit rank: the effective rank of the scaled Vandermonde singular_values: singular values of the scaled Vandermonde rcond: value of rcond, or v: ndarray, shape (deg + 1, deg + 1) or (deg + 1, deg + 1, K) Present only if full == False and cov == True I have also run code with exactly the previous line and the slope and yintercept variables are filled with a single value each, which does not fit any of the documented returns. Which documentation am I supposed to use, or what am I missing? | slope, yintercept = numpy.polyfit(x, y, 1) Both full and cov parameters are False, so it's the first option, "Polynomial coefficients, highest power first". deg is 1, so "p: ndarray, shape (deg + 1,)" is an array of shape (2,). With tuple assignment, slope is p[0], the xdeg-0 = x1 coefficient, yintercept is p[1], the xdeg-1 = x0 coefficient. | 2 | 1 |
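A short self-contained check of that reading (data values are made up): with deg=1 the returned array has shape (2,), highest power first, so tuple assignment yields the slope and then the intercept.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.5 * x + 1.0                       # a line with slope 2.5 and intercept 1.0

p = np.polyfit(x, y, 1)                 # ndarray of shape (2,), highest power first
print(p)                                # approximately [2.5, 1.0]

slope, intercept = np.polyfit(x, y, 1)  # tuple assignment: slope = p[0], intercept = p[1]
print(slope, intercept)
```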
78,751,960 | 2024-7-15 | https://stackoverflow.com/questions/78751960/how-to-save-generator-object-to-png-jpg-or-other-image-files | I'm using timeseries-generator from GitHub to get synthetic line graphs. That works good. But now I want to save the plots into a file as separate png images. This is my code so far: i=range(0,4) #first number inclusive, second exclusive for element in i: outpath = "PATH" c=random.random() o=random.random() s=random.random() lt = LinearTrend(coef=c, offset=o, col_name="plot no. {0}".format(element)) g = Generator(factors={lt}, features=None, date_range=pd.date_range(start="01-01-2020", end="12-31-2020")) wn = WhiteNoise(stdev_factor=s) g.update_factor(wn) g.generate() g.plot() g.savefig(path.join(outpath,"graph_{0}.png".format(element))) The last line get me following error: AttributeError: 'Generator' object has no attribute 'savefig' The type of object g is: print(type(g)) <class 'timeseries_generator.generator.Generator'> I tried following suggestion from an other post: Save image from generator object: with open("output_file.png", "wb") as fp: for chunk in thumb_local: fp.write(chunk) But that doesn't work for me. Maybe I'm using it wrong, with: with open("graph_{0}.png".format(element), "wb") as fp: fp.write(g) I get following error: TypeError: a bytes-like object is required, not 'Generator' Is there a way to change the generator object into something else to save the images or is there a way to save the generator objects as png? Or can I use the suggestion above, but do it wrong? Thank you for your help in advance! | g.plot() simply calls the plot() method of the underlying Pandas dataframe. You should get the dataframe and export it on your own: fig = g.ts.plot(kind='line', figsize=(20, 16)).get_figure() fig.savefig(path.join(outpath,"graph_{0}.png".format(element))) | 2 | 1 |
78,750,662 | 2024-7-15 | https://stackoverflow.com/questions/78750662/np-where-on-a-numpy-mxn-matrix-but-return-m-rows-with-indices-where-condition-ex | I am trying to use np.where on a MxN numpy matrix, where I want to return the same number of M rows but the indices in each row where the element exists. Is this possible to do so? For example: a = [[1 ,2, 2] [2, 3, 5]] np.where(a == 2) I would like this to return: [[1, 2], [0]] | One option is to post-process the output of where, then split: a = np.array([[1, 2, 2], [2, 3, 5]]) i, j = np.where(a == 2) out = np.split(j, np.diff(i).nonzero()[0]+1) Alternatively, using a list comprehension: out = [np.where(x==2)[0] for x in a] Output: [array([1, 2]), array([0])] using this output to average another array a = np.array([[1, 2, 2], [2, 3, 5]]) b = np.array([[10, 20, 30], [40, 50, 60]]) m = a == 2 i, j = np.where(m) # (array([0, 0, 1]), array([1, 2, 0])) idx = np.r_[0, np.diff(i).nonzero()[0]+1] # array([0, 2]) out = np.add.reduceat(b[m], idx)/np.add.reduceat(m[m], idx) # array([50, 40])/array([2, 1]) Output: array([25., 40.]) handling NaNs: a = np.array([[1, 2, 2], [2, 3, 5]]) b = np.array([[10, 20, np.nan], [40, 50, 60]]) m = a == 2 i, j = np.where(m) # (array([0, 0, 1]), array([1, 2, 0])) idx = np.r_[0, np.diff(i).nonzero()[0]+1] # array([0, 2]) b_m = b[m] # array([20., nan, 40.]) nans = np.isnan(b_m) # array([False, True, False]) out = np.add.reduceat(np.where(nans, 0, b_m), idx)/np.add.reduceat(~nans, idx) # array([20., 40.])/array([1, 1]) Output: array([20., 40.]) | 3 | 1 |
78,747,351 | 2024-7-14 | https://stackoverflow.com/questions/78747351/sympy-how-to-factor-expressions-inside-parentheses-only | I want to simplify the following expression using sympy: -8*F_p + 8*Omega*u + 6*alpha*u*(u**2 + v**2) - 5*beta*u*(u**4 + 2*u**2*v**2 + v**4) - 8*gamma*omega*v So it ends up like -8*F_p + 8*Omega*u + 6*alpha*u*r**2 - 5*beta*u*r**4 - 8*gamma*omega*v with r**2=u**2+v**2 It is not working though, but when I use factor(u**4 + 2*u**2*v**2 + v**4).subs(u**2+v**2, r**2) The output I get is r**4 as expected. Unfortunately, I don't get this simplification when I try this with the first expression. I wonder if there is a way to tell sympy to factor only the expressions inside the innermost parentheses. Here is the full code from sympy import symbols u, v, r = symbols('u v r') # parameters alpha = symbols('alpha', positive=True) beta = symbols('beta', positive=True) F_p = symbols('F_p', positive=True) gamma = symbols('gamma', positive=True) omega = symbols('omega', positive=True) Omega = symbols('Omega', positive=True) f=-8*F_p + 8*Omega*u + 6*alpha*u**3 + 6*alpha*u*v*v - 5*beta*u**5 - 10*beta*u**3*v**2 - 5*beta*u*v**4 - 8*gamma*omega*v print(f) f = collect(f, [alpha, beta]) print(simplify(f)) # Substitute u^2 + v^2 = r^2 subs_dict = {u**2 + v**2: r**2} # Simplify the expression simplified_expr = simplify(f.subs(subs_dict)) print(simplified_expr) print(factor(u**4 + 2*u**2*v**2 + v**4).subs(u**2+v**2, r**2)) | Sometimes the easiest thing to do is to re-arrange the expression you are trying to replace: >>> f.subs(v**2, r**2 - u**2).expand() -8*F_p + 8*Omega*u + 6*alpha*r**2*u - 5*beta*r**4*u - 8*gamma*omega*v For an older version of SymPy, that doesn't replace the v**2 in the way you want, you might have to fall back to using v, instead and then cleaning up residual v-expressions. >>> s=sqrt(var('r',positive=True)**2-u**2) >>> f.subs(v,s).expand().subs(s,v) -8*F_p + 8*Omega*u + 6*alpha*r**2*u - 5*beta*r**4*u - 8*gamma*omega*v | 2 | 2 |
78,742,550 | 2024-7-12 | https://stackoverflow.com/questions/78742550/how-to-use-glob-pattern-to-read-many-csvs-into-one-polars-data-frame-with-pydriv | If I have 2 .csv files stored locally data/file_1.csv and data/file_2.csv which both have the same schema, it is easy to polars-read both of them in to 1 concatenated data frame like so: pl.read_csv('data/file_*.csv') But if I am storing these same 2 files within Google Drive (not a GCS bucket), and I am using GDriveFileSystem from pydrive2.fs as my fsspec file system, I cannot find a way to make use of the glob pattern and have to read them in separately, e.g. fs = GDriveFileSystem(ROOT_FOLDER_ID, client_id = CLIENT_ID, client_secret = CLIENT_SECRET) dfs = [] for i in range(1, 3): with fs.open(f'{ROOT_FOLDER_ID}/data/file_{i}.csv', 'rb') as f: dfs += pl.read_csv(f) df = pl.concat(dfs) Not only does this mean I need to know and specify the amount of files and their exact file paths in advance, but the code also just feels a lot less cleaner than before. Is there any way I can still read these multiple files with a glob path but using the fsspec file system? | Although pydrive2 has an fsspec interface, it doesn't seem to declare a protocol or register itself with fsspec, so calls like fsspec.open("gdrive://...", ) are not automatically recognised. This is the intended usage, so I suggest an issue should be raised with them to make sure this gets implemented. You could call fsspec.register_implementation manually to assign a protocol to the PyDrive2 fsspec class. The older, less complete and unreleased gdrivefs does support this usage, because it is listed in fsspec.registry.known_implementations. | 5 | 0 |
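A hedged, untested sketch of the two workarounds described above (ROOT_FOLDER_ID, CLIENT_ID and CLIENT_SECRET are placeholders from the question): registering the PyDrive2 class under a protocol name yourself, and side-stepping the issue by globbing on the filesystem object so the file count never needs to be hard-coded.

```python
import fsspec
import polars as pl
from pydrive2.fs import GDriveFileSystem

# Manual registration so fsspec can resolve a "gdrive://..." style protocol.
fsspec.register_implementation("gdrive", GDriveFileSystem, clobber=True)

# Globbing on the filesystem object avoids listing the files by hand.
fs = GDriveFileSystem(ROOT_FOLDER_ID, client_id=CLIENT_ID, client_secret=CLIENT_SECRET)
paths = fs.glob(f"{ROOT_FOLDER_ID}/data/file_*.csv")
df = pl.concat(pl.read_csv(fs.open(p, "rb")) for p in paths)
```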
78,749,458 | 2024-7-15 | https://stackoverflow.com/questions/78749458/how-to-create-a-python-type-alias-for-a-parametrized-type | jaxtyping provides type annotations that use a str as parameter (as opposed to a type), e.g.: Float[Array, "dim1 dim2"] Let's say I would like to create a type alias which combines the Float and Array part. This means I would like to be able to write MyOwnType["dim1 dim2"] instead. To my understanding I cannot use TypeAlias/TypeVar, as the generic parameter (here "dim1 dim2") is not a type but actually an instance of a type. Is there a concise Pythonic way to achieve this? EDIT: I tried the following and it does not work: class _Singleton: def __getitem__(self, shape: str) -> Float: return Float[Array, shape] MyOwnType = _Singleton() Using MyOwnType["dim1 dim2"]as function parameter annotation gives the mypy complaint Variable "MyOwnType" is not valid as a type Solution: Based on the answer of @chepner this was the final solution: class MyOwnType(Generic[Shape]): def __class_getitem__(cls, shape: str) -> Float: return Float[Array, shape] | You need to override/define __class_getitem__, not __getitem__ (which applies to instances of _Singleton, not _Singleton itself). class _Singleton: def __class_getitem__(self, shape: str) -> Float: return Float[Array, shape] | 4 | 2 |
78,748,429 | 2024-7-15 | https://stackoverflow.com/questions/78748429/apply-permutation-array-on-multiple-axes-in-numpy | Let's say I have an array of permutations perm which could look like: perm = np.array([[0, 1, 2], [1, 2, 0], [0, 2, 1], [2, 1, 0]]) If I want to apply it to one axis, I can write something like: v = np.arange(9).reshape(3, 3) print(v[perm]) Output: array([[[0, 1, 2], [3, 4, 5], [6, 7, 8]], [[3, 4, 5], [6, 7, 8], [0, 1, 2]], [[0, 1, 2], [6, 7, 8], [3, 4, 5]], [[6, 7, 8], [3, 4, 5], [0, 1, 2]]]) Now I would like to apply it to two axes at the same time. I figured out that I can do it via: np.array([v[tuple(np.meshgrid(p, p, indexing="ij"))] for p in perm]) But I find it quite inefficient, because it has to create a mesh grid, and it also requires a for loop. I made a small array in this example but in reality I have a lot larger arrays with a lot of permutations, so I would really love to have something that's as quick and simple as the one-axis version. | How about: p1 = perm[:, :, np.newaxis] p2 = perm[:, np.newaxis, :] v[p1, p2] The zeroth axis of p1 and p2 is just the "batch" dimension of perm, which allows you to do many permutations in one operation. The other dimension of perm, which corresponds with the indices, is aligned along the first axis in p1 and the second in p2. Because the axes are orthogonal, the arrays get broadcasted, basically like the arrays you got using meshgrid - but these still have the batch dimension. That's the best I can do from my cell phone : ) I can try to clarify later if needed, but the key idea is broadcasting. Comparison: import numpy as np perm = np.array([[0, 1, 2], [1, 2, 0], [0, 2, 1], [2, 1, 0]]) v = np.arange(9).reshape(3, 3) ref = np.array([v[tuple(np.meshgrid(p, p, indexing="ij"))] for p in perm]) p1 = perm[:, :, np.newaxis] p2 = perm[:, np.newaxis, :] res = v[p1, p2] np.testing.assert_equal(res, ref) # passes %timeit np.array([v[tuple(np.meshgrid(p, p, indexing="ij"))] for p in perm]) # 107 Β΅s Β± 20.6 Β΅s per loop %timeit v[perm[:, :, np.newaxis], perm[:, np.newaxis, :]] # 3.73 Β΅s Β± 1.07 Β΅s per loop A simpler (without batch dimension) example of broadcasting indices: import numpy as np i = np.arange(3) ref = np.meshgrid(i, i, indexing="ij") res = np.broadcast_arrays(i[:, np.newaxis], i[np.newaxis, :]) np.testing.assert_equal(res, ref) # passes In the solution code at the top, the broadcasting is implicit. We don't need to call broadcast_arrays because it happens automatically during the indexing. | 6 | 4 |
78,747,409 | 2024-7-14 | https://stackoverflow.com/questions/78747409/suppress-stdout-message-from-c-c-library | I am attempting to suppress a message that is printed to stdout by a library implemented in C. My specific usecase is OpenCV, so I will use it for the MCVE below. The estimateChessboardSharpness function has a printout when the grid size is too small (which happens here). I made a PR to fix it, but in the meantime, I'd like to suppress the message. For example: import cv2 import numpy as np img = np.zeros((512, 640), dtype='uint8') corners = [] for i in range(10): for j in range(8): corner = (30 + 3 * j, 70 + 3 * i) if i and j: corners.append(corner) if (i % 2) ^ (j % 2): img[corner[0]:corner[0] + 3, corner[1]:corner[1] + 3] = 255 corners = np.array(corners) >>> cv2.estimateChessboardSharpness(img, (9, 7), corners) calcEdgeSharpness: checkerboard too small for calculation. ((9999.0, 9999.0, 9999.0, 9999.0), None) The line appears to be a simple std::cout << ..., so I have tried all of the following: from contextlib import redirect_stdout from os imoprt devnull import sys with redirect_stdout(None): cv2.estimateChessboardSharpness(img, (9, 7), corners) with open(devnull, "w") as null, redirect_stdout(null): cv2.estimateChessboardSharpness(img, (9, 7), corners) sys.stdout = open(devnull, "w") cv2.estimateChessboardSharpness(img, (9, 7), corners) I've even tried redirect_stderr instead of redirect_stdout just in case. I've also tried setting OPENCV_LOG_LEVEL=SILENT in bash and os.environ["OPENCV_LOG_LEVEL"] = "SILENT" in python before importing cv2, not that I expected stdout to be conflated with logging in this case. In all cases, the message prints. How do I make it stop? | Assuming a UNIX-like platform, (you'll get an OSError if not) You can save the underlying fd with backup = os.dup(sys.stdout.fileno()) point it to a different file (eg. different) with os.dup2(different.fileno(), sys.stdout.fileno()) and resume normal service with os.dup2(backup, sys.stdout.fileno()) Do note that this won't behave at all nicely if the file you redirect to is also being used for buffered output (ie, you have writes to different mixed with C++ library writes to std::cout which is using different.fileno()...) | 2 | 2 |
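A fuller sketch of the fd-swapping recipe above, wrapped in a context manager (POSIX-only, as the answer notes); unlike contextlib.redirect_stdout, this redirects file descriptor 1 itself, which is what C++'s std::cout ultimately writes to.

```python
import os
import sys
from contextlib import contextmanager

@contextmanager
def suppress_fd1():
    sys.stdout.flush()                         # flush Python-level buffering first
    backup = os.dup(sys.stdout.fileno())       # save the real stdout fd
    devnull = os.open(os.devnull, os.O_WRONLY)
    try:
        os.dup2(devnull, sys.stdout.fileno())  # point fd 1 at /dev/null
        yield
    finally:
        os.dup2(backup, sys.stdout.fileno())   # restore fd 1
        os.close(devnull)
        os.close(backup)

with suppress_fd1():
    print("hidden")        # C/C++ library printouts inside this block are hidden too
print("visible again")
```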
78,745,907 | 2024-7-14 | https://stackoverflow.com/questions/78745907/is-there-a-way-to-expose-a-plot-generated-using-a-pandas-dataframe-as-a-jupyter | Current Workflow Right now, me, as an ML engineer, am using a jupyter notebook to plot some pandas dataframes. Basically, inside the jupyter notebook, I am setting some hard-coded configurations like k_1:float=2.3 date_to_analyse:str='2024-06-17' # 17th June region_to_analyse:str='montana' ... Based on the hardcoded parameters like above, the notebook (along with some helper functions) runs a query to my company's data warehouse in Google Big Query, crunches a few numbers, applies some business logic and generates a pandas dataframe for transaction numbers minute by minute throughout the day. The plot of the pandas dataframe is what's important to the business user, which I show to them as part of our model evaluation metric. The logic is straightforward enough, and captured in the figure below. Objective I am tasked with making this into a self-service application directly usable by the business team, where a user can input the above values via a simple UI the plot appears on the screen. I don't have a dedicated frontend guy, neither do I have frontend experience myself. So was just wondering whether there is a simple enough solution (best if serverless, and part of the GCP eco-system) that can accomplish this? I know about Google colab notebooks (basically, jupyter notebook, right?), but have not tried them. Can they provide a simple way to expose the notebook's functionality (generate the plots from user provided configs) | If you can expose the notebook to users (load credentials via env vars etc.) you can use IPython widgets. Google collab has forms and allows you to hide the code, if you think filling out a form an pressing the play button left of the field is managable for your userbase that is an option as well. While not totally serverless gradio, huggingfaces ui library makes it simple enough to create simple webinterfaces. You can get an internal and a public address for it. You could run the gradio code in google collab as well. | 2 | 1 |
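A minimal gradio sketch of what such a self-service page could look like (the make_plot body below is a made-up stand-in for the real BigQuery query and business logic):

```python
import gradio as gr
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

def make_plot(k_1: float, date_to_analyse: str, region_to_analyse: str):
    # stand-in for: query BigQuery, crunch numbers, apply business logic
    df = pd.DataFrame({"minute": np.arange(60),
                       "transactions": np.random.poisson(max(k_1, 0.1) * 10, 60)})
    fig, ax = plt.subplots()
    df.plot(x="minute", y="transactions", ax=ax,
            title=f"{region_to_analyse} {date_to_analyse}")
    return fig

demo = gr.Interface(
    fn=make_plot,
    inputs=[gr.Number(value=2.3), gr.Textbox(value="2024-06-17"), gr.Textbox(value="montana")],
    outputs=gr.Plot(),
)
demo.launch()   # demo.launch(share=True) gives a temporary public URL
```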
78,744,848 | 2024-7-13 | https://stackoverflow.com/questions/78744848/impact-on-performance-when-using-sql-like-padding-in-polars | Description: I have been using SQL-like padding in my Polars DataFrame queries for better readability and easier commenting of conditions. This approach involves using True (analogous to 1=1 in SQL) to pad the conditional logic. SQL Example In SQL, padding with 1=1 makes it easy to add or remove conditions: SELECT * FROM employees WHERE 1=1 AND department = 'Sales' AND salary > 50000 AND hire_date > '2020-01-01'; SELECT * FROM employees WHERE 1=1 AND department = 'Sales' -- AND salary > 50000 AND hire_date > '2020-01-01'; (>>>> in my opinion <<<<) This makes adding, removing, and replacing operators very easy. Also it simplifies the git diff because adding and removing conditions appear as a single diff. Polars Example Here's the Polars equivalent that I have been using: # Example data data = { "name": ["Alice", "Bob", "Charlie", "David"], "department": ["Sales", "HR", "Sales", "IT"], "salary": [60000, 45000, 70000, 50000], "hire_date": ["2021-06-01", "2019-03-15", "2020-08-20", "2018-11-05"] } # Create a DataFrame df = pl.DataFrame(data) # Filter with padding filtered_df = df.filter( ( True & pl.col("department").eq("Sales") # & pl.col("salary").gt(50000) & pl.col("hire_date").gt("2020-01-01") & True ) ) print(filtered_df) Concern I am concerned about the potential performance impact of this approach in Polars. Does padding the filter conditions with True significantly affect the performance of query execution in Polars? Also is there a better way to do this ? Thank you for your assistance. | Personally for SQL i switched to βand at the end of the lineβ cause i think itβs much more readable like that. It does make commenting out the last line trickier, but i just donβt like 1=1 dummy condition. Luckily, in modern dataframe library like polars you donβt need to do that anymore. Just pass multiple filter condutions instead of combined one. filtered_df = df.filter( pl.col("department").eq("Sales"), # pl.col("salary").gt(50000), pl.col("hire_date").gt("2020-01-01"), ) | 4 | 3 |
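The same point as a runnable example with the question's data: each condition is a separate argument, so dropping one (salary below) is a one-line comment with no dummy True padding and a clean diff.

```python
import polars as pl

df = pl.DataFrame({
    "name": ["Alice", "Bob", "Charlie", "David"],
    "department": ["Sales", "HR", "Sales", "IT"],
    "salary": [60000, 45000, 70000, 50000],
    "hire_date": ["2021-06-01", "2019-03-15", "2020-08-20", "2018-11-05"],
})

filtered_df = df.filter(
    pl.col("department").eq("Sales"),
    # pl.col("salary").gt(50000),
    pl.col("hire_date").gt("2020-01-01"),
)
print(filtered_df)   # Alice and Charlie
```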
78,744,860 | 2024-7-13 | https://stackoverflow.com/questions/78744860/using-pandas-concat-along-axis-1-returns-a-concatenation-along-axis-0 | I am trying to horizontally concatenate a pair of data frames with identical indices, but the result is always a vertical concatenation with NaN values inserted into every column. dct_l = {'1':'a', '2':'b', '3':'c', '4':'d'} df_l = pd.DataFrame.from_dict(dct_l, orient='index', columns=['Key']) dummy = np.zeros((4,3)) index = np.arange(1,5) columns = ['POW', 'KLA','CSE'] df_e = pd.DataFrame(dummy, index, columns) print(df_l) Key 1 a 2 b 3 c 4 d print(df_e) POW KLA CSE 1 0.0 0.0 0.0 2 0.0 0.0 0.0 3 0.0 0.0 0.0 4 0.0 0.0 0.0 pd.concat([df_l, df_e], axis=1) Actual Result Key POW KLA CSE 1 a NaN NaN NaN 2 b NaN NaN NaN 3 c NaN NaN NaN 4 d NaN NaN NaN 1 NaN 0.0 0.0 0.0 2 NaN 0.0 0.0 0.0 3 NaN 0.0 0.0 0.0 4 NaN 0.0 0.0 0.0 Expected Result Key POW KLA CSE 1 a 0.0 0.0 0.0 2 b 0.0 0.0 0.0 3 c 0.0 0.0 0.0 4 d 0.0 0.0 0.0 What is happening here? | You have different dtypes for your two indexes: df_e.index Index([1, 2, 3, 4], dtype='int64') df_l.index Index(['1', '2', '3', '4'], dtype='object') which will break the alignment (1 != '1'). Make sure they are identical. For example: pd.concat([df_l.rename(int), df_e], axis=1) Key POW KLA CSE 1 a 0.0 0.0 0.0 2 b 0.0 0.0 0.0 3 c 0.0 0.0 0.0 4 d 0.0 0.0 0.0 | 3 | 2 |
78,744,571 | 2024-7-13 | https://stackoverflow.com/questions/78744571/output-repeating-and-non-repating-digits-using-regex-from-s-1222311 | I am finding it difficult to write regex expression. Need your help in getting expected output. import re pattern1=r'(\d)\1{1,}' s = '1222311' c=re.finditer(pattern,s) for i in c: print(i.__getitem__(0)) I managed to get repeating digit but not able to get non repeating digit. How to frame regex to get non repeating digits and repeating digits. s = '1222311' Expected output Repeating digit : 222 , 11 Non Repeating digit : 1,3 | Changing your pattern to r'(\d)(\1*)' will select all digits, and group together each set of consecutive digits. Then with your list of matches, simply sort them based on length: import re s = '1222311' pattern = r'(\d)\1*' repeating_digits = [] non_repeating_digits = [] # finditer produces the following matches `['1', '222', '3', '11']` for match in re.finditer(pattern, s): digits = match.group(0) if len(digits) > 1: repeating_digits.append(digits) else: non_repeating_digits.append(digits) print(f"Repeating digit : {' , '.join(repeating_digits)}") print(f"Non Repeating digit : {' , '.join(non_repeating_digits)}") Pattern Explanation (\d) matches any single digit (0 - 9), and is the first capture group. \1 is a back reference to the first capture group, (\d). It will match whatever was matched by the first capture group, which in this case is a single digit. Adding the asterisk to the back reference \1* will match the digit from the first capture group when repeated 1 or more times. | 2 | 1 |
78,743,607 | 2024-7-13 | https://stackoverflow.com/questions/78743607/socket-cant-find-af-unix-attribute | I'm using an Arch Linux machine and trying to run the following code from a Python file. import socket import sys if __name__ == "__main__": print(sys.platform) server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) and it keeps telling me that AttributeError: module 'socket' has no attribute 'AF_UNIX' Things tried Some posts claim this error occurs on Windows but obviously that isn't the case. sys.platform prints linux Code works on my Mac which was running Python3.9 Downgraded from Python3.12 to Python3.9 on the Linux machine and still no luck socket.AF_INET has the same issue running python -c "import socket; socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) returns the same error error when using system binary python, conda python and venv python running it in the interactive shell has no issue however. | The error is caused by an unlucky choice of variable name. The variable socket shadows the module of the same name. The first time the line is run, it works fine: import socket socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) # works fine, but now socket points to the object returned by the socket.socket() call socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) # error! Solution: use a different variable name, e.g. sock: sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) | 2 | 2 |
78,733,152 | 2024-7-11 | https://stackoverflow.com/questions/78733152/how-to-terminate-a-function-which-is-working-in-python | I want to make a code which works like this: after certain seconds I want to terminate the working function in the main function. I've tried the below code but when the code in the loop function isn't an async function(like await asyncio.sleep(1)) it doesn't work. import asyncio async def loop(): while True: pass async def main(): task = asyncio.Task(loop()) await asyncio.sleep(3) print('1223') task.cancel() print('x') asyncio.run(main()) | Use multiprocessing. When you want a function to be terminated after certain seconds, then all the function should start running simultaneously. My comments inside the code explains the rest. import multiprocessing as mp import time as t # function which needs to be terminated after certain seconds def func1(): # Do something def func2(): # Do something # A function to terminate func1 def stop(func): # terminate the func after 1 sec t.sleep(1) func.terminate() # Add execution guard in case "spawn"-multiprocessing mode is used if __name__ == "__main__": # Create two processes pro_1 = mp.Process(target=func1) pro_2 = mp.Process(target=func2) # third process being stop() and the args being pro_1 pro_3 = mp.Process(target=stop, args=(pro_1,)) # Start the processes pro_1.start() pro_2.start() pro_3.start() #As the pro_1 is terminated use process.join for only pro_2 and pro_3 pro_2.join() pro_3.join() | 3 | 2 |
78,742,818 | 2024-7-13 | https://stackoverflow.com/questions/78742818/regex-find-matches-only-outside-of-single-quotes | I currently have a regex that selects all occurrences of (, , , );: (\s\()|(,\s)|(\),)|(\);) however, I've been trying to figure out a way so that if anything is between single quotes 'like this, for example', it'll ignore any of the matches listed above. I tried many different solutions, however none of them seemed to work for me. Does anyone know of ways I could make this work? | Expressing a not-find is a tricky thing, as everything around a regex is designed to work in a positive / greedy way (find as much as possible, whenever somehow possible). The easiest and most likely fastest thing you could to is to remove the parts you want to exclude prior to applying your search, assuming quotes always appear in pairs: "'[^']*'" => "" and then apply your search to the remaining string. If the string needs to be modified "inplace", you could first search for these things and replace them with arbitrary, non-colliding placeholders that do not appear naturally, and replace them later again. (I quite often use something like ###Placeholder1### or something for that purpose. Easy to match and replace again, and almost guaranteed to not appear elsewhere naturally). Python example: import re text = "this is a , and this a ( whith a ) while 'this ( is in quotes,therefore excluded' unlike these: ( ) , but 'these () are again'. period." print(text) placeholders = [] def repl(m): contents = m.group(1) placeholders.append(contents) return "###Placeholder{0}###".format(len(placeholders) - 1) temp=re.sub('(\'[^\']*\')', repl, text) print(temp) temp=re.sub('([,\)\(])', "`\\1`", temp) print(temp) for k in range(len(placeholders)): temp = re.sub("###Placeholder{0}###".format(k), placeholders[k], temp) print(temp) (Note that the ### also ensures that Placeholder1 and Placeholder13 won't collide later on.) this is a , and this a ( whith a ) while 'this ( is in quotes,therefore excluded' unlike these: ( ) , but 'these () are again'. period. this is a , and this a ( whith a ) while ###Placeholder0### unlike these: ( ) , but ###Placeholder1###. period. this is a , and this a ( whith a ) while ###Placeholder0### unlike these: ( ) , but ###Placeholder1###. period. this is a , and this a ( whith a ) while 'this ( is in quotes,therefore excluded' unlike these: ( ) , but 'these () are again'. period. Or with the pythonic * operator, the final round of re-replacing could be omitted. (This however may cause issues if {0} and stuff appear naturally): import re text = "this is a , and this a ( whith a ) while 'this ( is in quotes,therefore excluded' unlike these: ( ) , but 'these () are again'. period." print(text) placeholders = [] def repl(m): placeholders.append(m.group(1)) return "{"+"{0}".format(len(placeholders) - 1) + "}" temp=re.sub('(\'[^\']+\')', repl, text) print(temp) temp=re.sub('([,\)\(])', "`\\1`", temp) print(temp) temp = temp.format(*placeholders) print(temp) | 2 | 1 |
78,742,511 | 2024-7-12 | https://stackoverflow.com/questions/78742511/cumulative-calculation-across-rows | Suppose I have a function: def f(prev, curr): return prev * 2 + curr (Just an example, could have been anything) And a Polars dataframe: | some_col | other_col | |----------|-----------| | 7 | ... | 3 | | 9 | | 2 | I would like to use f on my dataframe cumulatively, and the output would be: | some_col | other_col | |----------|-----------| | 7 | ... | 17 | | 43 | | 88 | I understand that, naturally, this type of calculation isn't going to be very efficient since it has to be done one row at a time (at least in the general case). I can obviously loop over rows. But is there an elegant, idiomatic way to do this in Polars? | It depends on the exact operation you need to perform. The example you've given can be expressed in terms of .cum_sum() with additional arithmetic: def plus_prev_times_2(col): x = 2 ** pl.int_range(pl.len() - 1).reverse() y = 2 ** pl.int_range(1, pl.len()) cs = (x * col.slice(1)).cum_sum() return cs / x + col.first() * y df = pl.DataFrame({"some_col": [7, 3, 9, 2]}) df.with_columns( pl.col.some_col.first() .append(pl.col.some_col.pipe(plus_prev_times_2)) .alias("plus_prev_times_2") ) shape: (4, 2) ββββββββββββ¬ββββββββββββββββββββ β some_col β plus_prev_times_2 β β --- β --- β β i64 β f64 β ββββββββββββͺββββββββββββββββββββ‘ β 7 β 7.0 β β 3 β 17.0 β β 9 β 43.0 β β 2 β 88.0 β ββββββββββββ΄ββββββββββββββββββββ Vertical fold/scan In general, I believe what you're asking for is called a "Vertical fold/scan" https://github.com/pola-rs/polars/issues/12165 Polars only offers a horizontal version, pl.cum_fold df = pl.DataFrame(dict(a=[7], b=[3], c=[9], d=[2])) df.with_columns( pl.cum_fold(acc=0, function=lambda acc, x: acc * 2 + x, exprs=pl.all()) ) shape: (1, 5) βββββββ¬ββββββ¬ββββββ¬ββββββ¬βββββββββββββββ β a β b β c β d β cum_fold β β --- β --- β --- β --- β --- β β i64 β i64 β i64 β i64 β struct[4] β βββββββͺββββββͺββββββͺββββββͺβββββββββββββββ‘ β 7 β 3 β 9 β 2 β {7,17,43,88} β βββββββ΄ββββββ΄ββββββ΄ββββββ΄βββββββββββββββ As discussed in the issue, a vertical equivalent would be hugely inefficient. For an efficient approach, you can write plugins in Rust: https://marcogorelli.github.io/polars-plugins-tutorial/cum_sum/ But using something like numba is probably easier to implement. There are several existing numba answers, e.g. Python (Polars): Vectorized operation of determining current solution with the use of previous variables | 4 | 3 |
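A hedged sketch of the numba route mentioned at the end, applied to the question's f(prev, curr) = prev * 2 + curr recurrence and hooked into polars via map_batches:

```python
import numba as nb
import numpy as np
import polars as pl

@nb.njit
def cum_custom(values):
    out = np.empty_like(values)
    acc = values[0]
    out[0] = acc
    for i in range(1, len(values)):
        acc = acc * 2 + values[i]   # the running f(prev, curr)
        out[i] = acc
    return out

df = pl.DataFrame({"some_col": [7, 3, 9, 2]})
result = df.with_columns(
    pl.col("some_col")
    .map_batches(lambda s: pl.Series(cum_custom(s.to_numpy().astype(np.float64))))
    .alias("cumulative")
)
print(result)   # cumulative: 7.0, 17.0, 43.0, 88.0
```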
78,734,383 | 2024-7-11 | https://stackoverflow.com/questions/78734383/asyncio-how-to-chain-coroutines | I have the following test code, where I am trying to chain together different coroutines. The idea is that I want to have one coroutine that downloads data, and as soon as data is downloaded I want to get the data into the second routine which then process the data. The code below works, whenever I skip the process_data step, but whenever I include the process_data step (trying to chain together coroutines) it fails. How can I fix it? import asyncio import time task_inputs = [0,1,2,3,4,5,4,3,4] async def download_dummy(url): await asyncio.sleep(url) data = url print(f'downloaded {url}') return data async def process_data(data): await asyncio.sleep(1) processed_data = data*2 print(f"processed {data}") return processed_data async def main(task_inputs): task_handlers = [] print(f"started at {time.strftime('%X')}") async with asyncio.TaskGroup() as tg: for task in task_inputs: res = tg.create_task(process_data(download_dummy(task))) # res = tg.create_task(download_dummy(task)) task_handlers.append(res) print(f"finished at {time.strftime('%X')}") results = [task_handler.result() for task_handler in task_handlers] print(results) asyncio.run(main(task_inputs)) The error I get is rather telling, it seems that the first coroutine is not actually executed, when it is passed to the second coroutine, but I am not sure how I can elegantly fix this. + Exception Group Traceback (most recent call last): | File "C:\Program Files\JetBrains\PyCharm Community Edition 2024.1.4\plugins\python-ce\helpers\pydev\pydevd.py", line 2252, in <module> | main() | File "C:\Program Files\JetBrains\PyCharm Community Edition 2024.1.4\plugins\python-ce\helpers\pydev\pydevd.py", line 2234, in main | globals = debugger.run(setup['file'], None, None, is_module) | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | File "C:\Program Files\JetBrains\PyCharm Community Edition 2024.1.4\plugins\python-ce\helpers\pydev\pydevd.py", line 1544, in run | return self._exec(is_module, entry_point_fn, module_name, file, globals, locals) | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | File "C:\Program Files\JetBrains\PyCharm Community Edition 2024.1.4\plugins\python-ce\helpers\pydev\pydevd.py", line 1551, in _exec | pydev_imports.execfile(file, globals, locals) # execute the script | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | File "C:\Program Files\JetBrains\PyCharm Community Edition 2024.1.4\plugins\python-ce\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile | exec(compile(contents+"\n", file, 'exec'), glob, loc) | File "C:\Users\Tue J Boesen\Download\pythonProject\test.py", line 31, in <module> | asyncio.run(main(task_inputs)) | File "C:\Users\Tue J Boesen\AppData\Local\Programs\Python\Python312-arm64\Lib\asyncio\runners.py", line 194, in run | return runner.run(main) | ^^^^^^^^^^^^^^^^ | File "C:\Users\Tue J Boesen\AppData\Local\Programs\Python\Python312-arm64\Lib\asyncio\runners.py", line 118, in run | return self._loop.run_until_complete(task) | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | File "C:\Users\Tue J Boesen\AppData\Local\Programs\Python\Python312-arm64\Lib\asyncio\base_events.py", line 687, in run_until_complete | return future.result() | ^^^^^^^^^^^^^^^ | File "C:\Users\Tue J Boesen\Download\pythonProject\test.py", line 21, in main | async with asyncio.TaskGroup() as tg: | File "C:\Users\Tue J Boesen\AppData\Local\Programs\Python\Python312-arm64\Lib\asyncio\taskgroups.py", line 
145, in __aexit__ | raise me from None | ExceptionGroup: unhandled errors in a TaskGroup (9 sub-exceptions) +-+---------------- 1 ---------------- | Traceback (most recent call last): | File "C:\Users\Tue J Boesen\Download\pythonProject\test.py", line 14, in process_data | processed_data = data*2 | ~~~~^~ | TypeError: unsupported operand type(s) for *: 'coroutine' and 'int' +---------------- 2 ---------------- | Traceback (most recent call last): | File "C:\Users\Tue J Boesen\Download\pythonProject\test.py", line 14, in process_data | processed_data = data*2 | ~~~~^~ | TypeError: unsupported operand type(s) for *: 'coroutine' and 'int' +---------------- 3 ---------------- | Traceback (most recent call last): | File "C:\Users\Tue J Boesen\Download\pythonProject\test.py", line 14, in process_data | processed_data = data*2 | ~~~~^~ | TypeError: unsupported operand type(s) for *: 'coroutine' and 'int' +---------------- 4 ---------------- | Traceback (most recent call last): | File "C:\Users\Tue J Boesen\Download\pythonProject\test.py", line 14, in process_data | processed_data = data*2 | ~~~~^~ | TypeError: unsupported operand type(s) for *: 'coroutine' and 'int' +---------------- 5 ---------------- | Traceback (most recent call last): | File "C:\Users\Tue J Boesen\Download\pythonProject\test.py", line 14, in process_data | processed_data = data*2 | ~~~~^~ | TypeError: unsupported operand type(s) for *: 'coroutine' and 'int' +---------------- 6 ---------------- | Traceback (most recent call last): | File "C:\Users\Tue J Boesen\Download\pythonProject\test.py", line 14, in process_data | processed_data = data*2 | ~~~~^~ | TypeError: unsupported operand type(s) for *: 'coroutine' and 'int' +---------------- 7 ---------------- | Traceback (most recent call last): | File "C:\Users\Tue J Boesen\Download\pythonProject\test.py", line 14, in process_data | processed_data = data*2 | ~~~~^~ | TypeError: unsupported operand type(s) for *: 'coroutine' and 'int' +---------------- 8 ---------------- | Traceback (most recent call last): | File "C:\Users\Tue J Boesen\Download\pythonProject\test.py", line 14, in process_data | processed_data = data*2 | ~~~~^~ | TypeError: unsupported operand type(s) for *: 'coroutine' and 'int' +---------------- 9 ---------------- | Traceback (most recent call last): | File "C:\Users\Tue J Boesen\Download\pythonProject\test.py", line 14, in process_data | processed_data = data*2 | ~~~~^~ | TypeError: unsupported operand type(s) for *: 'coroutine' and 'int' +------------------------------------ | The problem is that the line tg.create_task(process_data(download_dummy(task))) wont call process_data with the result of download_dummy, but with an awaitable - that is, the paramter that gets inside process_data have to be awaited to get to its value. For simple, hard-coded, pipelines, IΒ΄d usually just create a small function calling the co-routines in order: async def pipeline(arg): step1 = await download_data(arg) step2 = await process_results(step1) return step2 And then tg.create_task(pipeline(task))) Turns out that pipeline can be made generic, and get the co-routines to run in series at runtime - that should work even for complicated cases: from typing import Sequence, Awaitable, Any async def pipeline(coroutines: Sequence[awaitable], initial_arg): partial = initial_arg for coroutine in couroutines: partial = await coroutine(partial) return partial [...] 
async def main(task_inputs): task_handlers = [] print(f"started at {time.strftime('%X')}") chain = [download_dummy, process_data] async with asyncio.TaskGroup() as tg: for task in task_inputs: res = tg.create_task(pipeline(chain, task)) task_handlers.append(res) | 2 | 1
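A corrected, self-contained version of the generic pipeline sketched in the answer above (typing names fixed, dummies reused from the question; requires Python 3.11+ for TaskGroup):

```python
import asyncio
import time
from collections.abc import Awaitable, Callable, Sequence

async def download_dummy(url):
    await asyncio.sleep(url)
    print(f"downloaded {url}")
    return url

async def process_data(data):
    await asyncio.sleep(1)
    print(f"processed {data}")
    return data * 2

async def pipeline(steps: Sequence[Callable[..., Awaitable]], initial_arg):
    partial = initial_arg
    for step in steps:
        partial = await step(partial)
    return partial

async def main(task_inputs):
    print(f"started at {time.strftime('%X')}")
    chain = [download_dummy, process_data]
    async with asyncio.TaskGroup() as tg:
        handles = [tg.create_task(pipeline(chain, t)) for t in task_inputs]
    print(f"finished at {time.strftime('%X')}")
    print([h.result() for h in handles])

asyncio.run(main([0, 1, 2]))
```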
78,737,630 | 2024-7-11 | https://stackoverflow.com/questions/78737630/a-kerastensor-cannot-be-used-as-input-to-a-tensorflow-function | I have been following a machine-learning book by Chollet and I keep getting this error in this block of code, specifically in the 3rd line. It seems I am passing a Keras tensor into a tf function but I don't know how to get around this. import tensorflow as tf inputs = keras.Input(shape=(None,), dtype="int64") embedded = tf.one_hot(inputs, depth=max_tokens) x = layers.Bidirectional(layers.LSTM(32))(embedded) x = layers.Dropout(0.5)(x) outputs = layers.Dense(1, activation="sigmoid")(x) model = keras.Model(inputs, outputs) model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"]) model.summary() I tried to follow the solution in the error message which told me to create a new class but didn't know how to do it. | Wrap the TensorFlow function (tf.one_hot) in a layer. And, replace the call to the function with a call to the new layer. eg: class EmbeddedLayer(keras.Layer): def call(self, x): return tf.one_hot(x, depth=max_tokens) inputs = keras.Input(shape=(None,), dtype="int64") embedded = EmbeddedLayer()(inputs) x = layers.Bidirectional(layers.LSTM(32))(embedded) x = layers.Dropout(0.5)(x) outputs = layers.Dense(1, activation="sigmoid")(x) model = keras.Model(inputs, outputs) model.compile( optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"], ) model.summary() | 3 | 5 |
78,741,275 | 2024-7-12 | https://stackoverflow.com/questions/78741275/using-something-like-mydataframe-print-instead-of-printmydataframe | When I run a script from the Spyder console using runfile, source file lines containing expressions (e.g., "HELLO") don't print out to the console. I have to explicitly print, e.g., print("HELLO"). Is there a way to output the string representation of a pandas DataFrame using a method? Something like the very last chained method below: dfShipName[['ShpNm','ShipNamesPrShpNm']] \ .drop_duplicates() \ .groupby('ShipNamesPrShpNm',as_index=False)['ShipNamesPrShpNm'] \ .agg(['count']) \ .rename(columns={'count':'nShpNm'}) \ .sort_values(by='nShpNm',ascending=False) \ .head(20).print() I could encapsulate the entire thing (sans .print()) in a print() invocation, but that's another level of indentation. Just appending a print()-like method avoids having to be much more cleaner, which makes a big difference when I have lots of these (for exploratory analysis). I can also toggle the output by simply commenting away the .print(). | I would just save the dataframe to a variable and print the variable. In any case, you can use pipe to achieve the behavior you're looking for: import pandas as pd df = pd.DataFrame({"a": [1, 2, 3]}) df.add(4).pipe(print) a 0 5 1 6 2 7 | 2 | 5 |
78,740,033 | 2024-7-12 | https://stackoverflow.com/questions/78740033/how-to-instantly-terminate-a-thread-using-ollama-python-api-with-tkinter-to-str | I'm using ollama to stream a response from llama2 large language model. The functionality I need is, when I click the stop button, it should stop the thread immediately. The code below works but the problem is, it always waits for the first chunk to be available before it checks if stop is True. Sometimes the response can take some time before the first chunk is ready. It shouldn't wait for the first chunk. It should immediately stop the thread when stop button is clicked. My code: import tkinter as tk import ollama import threading stop = False # Create the main window root = tk.Tk() root.title("Tkinter Button Example") def get_answer(): print("Start") stream = ollama.chat( model='llama2', messages=[{'role': 'user', 'content': 'write a story about earth'}], stream=True, ) for chunk in stream: if stop is False: print(chunk['message']['content'], end='', flush=True) else: print("Stopped") return # Define the function to be called when the button is pressed def on_button_click(): global stop stop = True print("Button clicked!") # Create the button button = tk.Button(root, text="Stop", command=on_button_click) # Place the button on the window button.pack(pady=20) thread = threading.Thread(target=get_answer) thread.start() # Start the Tkinter event loop root.mainloop() | You cannot instantly terminate a thread in python. This ollama API currently offers an async client, you can use the async client and cancel the Task, this should close the async connection almost instantly. import tkinter as tk import ollama import threading import asyncio from typing import Optional # Create the main window root = tk.Tk() root.title("Tkinter Button Example") client = ollama.AsyncClient() async def get_answer(): print("Start") stream = await client.chat( model='llama2', messages=[{'role': 'user', 'content': 'write a story about earth'}], stream=True, ) async for chunk in stream: print(chunk['message']['content'], end='', flush=True) worker_loop: Optional[asyncio.AbstractEventLoop] = None task_future: Optional[asyncio.Future] = None def worker_function(): global worker_loop, task_future worker_loop = asyncio.new_event_loop() task_future = worker_loop.create_task(get_answer()) worker_loop.run_until_complete(task_future) # Define the function to be called when the button is pressed def on_button_click(): # the loop and the future are not threadsafe worker_loop.call_soon_threadsafe( lambda: task_future.cancel() ) print("Button clicked!") # Create the button button = tk.Button(root, text="Stop", command=on_button_click) # Place the button on the window button.pack(pady=20) thread = threading.Thread(target=worker_function) thread.start() # Start the Tkinter event loop root.mainloop() this does raise an asyncio.CancelledError in the worker thread, so you may want to catch it if you don't want errors on your screen, and you may want to wrap the whole thing in a class rather than relying on globals. | 2 | 1 |
78,739,598 | 2024-7-12 | https://stackoverflow.com/questions/78739598/python-rgba-image-non-zero-pixels-extracting-numpy-mask-speed-up | I need to extract non-zero pixels from RGBA Image. Code below works, but because I need to deal with really huge images, some speed up will be salutary. Getting "f_mask" is the longest task. Is it possible to somehow make things work faster? How to delete rows with all zero values ([0, 0, 0, 0]) faster? import numpy as np import time img_size = (10000, 10000) img = np.zeros((*img_size, 4), float) # Make RGBA image # Put some values for pixels [float, float, float, int] img[0][1] = [1.1, 2.2, 3.3, 4] img[1][0] = [0, 0, 0, 10] img[1][2] = [6.1, 7.1, 8.1, 0] def f_img_to_pts(f_img): # Get non-zero rows with values from whole img array f_shp = f_img.shape f_newshape = (f_shp[0]*f_shp[1], f_shp[2]) f_pts = np.reshape(f_img, f_newshape) f_mask = ~np.all(f_pts == 0, axis=1) f_pts = f_pts[f_mask] return f_pts t1 = time.time() pxs = f_img_to_pts(img) t2 = time.time() print('PIXELS EXTRACTING TIME: ', t2 - t1) print(pxs) | Analysis ~np.all(f_pts == 0, axis=1) is sub-optimal for several reasons: First of all, the operation is memory-bound (typically by the RAM bandwidth) because the image is huge (400 MB in memory) -- even the boolean masks are huge (100 MB). Moreover, Numpy creates multiple temporary arrays: one for the result of f_pts == 0, one for np.all(...) and even one for ~np.all(...). Each array is fully stored in RAM and read back which is inefficient. Even worst: newly allocated arrays are typically not directly mapped in physical memory due to the way virtual memory works and so the expensive overhead of page-faults needs to be paid for each temporary array. Last but not least, Numpy is not optimized for computing array where the main dimension (ie. generally the last) is tiny, like here (4 items). This can be fixed by using a better memory layout though. See AoS vs SoA for more informations. Note that np.zeros reserve some zeroized space in virtual memory but it does not physically map it. Thus, the first call to f_img_to_pts is slower because of page faults. In real-world application, this is generally not the case because writing zeros in img or reading it once causes the whole page to be physically mapped (ie. page-faults). If this is not the case, then you should certainly use sparse matrices instead of dense ones. Let's assume img is already mapped in physical memory (this can be done by running the code twice or by just calling img.fill(0) after np.zeros. Also please note that the dtype float is actually 64-bit float and not 32-bit ones. 64-bit floats are generally more expensive, especially in memory-bound codes since they take twice more space. You should certainly use 32-bit ones for image computations since they nearly never require such a very-high precision. Let's assume the input array is a 32-bit one now. Solutions One way to reduce the overheads is to iterate row by row (loops are not so bad as they seems here, quite the opposite actually if done correctly). While this should be faster, this is still sub-optimal. There is no way to make this much faster than that because of the way Numpy is actually designed. To make this even faster, one can use modules compiling Python functions to native ones like Numba or Cython. With them, we can easily avoid the creation of temporary arrays, specialize the code for 4-channel images and even use multiple threads. 
Here is the resulting code: import numpy as np import time import numba as nb img_size = (10000, 10000) img = np.zeros((*img_size, 4), np.float32) # Make RGBA image img.fill(0) # Put some values for pixels [float, float, float, int] img[0][1] = [1.1, 2.2, 3.3, 4] img[1][0] = [0, 0, 0, 10] img[1][2] = [6.1, 7.1, 8.1, 0] # Compute ~np.all(f_pts==0,axis=1) and assume most items are 0 @nb.njit('(float32[:,:,::1],)', parallel=True) def f_img_to_pts_optim(f_img): # Get non-zero rows with values from whole img array n, m, c = f_img.shape assert c == 4 # For sake of performance f_mask = np.zeros(n * m, dtype=np.bool_) f_pts = np.reshape(f_img, (n*m, c)) for ij in nb.prange(n * m): f_mask[ij] = (f_pts[ij, 0] != 0) | (f_pts[ij, 1] != 0) | (f_pts[ij, 2] != 0) | (f_pts[ij, 3] != 0) f_pts = f_pts[f_mask] return f_pts t1 = time.time() pxs = f_img_to_pts_optim(img) t2 = time.time() print('PIXELS EXTRACTING TIME: ', t2 - t1) print(pxs) Results Here are timings on a 10-core Skylake Xeon CPU: Initial code (64-bit): 1720 ms Initial code (32-bit): 1630 ms Optimized code (sequential): 323 ms Optimized code (parallel): 182 ms <---------- The optimized code is 9 times faster than the initial code (on the same input type). Note that the code does not scale with the number of core since reading the input is a bottleneck combined with the sequential f_pts[f_mask] operation. One can build f_pts[f_mask] using multiple threads (and it should be about 2 times faster) but this is rather complicated to do (especially with Numba). | 2 | 3 |
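For readers who cannot take on a Numba dependency, the per-channel trick used in the kernel above can also be written in plain NumPy; this is only a sketch, it stays single-threaded and will not match the parallel Numba timings, but it avoids the large (n*m, 4) boolean temporary that np.all(f_pts == 0, axis=1) creates:

import numpy as np

def f_img_to_pts_numpy(f_img):
    f_pts = f_img.reshape(-1, f_img.shape[2])     # flatten to (n*m, channels), a view for contiguous input
    f_mask = f_pts[:, 0] != 0                     # one (n*m,) boolean per channel instead of (n*m, 4)
    for c in range(1, f_pts.shape[1]):
        f_mask |= f_pts[:, c] != 0                # accumulate in place
    return f_pts[f_mask]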
78,736,930 | 2024-7-11 | https://stackoverflow.com/questions/78736930/pycord-unknown-interaction | I'm trying to make a embed message with buttons. For support system but I'm getting Unknown interaction error. I added ephemeral=True to defer. Also changed this: interaction.response.send_messageto this interaction.followup.send() embed = discord.Embed( title="Are You Looking For Help?", description="Use buttons man.", color=discord.Colour.blurple(), ) class MyView(discord.ui.View): @discord.ui.button(label="Open Ticket", style=discord.ButtonStyle.primary, emoji="π") async def button_callback(self, interaction: discord.Interaction, _): await interaction.followup.send("You clicked the button!") @bot.slash_command(name="support") async def support(ctx): await ctx.defer(ephemeral=True) view = MyView() await ctx.response.send_message(embed=embed, view=view) bot.run("token") Error: C:\Users\playe\PycharmProjects\pythonProject\.venv\Scripts\python.exe "C:\Users\playe\PycharmProjects\Minecraft Player Bot\main.py" Lolo RS's Main#6223 Ignoring exception in command support: Traceback (most recent call last): File "C:\Users\playe\PycharmProjects\pythonProject\.venv\Lib\site-packages\discord\commands\core.py", line 131, in wrapped ret = await coro(arg) ^^^^^^^^^^^^^^^ File "C:\Users\playe\PycharmProjects\pythonProject\.venv\Lib\site-packages\discord\commands\core.py", line 1013, in _invoke await self.callback(ctx, **kwargs) File "C:\Users\playe\PycharmProjects\Minecraft Player Bot\main.py", line 35, in support await ctx.defer(ephemeral=True) File "C:\Users\playe\PycharmProjects\pythonProject\.venv\Lib\site-packages\discord\interactions.py", line 748, in defer await self._locked_response( File "C:\Users\playe\PycharmProjects\pythonProject\.venv\Lib\site-packages\discord\interactions.py", line 1243, in _locked_response await coro File "C:\Users\playe\PycharmProjects\pythonProject\.venv\Lib\site-packages\discord\webhook\async_.py", line 220, in request raise NotFound(response, data) discord.errors.NotFound: 404 Not Found (error code: 10062): Unknown interaction The above exception was the direct cause of the following exception: Traceback (most recent call last): File "C:\Users\playe\PycharmProjects\pythonProject\.venv\Lib\site-packages\discord\bot.py", line 1130, in invoke_application_command await ctx.command.invoke(ctx) File "C:\Users\playe\PycharmProjects\pythonProject\.venv\Lib\site-packages\discord\commands\core.py", line 376, in invoke await injected(ctx) File "C:\Users\playe\PycharmProjects\pythonProject\.venv\Lib\site-packages\discord\commands\core.py", line 139, in wrapped raise ApplicationCommandInvokeError(exc) from exc discord.errors.ApplicationCommandInvokeError: Application Command raised an exception: NotFound: 404 Not Found (error code: 10062): Unknown interactio n | Edit: Based on furhter information, I would say that the issue also intermittently comes from a slow internet connection with discord from the computer your bot is running on. This means that upon recieving the interaction on your computer, the interaction will already have timed out, simply because it didn't arrive fast enough to you. The fact that the interaction is not found means that, when your code responded to it, discord already deleted it from their system, because you need to respond to it in ~3s or so. Doing ctx.defer() is also considered responding to the interaction. The issue you are facing seems to come from multiple small mistakes in how to handle interactions in the pycord library. 
If you want to jump right in, you have my working code below, but here are some explanations on your mistakes, and how I fixed them: Mixing up discord.Interaction and discord.ApplicationContext: Buttons, and other UI elements and features, return the first one, but slash command the second. In truth, ApplicationContext is a wrapper around Interaction. In your slash command, you are calling ctx.response.send_message, but doing ctx.respond is actually the correct thing, as everything is handled by the library. Bad parameter order in your button callback: The callback should by specification have the following parameters, in order: self, button: discord.Button (the button itself), interaction: discord.Interaction (the interaction). However, you put the interaction first. Calling interaction.followup.send() instead of interaction.respond() or response.send_message(): This interaction is different from the one you had with the button component, and has never been responded to before. The .respond method is a shortcut to do the correct one without worrying about having to think about it. Also, you used interaction.followup.send()(message) with the parentheses twice, which should not be the case. embed = discord.Embed( title="Are You Looking For Help?", description="Use buttons man.", color=discord.Colour.blurple(), ) class MyView(discord.ui.View): @discord.ui.button(label="Open Ticket", style=discord.ButtonStyle.primary, emoji="π") async def button_callback(self, button: discord.Button, interaction: discord.Interaction): await interaction.response.defer(ephemeral=True) await interaction.respond("You clicked the button!", ephemeral=True) @bot.slash_command(name="support") async def support(ctx: discord.ApplicationContext): await ctx.defer(ephemeral=True) view = MyView() await ctx.respond(embed=embed, view=view) Let me know if anything is unclear in a comment and I will try to edit my response. | 4 | 4 |
78,737,053 | 2024-7-11 | https://stackoverflow.com/questions/78737053/importerror-cant-import-name-perf-counter-what-may-be-the-reason | I've been working on my code for a MicroPython-flashed ESP8266, and well... this error occurred: from time import perf_counter ImportError: can't import name perf_counter I believe that the whole code itself is irrelevant, as the problem occurs during the import itself. What may be relevant, though, is the list of functions imported, which looks strange to me: >>> import time >>> dir(time) ['__class__', '__name__', '__dict__', 'gmtime', 'localtime', 'mktime', 'sleep', 'sleep_ms', 'sleep_us', 'ticks_add', 'ticks_cpu', 'ticks_diff', 'ticks_ms', 'ticks_us', 'time', 'time_ns'] I'm currently using Python version 3.9.0. I have tried looking up the failure on Google, but found only the older failures that are associated with time.clock removal and its later replacement with time.perf_counter. | In MicroPython, according to its documentation, the time module doesn't have perf_counter. | 3 | 3 |
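If the underlying goal was timing code on the ESP8266, the ticks functions that do appear in the dir() output above are the usual MicroPython replacement for perf_counter; a sketch, where do_work() stands in for whatever is being timed:

import time

start = time.ticks_ms()
do_work()                                            # placeholder for the code being measured
elapsed_ms = time.ticks_diff(time.ticks_ms(), start)
print(elapsed_ms)

ticks_diff handles the wrap-around of the tick counter, which is why it is preferred over plain subtraction.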
78,735,531 | 2024-7-11 | https://stackoverflow.com/questions/78735531/numpy-memory-error-when-masking-along-only-certain-axis-despite-having-sufficie | I have a large array and I want to mask out certain values (set them to nodata). But I'm experiencing an out-of-memory error despite having sufficient RAM. I have shown below an example that reproduces my situation. My array is 14.5 GB and the mask is ~7GB, but I have 64GB of RAM dedicated to this, so I don't understand why this fails. import numpy as np arr = np.zeros((1, 71829, 101321), dtype='uint16') arr.nbytes #14555572218 mask = np.random.randint(2, size=(71829, 101321), dtype='bool') mask.nbytes #7277786109 nodata = 0 #this results in OOM error arr[:, mask] = nodata Interestingly, if I do the following, then things work. arr = np.zeros((71829, 101321), dtype='uint16') arr.nbytes #14555572218 mask = np.random.randint(2, size=(71829, 101321), dtype='bool') mask.nbytes #7277786109 nodata = 0 #this works arr[mask] = nodata But it isn't something I can use. This code will be a part of a library module that would need to accept a variable value for the zeroth dimension. My guess is that arr[mask] = nodata is modifying the array in-place but arr[:, mask] = nodata is creating a new array, but I don't know why that would be the case. Even if it did, there should still be enough space for that, since the total size of arr and mask would be 22GB and I have 64GB of RAM. I tried searching about this, I found this but I'm new to numpy and I didn't understand the explanation of the longer answer. I did try the np.where approach from the other answer to that question, but I still get OOM error. Any input would be appreciated. | I suspect the issue here is that combining slice-based and mask-based indexing leads to a memory-inefficient codepath. You might try expressing it this way so that you're using entirely mask-based indexing: arr[mask[None]] = nodata I don't know enough about the implementation of np.ndarray.__setitem__ to guess at why the arr[:, mask] version leads to memory issues. | 2 | 1 |
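Another workaround, if the broadcast form ever proves awkward, is to fall back on the 2-D indexing path that already works by looping over the leading axis; a sketch using the arr and mask from the question (the loop is short because the leading dimension is small):

import numpy as np

nodata = 0
for i in range(arr.shape[0]):
    arr[i][mask] = nodata   # each slice is a 2-D view, so this hits the memory-friendly code path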
78,735,133 | 2024-7-11 | https://stackoverflow.com/questions/78735133/how-to-get-a-square-wave-with-specific-tails-length-by-using-numpy | I'm trying to represent a special square wave characterized by some features, as represented here: all values are related to the thick parameter. Tails are related to the thick parameter and to the total length (the size of the tail is equal to three times the thickness plus a remainder which depends on the number of notches that can be placed in the total length). Here is the code that allowed me to obtain some good results: import numpy as np thick = 4 length = 60 gap = 2*thick remain = length % thick tail = 3*thick + remain half_floor_division = int(((length-tail) // (thick*2))) motif = np.full(half_floor_division,gap) motif = np.concatenate([[tail],motif[:-1],[tail]]) x = np.cumsum(motif) x = np.insert(x, 0, 0., axis=0) x = np.repeat(x,2 )[1:-1] y = np.tile([0,0,thick,thick],int(np.ceil(len(x)/4)))[:len(x)] and here are some results obtained with different lengths and thicknesses: Well, the code seems a little complicated to me... I have the impression that there is a way to make it simpler... and in addition it is unstable, since for certain values the end of the pattern is no longer correct. Does anyone have any advice, or another approach to correct the situation? | In your "OK" examples the parts do not add up to the length... I don't fully understand what numpy array you are trying to build, but to compute the values of the parameters, I think the easiest way is to take away from length the two default tails and one gap. What's left will be an unknown number of gap pairs including a valley and a crest, plus the two remains to add to each tail. In code, this would look like: def get_motif(length, thick): gap = 2 * thick tail = 3 * thick n, remain = divmod(length - 2 * tail - gap, 2 * gap) tail += remain / 2 motif = [tail] + [gap] * (2 * n + 1) + [tail] assert sum(motif) == length return motif Running this on your first and last examples: >>> get_motif(60, 4) [18.0, 8, 8, 8, 18.0] >>> get_motif(75, 3) [10.5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 10.5] | 4 | 1 |
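To connect the answer back to the plotting code in the question, the list returned by get_motif can be fed straight into the same cumsum/repeat/tile construction; a sketch under that assumption, reusing get_motif exactly as defined above:

import numpy as np

thick, length = 4, 60
motif = np.array(get_motif(length, thick))
x = np.repeat(np.insert(np.cumsum(motif), 0, 0.0), 2)[1:-1]            # duplicate the step positions
y = np.tile([0, 0, thick, thick], int(np.ceil(len(x) / 4)))[:len(x)]   # alternate floor and crest heights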
78,734,439 | 2024-7-11 | https://stackoverflow.com/questions/78734439/filling-in-plotted-polygon-shape-in-ternary-plot | I've plotted a polygon on a set of ternary axes using python-ternary and joined up the vertices to form a region on the plot. However, I'd like to be able to fill this region so that it is a shape rather than just the outline. I used the following code to generate the axes and a set of vertices for the polygon: import ternary import numpy as np # Define the scale (the sum of the three components for each point) scale = 100 # Define the vertices of the polygon (each point should sum to the scale value) polygon_vertices = [ (20, 30, 50), (40, 30, 30), (30, 60, 10), (10, 50, 40) ] # Create a figure with a ternary axis figure, tax = ternary. Figure(scale=scale) tax.set_title("Ternary Plot with Polygon Contour", fontsize=20) # Plot the vertices of the polygon on the ternary plot tax.scatter(polygon_vertices, marker='o', color='red', label='Vertices') # Connect the vertices to form the contour of the polygon for i in range(len(polygon_vertices)): next_i = (i + 1) % len(polygon_vertices) tax. Plot([polygon_vertices[i], polygon_vertices[next_i]], color='blue', linewidth=2, linestyle='--') # Fill the polygon area using the ax. Fill method tax.ax.fill(*zip(*polygon_vertices), color='lightblue', alpha=0.8, label='Polygon Area') # Set axis labels tax.left_axis_label("Component 1", fontsize=15) tax.right_axis_label("Component 2", fontsize=15) tax.bottom_axis_label("Component 3", fontsize=15) tax.get_axes().axis('off') # Set the gridlines tax.gridlines(color="blue", multiple=10) # Set ticks and gridlines tax.ticks(axis='lbr', linewidth=1, multiple=10) tax.gridlines(multiple=10, color="black") # Display the legend tax.legend() tax.get_axes().set_aspect(1) #sets as equilateral triangle, needs to be the last step of plotting tax._redraw_labels() #see above # Display the plot tax.show() I get the resulting plot shown here: As seen in the plot, it does plot the filled polygon but not in the correct place, and it plots two of them. I'm not sure why. How do I get the correct region filled? Note: This is not the same question as Python fill polygon as this is Cartesian coordinates. When I try and plot with add_patches, I get a Type Error as Polygon can only take two positional arguments but I provide it with three, despite the earlier conversion of ternary to cartesian coordinates. | Calling tax.ax.fill() will go to matplotlib's standard ax.fill(). That function doesn't work with ternary coordinates, it needs regular x and y coordinates. As tax.fill() isn't implemented in the ternary library, you can use project_sequence() from ternary.helpers to convert the ternary coordinates to regular xs and ys. And then call matplotlib's tax.ax.fill() with those. 
import ternary from ternary.helpers import project_sequence import numpy as np # Define the scale (the sum of the three components for each point) scale = 100 # Define the vertices of the polygon (each point should sum to the scale value) polygon_vertices = [ (20, 30, 50), (40, 30, 30), (30, 60, 10), (10, 50, 40) ] # Create a figure with a ternary axis figure, tax = ternary.figure(scale=scale) tax.set_title("Ternary Plot with Polygon Contour", fontsize=20) # Plot the vertices of the polygon on the ternary plot tax.scatter(polygon_vertices, marker='o', color='red', label='Vertices') # Connect the vertices to form the contour of the polygon for i in range(len(polygon_vertices)): next_i = (i + 1) % len(polygon_vertices) tax.plot([polygon_vertices[i], polygon_vertices[next_i]], color='blue', linewidth=2, linestyle='--') # Fill the polygon area using the ax.fill method xs, ys = project_sequence(polygon_vertices) tax.ax.fill(xs, ys, color='lightblue', alpha=0.8, label='Polygon Area') # Set axis labels tax.left_axis_label("Component 1", fontsize=15) tax.right_axis_label("Component 2", fontsize=15) tax.bottom_axis_label("Component 3", fontsize=15) tax.ax.axis('off') # Set the gridlines tax.gridlines(color="blue", multiple=10) # Set ticks and gridlines tax.ticks(axis='lbr', linewidth=1, multiple=10) tax.gridlines(multiple=10, color="black") # Display the legend tax.legend() tax.ax.set_aspect(1) #sets as equilateral triangle, needs to be the last step of plotting tax._redraw_labels() #see above # Display the plot tax.show() | 4 | 2 |
78,733,218 | 2024-7-11 | https://stackoverflow.com/questions/78733218/how-to-format-string-date-for-aws-glue-crawler-data-frame-to-correctly-identify | I have some JSON data (sample below). The AWS Glue crawler reads this data and creates a Glue catalog database with a table, and sets the date field as a string field. Is there a way I can format the date in my JSON file such that the crawler can identify it as a date field? I plan to read this data into a dynamic frame via AWS Glue ETL and push it to a SQL database, where I want to save it as a date field, so that it is easy to query and do comparisons on the date field. Example of the script below. Can I convert the string date field to an RDS date field in the Spark data frame? myscript.py data=gluecontext.create_dynamic_frame.from_catalog(database="sample", table_name="table" ... data_frame=data.toDF() # convert the string field to date field in the spark data frame {"id": "abc", .... date="2024-07-09"} ... | You can use to_date to convert the string field to the date field in the spark dataframe as follows: from pyspark.sql.functions import to_date data=gluecontext.create_dynamic_frame.from_catalog(database="sample", table_name="table") data_frame = data.toDF() # convert the string field to the date field in the spark data frame data_frame = data_frame.withColumn("date", to_date("date", "yyyy-MM-dd")) | 4 | 3 |
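To push the converted frame to the SQL target, the Spark DataFrame can be wrapped back into a DynamicFrame; a sketch — "dyf_out" is just an arbitrary name, and the actual JDBC/RDS connection settings are not shown because they are not in the question:

from awsglue.dynamicframe import DynamicFrame

dyf_out = DynamicFrame.fromDF(data_frame, gluecontext, "dyf_out")  # re-wrap for Glue sinks

From there the usual Glue writers (for example a JDBC connection defined in the Glue catalog) can persist it, and the column arrives as a proper date type rather than a string.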
78,733,472 | 2024-7-11 | https://stackoverflow.com/questions/78733472/how-to-initialise-a-list-defined-at-the-class-level-from-a-common-method-outside | I have a list defined at the class level in Python and below is my code: class TestStudentReg(unittest.TestCase): student_id_list = [] I'm trying to initialise the list by reading values from a csv file and below is the method that i use within the same class i.e., TestStudentReg @classmethod def initialise_student_id_list(cls): filename = 'input/student_testdata.csv' with open(filename, 'r') as csvfile: datareader = (csv.reader(csvfile, delimiter="|")) next(datareader, None) # skip the headers for col in datareader: cls.student_id_list.append(col[2]) Now that this method is being common across several classes, i'm planning to make it as a reusable function and define it in a separate python file (utilities.py) which will be like this: import csv import logging import time import timeit def get_date(dateformat="%Y-%m-%d", subtract_num_of_days=0): getdate = date.today() - timedelta(subtract_num_of_days) return getdate.strftime(dateformat) def initialise_student_id_list(**kwargs): filename = 'input/student_testdata.csv' with open(filename, 'r') as csvfile: datareader = (csv.reader(csvfile, delimiter="|")) next(datareader, None) # skip the headers for col in datareader: cls.student_id_list.append(col[2]) I would like to understand how to initialise student_id_list from the utilities file. One way i thought to implement this was to have the method return a list and initialise the value by calling this method. def initialise_student_id_list(**kwargs): id_list=[] filename = 'input/student_testdata.csv' with open(filename, 'r') as csvfile: datareader = (csv.reader(csvfile, delimiter="|")) next(datareader, None) # skip the headers for col in datareader: cls.student_id_list.append(col[2]) return id_list Not sure if this is a correct approach. Great if someone could help me on this please. | There are multiple ways to solve your problem. Here I will list three with increasing readability and overall quality: def student_id_list_initialiser(class_obj, **kwargs): cls = class_obj filename = 'input/student_testdata.csv' with open(filename, 'r') as csvfile: datareader = (csv.reader(csvfile, delimiter="|")) next(datareader, None) # skip the headers for col in datareader: cls.student_id_list.append(col[2]) class MyTest(): student_id_list = [] @classmethod def initialise_student_id_list(cls): student_id_list_initialiser(cls) This approach works fine, but it isn't the best because we have to manually pass-in the class object and the code breaks encapsulation. However, if coupled with a simple and short classmethod that sets the class variable student_id_list, it will be better: def student_id_list_creator(**kwargs): ret_l = [] filename = 'input/student_testdata.csv' with open(filename, 'r') as csvfile: datareader = (csv.reader(csvfile, delimiter="|")) next(datareader, None) # skip the headers for col in datareader: ret_l.append(col[2]) return ret_l class MyTest(): student_id_list = [] @classmethod def initialise_student_id_list(cls): cls.student_id_list = student_id_list_creator() This is similar to what you said you thought of, and yes, it is indeed a good approach. Finally, inheritance is perhaps the best option here, although not a direct inheritance from the TestStudentReg class. 
Here's the code: class StudentInfoHelper(): @classmethod def initialise_student_id_list(cls): filename = 'input/student_testdata.csv' with open(filename, 'r') as csvfile: datareader = (csv.reader(csvfile, delimiter="|")) next(datareader, None) # skip the headers for col in datareader: cls.student_id_list.append(col[2]) class TestStudentReg(StudentInfoHelper): student_id_list = [] # outside: TestStudentReg.initialise_student_id_list() This code is not only more readable but also much more flexible, and it can be applied to any class as long as the class inherits from StudentInfoHelper. The only downside to this approach is that you must name your class variable student_id_list. But since you already decided the variable name, it wouldn't be a problem. It wouldn't work if you define a method named student_id_initialiser. The reason is pretty obvious: if you do, the original method will be "shadowed" by the new one. But sometimes this is also a good thing, if you want to customize the method or if you don't want it at all. | 2 | 2 |
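In a unittest-based test class like the one in the question, the list-returning variant also slots naturally into setUpClass, so the data is loaded once per class; a sketch assuming student_id_list_creator lives in utilities.py as described:

import unittest
from utilities import student_id_list_creator

class TestStudentReg(unittest.TestCase):
    student_id_list = []

    @classmethod
    def setUpClass(cls):
        cls.student_id_list = student_id_list_creator()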
78,733,361 | 2024-7-11 | https://stackoverflow.com/questions/78733361/how-to-take-the-average-of-all-previous-entries-in-a-group | I'd like to do the following in python using the polars library: Input: df = pl.from_repr(""" ββββββββ¬βββββββββ β Name β Number β β --- β --- β β str β i64 β ββββββββͺβββββββββ‘ β Mr.A β 1 β β Mr.A β 4 β β Mr.A β 5 β β Mr.B β 3 β β Mr.B β 5 β β Mr.B β 6 β β Mr.B β 10 β ββββββββ΄βββββββββ """) Output: shape: (7, 3) ββββββββ¬βββββββββ¬βββββββββββ β Name β Number β average β β --- β --- β --- β β str β i64 β f64 β ββββββββͺβββββββββͺβββββββββββ‘ β Mr.A β 1 β 0.0 β β Mr.A β 4 β 1.0 β β Mr.A β 5 β 2.5 β β Mr.B β 3 β 0.0 β β Mr.B β 5 β 3.0 β β Mr.B β 6 β 4.0 β β Mr.B β 10 β 4.666667 β ββββββββ΄βββββββββ΄βββββββββββ That is to say: For every first entry of a person, set the average to zero. For every subsequent entry, calculate the average based on the previous entries Example: Mr. A started off with average=0 and the Number=1. Then, Mr. A has the Number=4, thus it took the average of the previous entry (1/1 data=1) Then, Mr. A has the Number=5, thus the previous average was: (1+4) / (2 data) = 5/2 = 2.5 And so on I've tried the rolling mean function (using a Polars Dataframe, df), however, I'm restricted by rolling_mean's window size (i.e. it calculates only the past 2 entries, plus it averages the current entry as well; I want to average only the previous entries) Does anyone have an idea? Much appreciated!: df.group_by("Name").agg(pl.col("Number").rolling_mean(window_size=2)) | cum_sum() and cum_count() to calculate cumulative average. shift() so it's calculated for 'previous' row. over() to do the whole operation within Name. df.with_columns( (pl.col.Number.cum_sum() / pl.col.Number.cum_count()) .shift(1, fill_value=0) .over("Name") .alias("average") ) ββββββββ¬βββββββββ¬βββββββββββ β Name β Number β average β β --- β --- β --- β β str β i64 β f64 β ββββββββͺβββββββββͺβββββββββββ‘ β Mr.A β 1 β 0.0 β β Mr.A β 4 β 1.0 β β Mr.A β 5 β 2.5 β β Mr.B β 3 β 0.0 β β Mr.B β 5 β 3.0 β β Mr.B β 6 β 4.0 β β Mr.B β 10 β 4.666667 β ββββββββ΄βββββββββ΄βββββββββββ | 5 | 4 |
78,732,336 | 2024-7-10 | https://stackoverflow.com/questions/78732336/groupby-multiple-columns-and-extract-top-rows-based-on-non-grouped-column-value | I am trying to solve a problem some what very similar to: https://platform.stratascratch.com/coding/10362-top-monthly-sellers?code_type=2 here is my data frame: product seller market total_sales books s1 de 10 books s2 jp 20 books s3 in 30 books s1 de 25 books s5 in 15 books s1 us 12 books s2 uk 10 clothing s1 de 11 clothing s2 de 18 clothing s1 uk 55 clothing s3 in 31 clothing s1 de 14 clothing s2 de 10 clothing s1 de 35 electronics s1 us 18 electronics s1 de 12 electronics s2 in 16 electronics s3 uk 24 electronics s1 us 37 electronics s4 jp 27 electronics s3 uk 26 electronics s1 us 15 Expected output is: product seller market total_sales books s1 de 35 books s3 in 30 books s2 jp 20 Clothing s1 de 60 Clothing s1 uk 55 Clothing s3 in 31 electronics s1 us 70 electronics s3 uk 50 electronics s4 jp 27 I want to get top three product category sales in the world. I was able to aggregate total sales basing on product,seller & group them. After grouping I am able to sort it by total_sales per group. But, I was not able to get only the top-3 total_sale rows per ['product'] group. Cany anyone help on this. import pandas as pd df = df.groupby(['product', 'seller','market'], as_index = False).agg({'total_sales':sum}) df = df.sort_values(['product','total_sales'],ascending=False).groupby(['product', 'seller','market',],as_index = False).aggregate(lambda x: ','.join(map(str, x))) print(df) | You can do it like this: (dfs:= df.groupby(['product', 'seller', 'market'], as_index=False)['total_sales'].sum())\ .loc[dfs.groupby('product')['total_sales'].rank(ascending=False)<4]\ .sort_values(['product','total_sales'], ascending=[True, False]) Output: product seller market total_sales 0 books s1 de 35 4 books s3 in 30 2 books s2 jp 20 6 clothing s1 de 60 7 clothing s1 uk 55 9 clothing s3 in 31 11 electronics s1 us 70 13 electronics s3 uk 50 14 electronics s4 jp 27 Details: Use walrus operator to assign groupby product, seller and market dataframe to a variable dfs. Filter that dataframe by dfs groupby product and rank, then boolean index dfs for only to top three rank Lastly, sort the filtered dataframe on product and total_sales descending. | 3 | 4 |
78,729,859 | 2024-7-10 | https://stackoverflow.com/questions/78729859/numpythonic-way-to-fill-value-based-on-range-indices-reference-label-encoding-f | I have this tensor dimension: (batch_size, class_id, range_indices) -> (4, 3, 2) int64 [[[1250 1302] [1324 1374] [1458 1572]] [[1911 1955] [1979 2028] [2120 2224]] [[2546 2599] [2624 2668] [2765 2871]] [[3223 3270] [3286 3347] [3434 3539]]] How do I construct dense representation with filled value with this rule? Since there are 3 class IDs, therefore: Class ID 0: filled with 1 Class ID 1: filled with 2 Class ID 2: filled with 3 Default: filled with 0 Therefore, it will outputting vector like this: [0 0 0 ...(until 1250)... 1 1 1 ...(until 1302)... 0 0 0 ...(until 1324)... 2 2 2 ...(until 1374)... and so on] Here is a copiable code: data = np.array([[[1250, 1302], [1324, 1374], [1458, 1572]], [[1911, 1955], [1979, 2028], [2120, 2224]], [[2546, 2599], [2624, 2668], [2765, 2871]], [[3223, 3270], [3286, 3347], [3434, 3539]]]) Here is code generated by ChatGPT, but I'm not sure it's Numpythonic since it's using list comprehension: import numpy as np # Given tensor tensor = np.array([[[1250, 1302], [1324, 1374], [1458, 1572]], [[1911, 1955], [1979, 2028], [2120, 2224]], [[2546, 2599], [2624, 2668], [2765, 2871]], [[3223, 3270], [3286, 3347], [3434, 3539]]]) # Determine the maximum value in the tensor to define the size of the output array max_value = tensor.max() # Create an empty array filled with zeros of size max_value + 1 dense_representation = np.zeros(max_value + 1, dtype=int) # Generate the class_ids array, replicated for each batch class_ids = np.tile(np.arange(1, tensor.shape[1] + 1), tensor.shape[0]) # Generate start and end indices start_indices = tensor[:, :, 0].ravel() end_indices = tensor[:, :, 1].ravel() # Create an array of indices to fill indices = np.hstack([np.arange(start, end) for start, end in zip(start_indices, end_indices)]) # Create an array of values to fill values = np.hstack([np.full(end - start, class_id) for start, end, class_id in zip(start_indices, end_indices, class_ids)]) # Fill the dense representation array dense_representation[indices] = values # The resulting dense representation print(dense_representation) print(dense_representation[1249:1303]) print(dense_representation[1323:1375]) print(dense_representation[1457:1573]) print(dense_representation[1910:1956]) Output: [0 0 0 ... 
3 3 0] [0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0] [0 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 0] [0 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 0] [0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0] | IIUC, you could craft the output array with zeros, repeat, tile: start = data[..., 0].ravel() end = data[..., 1].ravel() slices = [slice(a,b) for a,b in zip(start, end)] n = end-start out = np.zeros(data.max(), dtype='int') out[np.r_[*slices]] = np.repeat(np.tile(np.arange(data.shape[1])+1, data.shape[0]), n) Variant with boolean indexing: start = data[..., 0].ravel() end = data[..., 1].ravel() out = np.zeros(data.max(), dtype='int') idx = np.arange(len(out)) m = ((idx >= start[:, None]) & (idx < end[:, None])).any(axis=0) n = end-start out[m] = np.repeat(np.tile(np.arange(data.shape[1])+1, data.shape[0]), n) Or: start = data[..., 0].ravel() end = data[..., 1].ravel() out = np.zeros(data.max(), dtype='int') idx = np.arange(len(out)) m1 = ((idx >= start[:, None]) & (idx < end[:, None])) m2 = m1.any(axis=0) nums = np.tile(np.arange(data.shape[1])+1, data.shape[0]) out[m2] = nums[m1[:, m2].argmax(0)] Output: [0 0 0 ... 3 3 3] | 2 | 1 |
78,729,171 | 2024-7-10 | https://stackoverflow.com/questions/78729171/how-to-have-the-size-of-markers-match-in-a-matplotlib-plot-and-in-its-legend | I have different (X,Y) data series drawn as scatter plots on one figure. Within each series, the marker size is set to different values according to the rank of the data point within the series (i.e. the 1st point in the series is shown with a smaller marker than the 2nd, which itself is shown with a smaller marker than the 3rd and so onβ¦). I want to show in the legend which marker size correspond to which rank. I managed to set the marker size in the legend handles to the same values as those on the graph. I also set the markerscale keyword of the legend() function to 1, aiming to have the markers be the same size in both legend and graph. However, the markers appear significantly larger in the legend than in the graph. Here is a code snippet replicating my issue. import numpy as np import numpy.random as rd import matplotlib.pyplot as plt from matplotlib.lines import Line2D rd.seed(12345) fig,ax = plt.subplots() # Number of time steps in each series nb_steps = 5 t = np.linspace(1,nb_steps,nb_steps) # Series 1 x1 = rd.random(nb_steps) y1 = rd.random(nb_steps) # Series 2 x2 = 1+2*rd.random(nb_steps) y2 = 1+rd.random(nb_steps) # Plotting with increasing marker size within series marker_sizes = (t+3)**2 ax.scatter(x1,y1, marker = '+', s = marker_sizes) ax.scatter(x2,y2, marker = '+', s = marker_sizes) # Writing marker size legend handles = [] for size, step in zip(marker_sizes, t): handles.append(Line2D([0], [0], marker='+', lw = 0, color='k', markersize=size, label=step)) ax.legend(handles = handles, markerscale=1) plt.show() And here is the output figure Thanks in advance for your help. | The markersizes in pyplot.scatter use the squared form, proportional to area (as you have already noted). However, those in the legend appear not to. I suggest that you set your marker sizes as the linear form (so that they are OK for the legend): marker_sizes = t+3 and then make the squaring explicit in pyplot.scatter() (but NOT in the legend): ax.scatter(x1,y1, marker = '+', s = marker_sizes**2) | 3 | 2 |
78,728,576 | 2024-7-10 | https://stackoverflow.com/questions/78728576/polars-apply-function-to-check-if-a-row-value-is-a-substring-of-another-string | I'm trying to check if string_1 = "this example string" contains a column value as a substring. For example the first value in Col B should be True since "example" is a substring of string_1 string_1 = "this example string" df = pl.from_repr(""" ββββββββββ¬ββββββββββ¬βββββββββββββ β Col A β Col B β Col C β β --- β --- β --- β β str β str β str β ββββββββββͺββββββββββͺβββββββββββββ‘ β 448220 β example β 7101936801 β β 518398 β 99999 β 9999900091 β β 557232 β 424570 β 4245742060 β ββββββββββ΄ββββββββββ΄βββββββββββββ """) This is what I have tried so far, but it's returning the following error: df=df.with_columns(pl.col("Col B").apply(lambda x: x in string_1).alias("new_col")) AttributeError: 'Expr' object has no attribute 'apply' | It's always better to avoid using python functions and use native polars expressions. pl.lit() to create a dummy column from string_1 str.contains() to check if string contains a column value. ( df .with_columns(pl.lit(string_1).str.contains(pl.col('Col B')).alias('new_col') ) Or you can use name.keep() if you want to check all columns. ( df .with_columns(pl.lit(string_1).str.contains(pl.all()).name.keep()) ) βββββββββ¬ββββββββ¬ββββββββ β Col A β Col B β Col C β β --- β --- β --- β β bool β bool β bool β βββββββββͺββββββββͺββββββββ‘ β false β true β false β β false β false β false β β false β false β false β βββββββββ΄ββββββββ΄ββββββββ or something like this if you need all new columns: name.suffix() ( df .with_columns(pl.lit(string_1).str.contains(pl.all()).name.suffix('_match')) ) ββββββββββ¬ββββββββββ¬βββββββββββββ¬ββββββββββββββ¬ββββββββββββββ¬ββββββββββββββ β Col A β Col B β Col C β Col A_match β Col B_match β Col C_match β β --- β --- β --- β --- β --- β --- β β str β str β str β bool β bool β bool β ββββββββββͺββββββββββͺβββββββββββββͺββββββββββββββͺββββββββββββββͺββββββββββββββ‘ β 448220 β example β 7101936801 β false β true β false β β 518398 β 99999 β 9999900091 β false β false β false β β 557232 β 424570 β 4245742060 β false β false β false β ββββββββββ΄ββββββββββ΄βββββββββββββ΄ββββββββββββββ΄ββββββββββββββ΄ββββββββββββββ | 3 | 2 |
78,719,135 | 2024-7-8 | https://stackoverflow.com/questions/78719135/how-can-i-create-a-polars-struct-while-eval-ing-a-list | I am trying to create a Polars DataFrame that includes a column of structs based on another DataFrame column. Here's the setup: import polars as pl df = pl.DataFrame( [ pl.Series("start", ["2023-01-01"], dtype=pl.Date).str.to_date(), pl.Series("end", ["2024-01-01"], dtype=pl.Date).str.to_date(), ] ) shape: (1, 2) ββββββββββββββ¬βββββββββββββ β start β end β β --- β --- β β date β date β ββββββββββββββͺβββββββββββββ‘ β 2023-01-01 β 2024-01-01 β ββββββββββββββ΄βββββββββββββ df = df.with_columns( pl.date_range(pl.col("start"), pl.col("end"), "1mo", closed="left") .implode() .alias("date_range") ) shape: (1, 3) ββββββββββββββ¬βββββββββββββ¬ββββββββββββββββββββββββββββββββββ β start β end β date_range β β --- β --- β --- β β date β date β list[date] β ββββββββββββββͺβββββββββββββͺββββββββββββββββββββββββββββββββββ‘ β 2023-01-01 β 2024-01-01 β [2023-01-01, 2023-02-01, β¦ 202β¦ β ββββββββββββββ΄βββββββββββββ΄ββββββββββββββββββββββββββββββββββ Now, I want to make a struct out of the year/month parts: df = df.with_columns( pl.col("date_range") .list.eval( pl.struct( { "year": pl.element().dt.year(), "month": pl.element().dt.month(), } ) ) .alias("years_months") ) But this does not work. Maybe I ought not to implode the date_range's output into a list, but I am not sure how to create a struct directly from its result either. My best idea is one I don't like because I have to repeatedly call pl.list.eval: df = ( df.with_columns( pl.col("date_range").list.eval(pl.element().dt.year()).alias("year"), pl.col("date_range").list.eval(pl.element().dt.month()).alias("month"), ) .drop("start", "end", "date_range") .explode("year", "month") .select(pl.struct("year", "month")) ) df The other idea is to use map_elements, but I think that ought to be something of a last resort. What's the idiomatic way to eval into a struct? | You shouldn't be passing a dictionary into the struct constructor. Just pass each IntoExpr as a keyword argument - that is, pl.struct(key1=IntoExprA, key2=IntoExprB, ...). In your case, you could replace your struct-ification code with the following: df = df.with_columns( pl.col("date_range") .list.eval( pl.struct( year = pl.element().dt.year(), month = pl.element().dt.month() ) ) .alias("years_months") ) Printing df gave me this output: ββββββββββββββ¬βββββββββββββ¬ββββββββββββββββββββββββββββββββββ¬ββββββββββββββββββββββββββββββββββ β start β end β date_range β years_months β β --- β --- β --- β --- β β date β date β list[date] β list[struct[2]] β ββββββββββββββͺβββββββββββββͺββββββββββββββββββββββββββββββββββͺββββββββββββββββββββββββββββββββββ‘ β 2023-01-01 β 2024-01-01 β [2023-01-01, 2023-02-01, β¦ 202β¦ β [{2023,1}, {2023,2}, β¦ {2023,1β¦ β ββββββββββββββ΄βββββββββββββ΄ββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββ If your dictionary was constructed earlier in the program, you can use unpacking to get the kwargs back out: df = df.with_columns( pl.col("date_range") .list.eval( pl.struct(**example_dict) ) .alias("years_months") ) | 3 | 3 |
78,727,228 | 2024-7-9 | https://stackoverflow.com/questions/78727228/replace-removed-function-pyarray-getcastfunc-in-numpy-2 | I'm migrating some python C extension to numpy 2. The extension basically gets a list of 2D numpy arrays and generates a new 2D array by combining them (average, median, etc,). The difficulty is that the input and output arrays are byteswapped. I cannot byteswap the input arrays to machine order (they are too many to fit in memory). So I read an element of each input array, bitswap them to machine order, then cast them to a list of doubles, perform my operation on the list to obtain a double cast the double to the dtype of the output array bitswap again and write the result in the output array To achieve this (using numpy 1.x C-API) I was using something like: PyArray_Descr* descr_in = PyArray_DESCR((PyArrayObject*)input_frame_1); PyArray_CopySwapFunc* swap_in = descr_in->f->copyswap; PyArray_VectorUnaryFunc* cast_in = PyArray_GetCastFunc(descr_in, NPY_DOUBLE); bool need_to_swap_in = PyArray_ISBYTESWAPPED((PyArrayObject*)input_frame_1); And something slightly different but similar for the output. I use the function swap_in to read a value from the input array, bitswap it and write it into a buffer and then cast_in to cast the contents of the buffer into a double. In numpy 2, the copyswap function is still accesible with a different syntax: PyArray_CopySwapFunc* swap_in = PyDataType_GetArrFuncs(descr_in)->copyswap; But the cast function is not. Although the member is still in the struct, most of its values are NULL. So this doesn't work: PyArray_VectorUnaryFunc* cast_in = PyDataType_GetArrFuncs(descr_in)->cast[NPY_DOUBLE]; The documentation says PyArray_GetCastFunc is removed. Note that custom legacy user dtypes can still provide a castfunc as their implementation, but any access to them is now removed. The reason for this is that NumPy never used these internally for many years. If you use simple numeric types, please just use C casts directly. In case you require an alternative, please let us know so we can create new API such as PyArray_CastBuffer() which could use old or new cast functions depending on the NumPy version. So the function has been removed, but there isn't a clear path to subtitute it with something else. What is the correct way of read and write values from/to bitswapped arrays? More detailed sample code. It just iterates over the input and saves the value in a double. double d_val = 0; char buffer[NPY_BUFSIZE]; PyObject* input_frame_1; // input_frame_1 is initialized over here // Conversion PyArray_Descr* descr_in = PyArray_DESCR((PyArrayObject*)input_frame_1); PyArray_CopySwapFunc* swap_in = descr_in->f->copyswap; PyArray_VectorUnaryFunc* cast_in = PyArray_GetCastFunc(descr_in, NPY_DOUBLE); bool need_to_swap_in = PyArray_ISBYTESWAPPED((PyArrayObject*)input_frame_1); // Iterator PyArrayIterObject* iter = PyArray_IterNew(input_frame_1); // Just reads the value and casts it into a double d_val while (iter->index < iter->size) { d_val = 0; // Swap the value if needed and store it in the buffer swap_in(buffer, iter->dataptr, need_to_swap_in, NULL); cast_in(buffer, &d_val, 1, NULL, NULL); /* Code to advance iter comes here */ } | I have found a solution for my problem using NpyIter iterators. This type of iterators can be commanded to take care of the buffering and casting that I was doing manually previously. 
So my example would be something like: PyObject* input_frame_1; // input_frame_1 is initialized over here /* This var will contain the output */ PyObjecj* out_res = NULL; /* required to create the iterator */ PyArray_Descr* dtype_res = NULL; npy_uint32 op_flags[2]; PyArray_Descr*> op_dtypes[2]; PyObject* ops[2]; NpyIter *iter = NULL; NpyIter_IterNextFunc *iternext; char** dataptr; /* I have an input array, the output array will be automatically allocated. The input array will be casted into double, and the output array will be double also */ ops[0] = input_frame_1; /* input operand */ ops[1] = NULL; /* output operand will be allocated */ op_flags[0] = NPY_ITER_READONLY | NPY_ITER_NBO; op_flags[1] = NPY_ITER_WRITEONLY | NPY_ITER_ALLOCATE | NPY_ITER_NBO| NPY_ITER_ALIGNED; dtype_res = PyArray_DescrFromType(NPY_DOUBLE); op_dtypes[0] = dtype_res; /* input is converted to double */ op_dtypes[1] = dtype_res; /* output is allocated as double */ iter = NpyIter_MultiNew(2, ops, NPY_ITER_BUFFERED, /* this must be enabled to allow bitswapping and casting */ NPY_KEEPORDER, NPY_UNSAFE_CASTING, op_flags, op_dtypes); Py_DECREF(dtype_res); dtype_res = NULL; if (iter == NULL) { return NULL; /* you will get and error if arrays are not compatible */ } /* Specific methods to advance the loop and get the data */ iternext = NpyIter_GetIterNext(iter, NULL); dataptr = NpyIter_GetDataPtrArray(iter); do { double *dbl_ptr; double value; /* Now dataptr contains correctly formated data */ /* the input */ dbl_ptr = (double*) dataptr[0]; value = *dbl_ptr; /* lets say our operation is b = 2 * a + 1 */ value = 2 * value + 1; /* and the output, stored in the other pointer */ memcpy(dataptr[1], &value, sizeof(double)); } while(iternext(iter)); /* The output array cab be recovered with */ out_res = NpyIter_GetOperandArray(iter)[1]; NpyIter_Deallocate(iter); There are lots of different flags, per operand and per loop. For example, you can use NPY_ITER_COPY instead of NPY_ITER_BUFFERED, or different rules for casting (or no casting), disallow broadcasting, get larger chunks of data for external loops, etc. Full documentation is here: https://numpy.org/doc/stable/reference/c-api/iterator.html Update: Altough this solution works well, NpyIter_MultiNew is limited to have NPY_MAXARGS inputs, being 32/64 in numpy 1/2. If you (as it's my case) need more inputs, this solution is not enough. | 2 | 0 |
78,704,322 | 2024-7-3 | https://stackoverflow.com/questions/78704322/summing-values-based-on-date-ranges-in-a-dataframe-using-polars | I have a DataFrame (df) that contains columns: ID, Initial Date, Final Date, and Value, and another DataFrame (dates) that contains all the days for each ID from df. On the dates dataframe i want to sum the values if exist on the range of each ID Here is my code import polars as pl from datetime import datetime data = { "ID" : [1, 2, 3, 4, 5], "Initial Date" : ["2022-01-01", "2022-01-02", "2022-01-03", "2022-01-04", "2022-01-05"], "Final Date" : ["2022-01-03", "2022-01-06", "2022-01-07", "2022-01-09", "2022-01-07"], "Value" : [10, 20, 30, 40, 50] } df = pl.DataFrame(data) dates = pl.datetime_range( start=datetime(2022,1,1), end=datetime(2022,1,7), interval="1d", eager = True, closed = "both" ).to_frame("date") shape: (5, 4) βββββββ¬βββββββββββββββ¬βββββββββββββ¬ββββββββ β ID β Initial Date β Final Date β Value β β --- β --- β --- β --- β β i64 β str β str β i64 β βββββββͺβββββββββββββββͺβββββββββββββͺββββββββ‘ β 1 β 2022-01-01 β 2022-01-03 β 10 β β 2 β 2022-01-02 β 2022-01-06 β 20 β β 3 β 2022-01-03 β 2022-01-07 β 30 β β 4 β 2022-01-04 β 2022-01-09 β 40 β β 5 β 2022-01-05 β 2022-01-07 β 50 β βββββββ΄βββββββββββββββ΄βββββββββββββ΄ββββββββ shape: (7, 1) βββββββββββββββββββββββ β date β β --- β β datetime[ΞΌs] β βββββββββββββββββββββββ‘ β 2022-01-01 00:00:00 β β 2022-01-02 00:00:00 β β 2022-01-03 00:00:00 β β 2022-01-04 00:00:00 β β 2022-01-05 00:00:00 β β 2022-01-06 00:00:00 β β 2022-01-07 00:00:00 β βββββββββββββββββββββββ In this case, on 2022-01-01 the value would be 10. On 2022-01-02, it would be 10 + 20, and on 2022-01-03, it would be 10 + 20 + 30, and so on. In other words, I want to check if the date exists within the range of each row in the DataFrame (df), and if it does, sum the values. I think the aproach for this is like this: ( dates.with_columns( pl.sum( pl.when( (df["Initial Date"] <= pl.col("date")) & (df["Final Date"] >= pl.col("date")) ).then(df["Value"]).otherwise(0) ).alias("Summed Value") ) ) | update join_where() was added in Polars 1.7.0 ( dates.join_where( df, pl.col("date") >= pl.col("Initial Date"), pl.col("date") <= pl.col("Final Date"), ).group_by("date") .agg(pl.col("Value").sum()) ) βββββββββββββββββββββββ¬ββββββββ β date β Value β β --- β --- β β datetime[ΞΌs] β i64 β βββββββββββββββββββββββͺββββββββ‘ β 2022-01-01 00:00:00 β 10 β β 2022-01-02 00:00:00 β 30 β β 2022-01-03 00:00:00 β 60 β β 2022-01-04 00:00:00 β 90 β β 2022-01-05 00:00:00 β 140 β β 2022-01-06 00:00:00 β 140 β β 2022-01-07 00:00:00 β 120 β βββββββββββββββββββββββ΄ββββββββ previous If you just want to know sum of values on each date within ranges in df, you don't even need dates dataframe. date_ranges() to create column with date ranges based on initial and final date. explode() to convert date ranges into rows. group_by() and agg() to sum the values. 
( df .with_columns(date = pl.date_ranges("Initial Date", "Final Date")) .explode("date") .group_by("date", maintain_order = True) .agg(pl.col.Value.sum()) ) ββββββββββββββ¬ββββββββ β date β Value β β --- β --- β β date β i64 β ββββββββββββββͺββββββββ‘ β 2022-01-01 β 10 β β 2022-01-02 β 30 β β 2022-01-03 β 60 β β 2022-01-04 β 90 β β 2022-01-05 β 140 β β 2022-01-06 β 140 β β 2022-01-07 β 120 β β 2022-01-08 β 40 β β 2022-01-09 β 40 β ββββββββββββββ΄ββββββββ If you really want to use dates then you can join() result on dates dataframe: ( df .with_columns(date = pl.date_ranges("Initial Date", "Final Date")) .explode("date") .group_by("date", maintain_order = True) .agg(pl.col.Value.sum()) .join(dates, on="date", how="semi") ) ββββββββββββββ¬ββββββββ β date β Value β β --- β --- β β date β i64 β ββββββββββββββͺββββββββ‘ β 2022-01-01 β 10 β β 2022-01-02 β 30 β β 2022-01-03 β 60 β β 2022-01-04 β 90 β β 2022-01-05 β 140 β β 2022-01-06 β 140 β β 2022-01-07 β 120 β ββββββββββββββ΄ββββββββ Or just filter() the result: ( df .with_columns(date = pl.date_ranges("Initial Date", "Final Date")) .explode("date") .group_by("date", maintain_order = True) .agg(pl.col.Value.sum()) .filter(pl.col.date.is_between(datetime(2022,1,1), datetime(2022,1,7))) ) ββββββββββββββ¬ββββββββ β date β Value β β --- β --- β β date β i64 β ββββββββββββββͺββββββββ‘ β 2022-01-01 β 10 β β 2022-01-02 β 30 β β 2022-01-03 β 60 β β 2022-01-04 β 90 β β 2022-01-05 β 140 β β 2022-01-06 β 140 β β 2022-01-07 β 120 β ββββββββββββββ΄ββββββββ Alternative solution would be to use join on inequality, but polars is not great on it (yet). But in this case you can use DuckDB integration with Polars. duckdb.sql(""" select d.date, sum(df.value) as value from df inner join dates as d on d.date between df."Initial Date" and df."Final Date" group by d.date order by d.date """).pl() βββββββββββββββββββββββ¬ββββββββββββββββ β date β value β β --- β --- β β datetime[ΞΌs] β decimal[38,0] β βββββββββββββββββββββββͺββββββββββββββββ‘ β 2022-01-01 00:00:00 β 10 β β 2022-01-02 00:00:00 β 30 β β 2022-01-03 00:00:00 β 60 β β 2022-01-04 00:00:00 β 90 β β 2022-01-05 00:00:00 β 140 β β 2022-01-06 00:00:00 β 140 β β 2022-01-07 00:00:00 β 120 β βββββββββββββββββββββββ΄ββββββββββββββββ | 6 | 5 |
78,721,195 | 2024-7-8 | https://stackoverflow.com/questions/78721195/attributeerror-np-string-was-removed-in-the-numpy-2-0-release-use-np-bytes | I m interested in seeing neural network as graph using tensorboard. I have constructed a network in pytorch with following code- import torch BATCH_SIZE = 16 DIM_IN = 1000 HIDDEN_SIZE = 100 DIM_OUT = 10 class TinyModel(torch.nn.Module): def __init__(self): super(TinyModel, self).__init__() self.layer1 = torch.nn.Linear(DIM_IN, HIDDEN_SIZE) self.relu = torch.nn.ReLU() self.layer2 = torch.nn.Linear(HIDDEN_SIZE, DIM_OUT) def forward(self, x): x = self.layer1(x) x = self.relu(x) x = self.layer2(x) return x some_input = torch.randn(BATCH_SIZE, DIM_IN, requires_grad=False) ideal_output = torch.randn(BATCH_SIZE, DIM_OUT, requires_grad=False) model = TinyModel() Setting-up tensorboard from torch.utils.tensorboard import SummaryWriter # Create a SummaryWriter writer = SummaryWriter("checkpoint") # Add the graph to TensorBoard writer.add_graph(model, some_input) writer.close() While I run tensorboard --logdir=checkpoint on terminal , I receive the following error - Traceback (most recent call last): File "/home/k/python_venv/bin/tensorboard", line 5, in <module> from tensorboard.main import run_main File "/home/k/python_venv/lib/python3.10/site-packages/tensorboard/main.py", line 27, in <module> from tensorboard import default File "/home/k/python_venv/lib/python3.10/site-packages/tensorboard/default.py", line 39, in <module> from tensorboard.plugins.hparams import hparams_plugin File "/home/k/python_venv/lib/python3.10/site-packages/tensorboard/plugins/hparams/hparams_plugin.py", line 30, in <module> from tensorboard.plugins.hparams import backend_context File "/home/k/python_venv/lib/python3.10/site-packages/tensorboard/plugins/hparams/backend_context.py", line 26, in <module> from tensorboard.plugins.hparams import metadata File "/home/k/python_venv/lib/python3.10/site-packages/tensorboard/plugins/hparams/metadata.py", line 32, in <module> NULL_TENSOR = tensor_util.make_tensor_proto( File "/home/k/python_venv/lib/python3.10/site-packages/tensorboard/util/tensor_util.py", line 405, in make_tensor_proto numpy_dtype = dtypes.as_dtype(nparray.dtype) File "/home/k/python_venv/lib/python3.10/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py", line 677, in as_dtype if type_value.type == np.string_ or type_value.type == np.unicode_: File "/home/k/python_venv/lib/python3.10/site-packages/numpy/__init__.py", line 397, in __getattr__ raise AttributeError( AttributeError: `np.string_` was removed in the NumPy 2.0 release. Use `np.bytes_` instead.. Did you mean: 'strings'? Probably the issue will be fixed in future releases, but is there a fix for now? | Your code seems to be written in numpy < 2.0 compatible way, but it looks like you are running it with numpy >=2. Try downgrading the numpy version to < 2.0. | 6 | 4 |
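A minimal way to apply the suggested downgrade in a pip-managed environment (an assumption — adjust for conda or other package managers) is to pin the version with pip install "numpy<2" and restart the interpreter. If a later tensorboard release adds NumPy 2 compatibility, upgrading tensorboard instead would be the cleaner long-term fix.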
78,715,358 | 2024-7-6 | https://stackoverflow.com/questions/78715358/how-to-assign-value-to-a-zero-dimensional-torch-tensor | z = torch.tensor(1, dtype= torch.int64) z[:] = 5 Traceback (most recent call last): File "<stdin>", line 1, in <module> IndexError: slice() cannot be applied to a 0-dim tensor. I'm trying to assign a value to a torch tensor, but because it has zero dimensions the slice operator doesn't work. How do I assign a new value then? | This is actually possible with copy_: z = torch.tensor(1, dtype=torch.int32) z.copy_(3.) >>> tensor(3, dtype=torch.int32) This will also cast the assigned value to the dtype of the variable. | 4 | 1 |
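Another in-place option that works on a zero-dimensional tensor, and keeps the original dtype, is fill_; a sketch:

import torch

z = torch.tensor(1, dtype=torch.int64)
z.fill_(5)    # in-place, no indexing needed
print(z)      # tensor(5)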
78,726,132 | 2024-7-9 | https://stackoverflow.com/questions/78726132/how-to-solve-importerror-dlopen-xxx-so-0x0002-symbol-not-found-in-flat | When I run a function in a python package, it returns: import gala.potential as gp Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/homebrew/lib/python3.11/site-packages/gala/potential/__init__.py", line 1, in <module> from .potential import * File "/opt/homebrew/lib/python3.11/site-packages/gala/potential/potential/__init__.py", line 2, in <module> from .cpotential import * ImportError: dlopen(/opt/homebrew/lib/python3.11/site-packages/gala/potential/potential/cpotential.cpython-311-darwin.so, 0x0002): symbol not found in flat namespace '_gsl_sf_gamma' As the error message contains "_gsl_sf_gamma", I suppose the problem is coming from the GSL library of C/C++. I am using a Mac with an M2 chip. I have installed GSL via Homebrew, so the path of GSL is /opt/homebrew/Cellar/gsl/2.7.1. Moreover, I have also installed gcc via Homebrew. However, the default gcc is installed from Xcode, which uses clang and its path is /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin I suppose the problem comes from the inconsistency between the GSL and the gcc. I have tried to reinstall GSL through conda (without uninstalling the previous GSL), but it doesn't work. | OK, so this is a linking problem. The library you are trying to import tries to call a function _gsl_sf_gamma (probably from the GSL library) but cannot find it. The library you are trying to import is programmed in a compiled language (most likely C or C++). In these languages, libraries work a bit differently from how they work in Python. There are two ways to get functions or other values referred to by names (symbols) from a library. Static linking bundles the function from the library in the binary calling it. Dynamic linking loads the library on-demand, like Python does with its import statement. It is more complicated because the binary calling the function needs to find the library somewhere on the system. So basically something was not properly linked: cpotential.cpython-311-darwin.so probably expects the GSL library to already be dynamically loaded. Now the conda environment changes a bunch of things, including where programs are going to look for dynamically linked libraries. It probably loaded a different version of cpotential.cpython-311-darwin.so which loaded GSL correctly from conda's libraries. | 3 | 1 |
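To confirm the diagnosis on macOS, the extension's dynamic-library dependencies can be listed with otool -L /opt/homebrew/lib/python3.11/site-packages/gala/potential/potential/cpotential.cpython-311-darwin.so. If no libgsl entry appears, or it points at a path that no longer exists, the module was built without resolving GSL correctly; reinstalling the package so that it is built against the GSL that is actually on the machine (or installing both from the same conda environment) is the usual way out.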
78,706,017 | 2024-7-4 | https://stackoverflow.com/questions/78706017/change-style-of-datatable-in-shiny-for-python | I would like to set a background color in a Shiny for Python DataTable from palmerpenguins import load_penguins from shiny import render from shiny.express import ui penguins = load_penguins() ui.h2("Palmer Penguins") @render.data_frame def penguins_df(): return render.DataTable( penguins, summary="Viendo filas {start} a {end} de {total}", ) Using a similar question for RShiny I tried to add a style keyword argument to render.DataTable but got unexpected keyword argument error. How can I specify styling arguments when rendering a DataTable ? | Updated answer, Shiny >= 1.0.0 Shiny now supports basic cell styling. From the Release Notes: @render.data_frame's render.DataGrid and render.DataTable added support for cell styling with the new styles= parameter. This parameter can receive a style info object (or a list of style info objects), or a function that accepts a data frame and returns a list of style info objects. Each style info object can contain the rows and cols locations where the inline style and/or CSS class should be applied. If you e.g. want to set the background-color of the cells at positions [4, 3] and [10, 3] to yellow then such a style info object could look like this: { "location": "body", "rows": [4, 10], "cols": [3], "style": { "background-color": "yellow", }, } Note: Currently it is not possible to set "location" to something else than "body", so if you want to style the header, then you might need another approach, see e.g. my original answer for Shiny < 1.0.0 below for an example. We might expect more changes in a future release, see for example posit-dev/py-shiny#1472. Below is an example which yields the following result: from palmerpenguins import load_penguins from shiny import render from shiny.express import ui penguins = load_penguins() df_styles = [ { "location": "body", "style": { "background-color": "lightblue", "border": "0.5px solid black" }, }, { "location": "body", "rows": [4, 5, 10], "cols": [3], "style": { "background-color": "yellow", }, }, { "location": "body", "rows": [2, 8, 9], "cols": [2, 4], "style": { "background-color": "yellow", }, }, { "location": "body", "rows": [2], "cols": [2, 4], "style": { "width": "100px", "height": "75px", }, }, ] ui.h2("Palmer Penguins") @render.data_frame def penguins_df(): return render.DataTable( penguins, selection_mode=("rows"), editable=True, summary="Show entries {start} to {end} from {total}", styles=df_styles ) Original answer, Shiny < 1.0.0 The approach in the linked R question can't be used in a similar fashion in your example. When using render.DataTable one can make use of the fact that the generated table like Shiny in general provides Bootstrap by default. So in order to change the style we can edit CSS variables (also see the Bootstrap docs on CSS variables). In your example you would like to change the background color of the table, therefore we can apply this CSS as an example: .table thead tr { --bs-table-bg: #007bc2; --bs-table-color-state: white; } .table tbody tr{ --bs-table-bg: lightblue; } The variable --bs-table-bg defines the background color in the table which I separately defined here for the header and the body. --bs-table-color-state: white provides a white font in the header. You can provide the CSS via ui.tags.style. Also have a look at the Bootstrap docs on tables for more details and styling possibilities of tables. 
In the below example it looks like this: from palmerpenguins import load_penguins from shiny import render from shiny.express import ui from htmltools import Tag penguins = load_penguins() ui.h2("Palmer Penguins") ui.tags.style(""" .table thead tr { --bs-table-bg: #007bc2; --bs-table-color-state: white; } .table tbody tr{ --bs-table-bg: lightblue; } """) @render.data_frame def penguins_df(): return render.DataTable( penguins, summary="Viendo filas {start} a {end} de {total}" ) | 2 | 2 |
78,718,554 | 2024-7-7 | https://stackoverflow.com/questions/78718554/training-a-custom-feature-extractor-in-stable-baselines3-starting-from-pre-train | I am using the following custom feature extractor for my StableBaselines3 model: import torch.nn as nn from stable_baselines3 import PPO class Encoder(nn.Module): def __init__(self, input_dim, embedding_dim, hidden_dim, output_dim=2): super(Encoder, self).__init__() self.encoder = nn.Sequential( nn.Linear(input_dim, embedding_dim), nn.ReLU() ) self.regressor = nn.Sequential( nn.Linear(embedding_dim, hidden_dim), nn.ReLU(), ) def forward(self, x): x = self.encoder(x) x = self.regressor(x) return x model = Encoder(input_dim, embedding_dim, hidden_dim) model.load_state_dict(torch.load('trained_model.pth')) # Freeze all layers for param in model.parameters(): param.requires_grad = False class CustomFeatureExtractor(BaseFeaturesExtractor): def __init__(self, observation_space, features_dim): super(CustomFeatureExtractor, self).__init__(observation_space, features_dim) self.model = model # Use the pre-trained model as the feature extractor self._features_dim = features_dim def forward(self, observations): features = self.model(observations) return features policy_kwargs = { "features_extractor_class": CustomFeatureExtractor, "features_extractor_kwargs": {"features_dim": 64} } model = PPO("MlpPolicy", env=envs, policy_kwargs=policy_kwargs) The model is trained well so far with no issues and good results. Now I want to not freeze the weights, and try to train the Feature Extractor as well starting from the initial pre-trained weight. How can I do that with such a custom Feature Extractor defined as a class inside another class? My feature extractor is not the same as the one defined in the documentation, so I am not sure if it will be trained. Or will it start training if I unfreeze the layers? | UPDATED answer Because your CustomFE imports already freezer Encoder (with requires_grad = False) you have that kind of situation where all weights of CustomFE are frozen. Thus by default CustomFE is not trainable. You will need to unfreeze it manually: model = PPO("MlpPolicy", env='FrozenLake8x8', policy_kwargs=policy_kwargs) # get model feature extractor feature_extr: CustomFeatureExtractor = model.policy.features_extractor # convert all parameters to trainable for name, param in feature_extr.named_parameters(): param.requires_grad = True # check parameters before training encoder = feature_extr.model.encoder for name, param in encoder[0].named_parameters(): print(name, param.mean()) # train the model model.learn(total_timesteps = 5) # check parameters after training (if mean changed parameters are training) feature_extr: CustomFeatureExtractor = model.policy.features_extractor encoder = feature_extr.model.encoder for name, param in encoder[0].named_parameters(): print(name, param.mean()) | 2 | 1 |
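A quick sanity check that the unfreezing took effect is to count trainable parameters in the policy's feature extractor (names as in the answer above):
extractor = model.policy.features_extractor
trainable = sum(p.numel() for p in extractor.parameters() if p.requires_grad)
total = sum(p.numel() for p in extractor.parameters())
print(f"feature extractor: {trainable}/{total} parameters trainable")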
78,719,729 | 2024-7-8 | https://stackoverflow.com/questions/78719729/multivariate-normal-distribution-using-python-scipy-stats-and-integrate-nquad | Let be independent normal random variables with means and unit variances, i.e. I would like to compute the probability Is there any easy way to compute the above probability using scipy.stats.multivariate_normal? If not, how do we do it using scipy.integrate? | Based on the info from one of SO post mentioned in the comments: https://math.stackexchange.com/questions/270745/compute-probability-of-a-particular-ordering-of-normal-random-variables Also same method mentioned on wiki here: https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Affine_transformation You should convert F(X) to F(Y) in the following way: import numpy as np from scipy.stats import norm, multivariate_normal mvn = multivariate_normal def compute_probability(thetas): """ following: https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Affine_transformation F(X1<X2<X3<...<Xn) = F(Y1<0, Y2<0, ... Y_{n-1}<0) where: Y_i = X_{i-1} - X_{i} ... or what is the same: Y = c + BX mean and sigma of Y can be found based on formulas from wikipedia """ n = len(thetas) # set diagonal to 1 B = np.eye(n) # set right to diagonal to -1 idx = np.arange(n-1) B[idx,idx+1] = -1 # remove last row B = B[:-1] # calculate multivate sigma and mu sigma = np.eye(n) mu_new = B.dot(thetas.T) sigma_new = B.dot(sigma).dot(B.T) MVN = mvn(mean=mu_new, cov = sigma_new) x0 = np.zeros(shape = n-1) p = MVN.cdf(x = x0) return p # Example usage: theta = np.array([1, -2, 0.5]) # Example coefficient p = compute_probability(theta) print(p) Outputs: theta = (0,0) p = 0.5 theta = (0,0,0) p = 0.1666 theta = (100, -100) p = 0 theta = (-100, 100) p = 1 theta = (0,0,0,100, -100) p = 0 | 4 | 7 |
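A Monte Carlo cross-check of the closed-form result is cheap and only needs numpy; it should agree with compute_probability to a few decimal places:
import numpy as np
rng = np.random.default_rng(0)
thetas = np.array([1, -2, 0.5])
X = rng.normal(loc=thetas, scale=1.0, size=(1_000_000, len(thetas)))
p_mc = np.mean(np.all(np.diff(X, axis=1) > 0, axis=1))  # P(X1 < X2 < ... < Xn)
print(p_mc)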
78,716,824 | 2024-7-7 | https://stackoverflow.com/questions/78716824/what-is-the-internal-implementation-of-copy-deepcopy-in-python-and-how-to-ov | When reading Antony Hatchkins' answer to "How to override the copy/deepcopy operations for a Python object?", I am confused about why his implementation of __deepcopy()__ does not check memo first for whether the current object is already copied before copying the current object. This is also pointed out in the comment by AntonΓn Hoskovec. Jonathan H's comment also addressed this issue and mentioned that copy.deepcopy() appears to abort the call to __deepcopy()__ if an object has already been copied before. However, he does not point out clearly where this is done in the code of copy module. To illustrate the issue with not checking memo, suppose object a references b and c, and both objects b and c references object d. During a deepcopy of a, object d should be only copied once during the copy of b or c, whichever comes first. Essentially, I am asking the rationale for why Antony Hatchkins' answer does not do the following: from copy import deepcopy class A: def __deepcopy__(self, memo): # Why not add the following two lines? if id(self) in memo: return memo[id(self)] cls = self.__class__ result = cls.__new__(cls) memo[id(self)] = result for k, v in self.__dict__.items(): setattr(result, k, deepcopy(v, memo)) return result Therefore, it would be great if someone can explain the internal implementation of deepcopy() in the copy module both to demonstrate the best practice for overriding __deepcopy__ and also just to let me know what is happening under the hood. I took a brief look at the source code for copy.deepcopy() but was confused by things like copier, reductor, and _reconstruct(). I read answers like deepcopy override clarification and In Python, how can I call copy.deepcopy in my implementation of deepcopy()? but none of them gave a comprehensive answer and rationale. | The (reference) implementation for copy.deepcopy is here As you can see, the firsts thing that function does is check for the instance in the memo, so no need to check in your own implementation. Here is a breakdown of how that function works: deepcopy(x, memo=None) checks if x is in the memo. If it is, return the value associated to it. tries to work out the copying method, by, in that order looking for it in the _deepcopy_dispatch dictionary checking if x has a __deepcopy__ method, and using that checking if it can be reduced (see here). Ie if it can be pickled. If that is the case, it basically runs that, copies the reduced object, and then unpickles it. runs the found method to create a copy registers that copy in the memo. (I am ellipsing over some details, read the code if you are interesting in them) So to answer your questions (and others you may have): Q: what happens when you override __deepcopy__ A: It is called at step 3, instead of the default (unless there was a method in the _deepcopy_dispatch dictionary, but that dictionary should only contain methods for basic types) Q: when does the recursivity happen A: It happens when your __deepcopy__ function is called. 
This one should recursively call deepcopy with the same memo dictionary. Q: Why does Antony Hatchkins' implementation register the instance in memo if the deepcopy function also does it (step 4)? A: Because deepcopy registers the object in memo at the very end, whereas to avoid infinite recursion you need to register it before doing the recursive calls. Note: For a simpler way to allow your custom classes to be copied, you can also implement the __getstate__ and __setstate__ methods and rely on the fact that deepcopy falls back on pickling methods. | 3 | 3 |
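A small check of the shared-reference ("diamond") case from the question shows why the explicit memo lookup is unnecessary: copy.deepcopy consults the memo before it ever calls __deepcopy__, so the shared object is copied exactly once.
import copy

class A:
    def __deepcopy__(self, memo):
        cls = self.__class__
        result = cls.__new__(cls)
        memo[id(self)] = result
        for k, v in self.__dict__.items():
            setattr(result, k, copy.deepcopy(v, memo))
        return result

d, b, c, a = A(), A(), A(), A()
b.d = d
c.d = d
a.b, a.c = b, c

a2 = copy.deepcopy(a)
print(a2.b.d is a2.c.d)  # True: the shared d was copied only once
print(a2.b.d is d)       # False: it is a fresh copy, not the original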
78,717,770 | 2024-7-7 | https://stackoverflow.com/questions/78717770/pandas-read-xml-file-with-designated-data-type | My code: df = pd.read_xml( path_or_buffer=PATH, xpath="//Data", compression="gzip" ) I'm using Pandas read_xml() function to read xml.gz format data. I'm using Pandas 1.3.2 version. When I tried to read the data, Pandas read data wrongly. The data looks like as below. Both colA and colB should be a string. 1st data file: <Data> <colA>abc</colA> <colB>168E3</colB> </Data> <Data> <colA>def</colA> </Data> 2nd data file: <Data> <colA>ghi</colA> <colB>23456</colB> </Data> <Data> <colA>jkl</colA> </Data> When I use read_xml() function, it looks like below: 1st dataframe: colA: abc, def colB: 168000.0, None 2nd dataframe: colA: ghi, jkl colB: 23456.0, None I want to read the data in string format but there is no dtype argument in pandas 1.3.2. I want to know: How can I read the data with designated data type? When there is missing data in a column, Pandas will assign the float type to that column. How to avoid it, or is there any setting to configure the data type of column with missing value when data is read? Please note that I can only this Pandas version and can't update it. | Applying a simple xslt transformation to force string type on every value could work (I don't have pandas 1.3.2 to test but works on pandas 2.x). It does not seem possible to obtain a canonical answer given the version contraint. An alternative could be to parse xml doc with lxml and manually populate the dataframe. The transformation concatenates a non number character that is later removed XSLT <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="node()"> <xsl:copy> <xsl:apply-templates select="node()"/> <xsl:value-of select="'|'"/> </xsl:copy> </xsl:template> </xsl:stylesheet> Python import pandas as pd from io import StringIO xml_ ="""\ <root> <Data> <colA>abc</colA> <colB>168E3</colB> </Data> <Data> <colA>def</colA> </Data> </root>""" file = StringIO(xml_) df = pd.read_xml(xml_, stylesheet="/home/lmc/tmp/test.xslt" ) print(df.apply(lambda x: x.str[:-1])) Result colA colB 0 abc 168E3 1 def None Original result colA colB 0 abc| 168E3| 1 def| None | 2 | 5 |
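The lxml alternative mentioned in the answer, sketched for this structure (field names taken from the example; adjust the root/row tags to the real files); it bypasses read_xml's type inference entirely:
import gzip
import pandas as pd
from lxml import etree

with gzip.open(PATH, "rb") as f:       # PATH as in the original snippet
    tree = etree.parse(f)

rows = [{child.tag: child.text for child in data} for data in tree.iter("Data")]
df = pd.DataFrame(rows, dtype="string")  # keeps "168E3" and "23456" as text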
78,727,839 | 2024-7-9 | https://stackoverflow.com/questions/78727839/create-config-file-for-python-script-with-variables | I have a config file for a python script that stores multiple different values my config.ini looks like this: [DHR] key1 = "\\path1\..." key2 = "\\path2\..." key3 = "\\path3\file-{today}.xlsx" my .py has a date variable that gets today's date like: today = str(date.today().strftime('%y%m%d')) However when I read the .ini file the variable does not get appended to the value as I expected. print(config.read("config.ini")) "\\path3\file-{today}.xlsx" How can I adjust my script to append the variable to the path so that it looks like this: "\\path3\file-240709.xlsx" | You could provide a filtered locals() dictionary to the parser to interpolate variables. However, the syntax of the ini needs to be changed to configparser's basic interpolation syntax: [DHR] key3 = "\\path3\file-%(today)s.xlsx" from configparser import ConfigParser from datetime import date today = str(date.today().strftime('%y%m%d')) # instantiate parser with local variables, filtered for string-only values parser = ConfigParser({k: v for k, v in locals().items() if isinstance(v, str)}) parser.read('config.ini') print(parser["DHR"]["key3"]) # => "\\path3\file-240710.xlsx" | 4 | 1 |
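If you would rather keep the {today} placeholder syntax from the original config.ini instead of switching to %(today)s, another option is to read the raw value and format it yourself:
from configparser import ConfigParser
from datetime import date

parser = ConfigParser(interpolation=None)  # leave % characters in paths alone
parser.read("config.ini")

today = date.today().strftime("%y%m%d")
key3 = parser["DHR"]["key3"].format(today=today)
print(key3)  # e.g. "\\path3\file-240709.xlsx"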
78,722,890 | 2024-7-8 | https://stackoverflow.com/questions/78722890/where-can-i-find-an-exhaustive-list-of-actions-for-spark | I want to know exactly what I can do in spark without triggering the computation of the spark RDD/DataFrame. It's my understanding that only actions trigger the execution of the transformations in order to produce a DataFrame. The problem is that I'm unable to find a comprehensive list of spark actions. Spark documentation lists some actions, but it's not exhaustive. For example show is not there, but it is considered an action. Where can I find a full list of actions? Can I assume that all methods listed here are also actions? | All the methods annotated in the with @group action are actions. They can be found as a list here in scaladocs. They can also be found in the source where each method is defined, looking like this: * @group action * @since 1.6.0 */ def show(numRows: Int): Unit = show(numRows, truncate = true) Additionally, some other methods do not have that annotation, but also perform an eager evaluation: Those that call withAction. Checkpoint, for example, actually performs an action but isn't grouped as such in the docs: private[sql] def checkpoint(eager: Boolean, reliableCheckpoint: Boolean): Dataset[T] = { val actionName = if (reliableCheckpoint) "checkpoint" else "localCheckpoint" withAction(actionName, queryExecution) { physicalPlan => val internalRdd = physicalPlan.execute().map(_.copy()) if (reliableCheckpoint) { To find all of them Go to the source Use control + F Search for private def withAction Click on withAction On the right you should see a list of methods that use them. This is how that list currently looks: | 9 | 5 |
78,725,322 | 2024-7-9 | https://stackoverflow.com/questions/78725322/how-to-use-depends-and-path-at-the-same-time-in-fastapi | I have the following endpoint: @router.get('/events/{base_id}') def asd(base_id: common_input_types.event_id) -> None: do_something() And this is common_input_types.event_id: event_id = Annotated[ int, fastapi.Depends(dependency_function), fastapi.Path( example=123, description="Lore", gt=5, le=100, ), ] This does not trigger the dependency function. When I remove the fastapi.Path part, it works. But then I won't have the example and the min-max in swagger. How do I specify both? | I don't think it's possible to annotate a parameter with Path and Depends at the same time. Depending on the logic you want to implement, you can move this logic to dependency_function: from typing import Annotated import fastapi app = fastapi.FastAPI() router = fastapi.APIRouter(prefix="") async def dependency_function( base_id: Annotated[ int, fastapi.Path(example=123, description="Lore", gt=5, le=100) ] ): some_condition = True if some_condition: return 123 else: return base_id @router.get('/events/{base_id}') def asd(base_id: Annotated[int, fastapi.Depends(dependency_function)]) -> None: print(base_id) app.include_router(router) | 2 | 2 |
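A quick way to confirm that both the Path validation and the dependency run with this setup is FastAPI's TestClient (requires httpx); a small usage sketch against the routes defined above:
from fastapi.testclient import TestClient

client = TestClient(app)
print(client.get("/events/50").status_code)  # 200 - dependency ran, value within (5, 100]
print(client.get("/events/3").status_code)   # 422 - rejected by the Path constraint gt=5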
78,728,028 | 2024-7-9 | https://stackoverflow.com/questions/78728028/performing-integral-over-infinity-with-numbers-in-excess-of-101000 | I am attempting to perform on an integral from -infinity to infinity, with an equation containing a term that is raised to the power M. At very high (5000+) values of M, this value of this term can exceed 10^100. I have this python (3.10) program, which works for M = 1000, but nothing much beyond that. import numpy as np from scipy import integrate, special def integrand(u, M, Z, r): sqrt_r = np.sqrt(r) sqrt_term = np.sqrt(2 * (1 - r)) arg1 = (Z + sqrt_r * u) / sqrt_term arg2 = (-Z + sqrt_r * u) / sqrt_term erf_term = special.erf(arg1) - special.erf(arg2) return np.exp(-u**2 / 2) / np.sqrt(2 * np.pi) * erf_term**M def evaluate_integral(M, Z, r): result, _ = integrate.quad(integrand, -np.inf, np.inf, args=(M, Z, r)) return 1 / (2**M) * result # Example usage: M = 1000 Z = 5 r = 0.6 integral_value = evaluate_integral(M, Z, r) The error I get is: IntegrationWarning: The occurrence of roundoff error is detected, which prevents the requested tolerance from being achieved. The error may be underestimated. result, _ = integrate.quad(integrand, -np.inf, np.inf, args=(M, Z, r)) and the result for integral_value is nan. Is there a way I can perform this integral using Python, which would be convenient as it allows me to be included with the rest of my program, or will I be forced to use a different tool? | No, your only problem is that you took the factor 1/2**M outside of the integral. If you put it back inside you will then have (erf_term/2)**M and that will not blow up because the difference of two error functions cannot exceed 2, so the new bracketed factor is less than 1. | 3 | 3 |
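Concretely, moving the 1/2**M inside the integrand keeps the bracketed factor in [-1, 1], so nothing overflows even for very large M; a minimal rework of the functions from the question:
import numpy as np
from scipy import integrate, special

def integrand(u, M, Z, r):
    sqrt_r = np.sqrt(r)
    sqrt_term = np.sqrt(2 * (1 - r))
    arg1 = (Z + sqrt_r * u) / sqrt_term
    arg2 = (-Z + sqrt_r * u) / sqrt_term
    half_erf_term = 0.5 * (special.erf(arg1) - special.erf(arg2))  # bounded by 1
    return np.exp(-u**2 / 2) / np.sqrt(2 * np.pi) * half_erf_term**M

def evaluate_integral(M, Z, r):
    result, _ = integrate.quad(integrand, -np.inf, np.inf, args=(M, Z, r))
    return result  # the 1/2**M factor is already inside the integrand

print(evaluate_integral(5000, 5, 0.6))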
78,728,033 | 2024-7-9 | https://stackoverflow.com/questions/78728033/numpythonic-ways-of-converting-sparse-to-dense-based-on-referenced-index | I have this sparse vector val, built in a plain-Python way: idx = [2, 5, 6] val = [69, 12, 15] _ = np.zeros(idx[-1]+1) for i in idx: _[i] = val[idx.index(i)] print(_) dbg(_) Here is the output: [ 0. 0. 69. 0. 0. 12. .15] (6,) float64 How do I achieve the same result without the for loop, using a NumPythonic way instead? | idx = [2, 5, 6] val = [69, 12, 15] _ = np.zeros(idx[-1]+1) numpy lets you set multiple values at once: _[idx] = val | 2 | 2 |
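The same fancy-indexing one-liner written out as a usage example, with the output it actually produces:
import numpy as np

idx = np.array([2, 5, 6])
val = np.array([69, 12, 15])
dense = np.zeros(idx.max() + 1)
dense[idx] = val
print(dense, dense.shape)  # [ 0. 0. 69. 0. 0. 12. 15.] (7,)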
78,727,544 | 2024-7-9 | https://stackoverflow.com/questions/78727544/how-to-make-derivativefx-x-take-an-explicit-value-after-f-is-defined | So I need to take some derivatives first and later in the code (when the explicit form of f(x) is given) substitute the Derivative(f(x), x) with the explicit form of the derivative import sympy as sp import math x = sp.symbols('x') f = sp.Function('f')('x') df = f.diff(x) f = sp.sin(x) print(df) What I expect to see as output is cos(x), not the Derivative(f(x), x) that it gives me instead. I tried subs and doit with no success. What would be a proper way of getting what I need? | Substitute your known function into the derivative and execute the doit method: import sympy as sp import math x = sp.symbols('x') f = sp.Function('f')('x') df = f.diff(x) f_known = sp.sin(x) df.subs(f, f_known).doit() | 2 | 3 |
78,727,309 | 2024-7-9 | https://stackoverflow.com/questions/78727309/django-error-reverse-for-user-workouts-not-found-user-workouts-is-not-a-va | I'm working on my project for a course and I'm totally stuck right now. I'm creating a website to manage workouts by user and the create workout do not redirect to the user's workouts page when I create the new workout instance. views.py # View to return all user's workouts def user_workouts(request, user_id): user = User.objects.get(id=user_id) print(user) workouts = get_list_or_404(Workout, user=user.id) return render(request, "users/workouts.html", {"user": user, "workouts": workouts}) # Create workout view def add_workout(request, user_id): user = get_object_or_404(User, id=user_id) print(user) print(request._post) if request.method == "POST": workout_title = request.POST.get("workout") reps = request.POST.get("reps") load = request.POST.get("load") last_update = request.POST.get("last_update") workout = Workout(workout=workout_title, reps=reps, load=load, last_update=last_update, user=user) workout.save() print(user.id) return redirect('user_workouts') context = {"workout": workout} return render(request, "users/add_workout.html", context=context) urls.py from django.urls import path from . import views app_name = "myapp" urlpatterns = [ path("", views.index, name="index"), path("user/<int:user_id>", views.user_detail, name="user_detail"), path("user/<int:user_id>/workouts", views.user_workouts, name="user_workouts"), path("user/<int:user_id>/workouts/<int:workout_id>", views.workout_detail, name="workout_detail"), path("user/<int:user_id>/workouts/Add Workout", views.open_workout_form, name="open_workout_form"), path("user/<int:user_id>/workouts/create/", views.add_workout, name="add_workout") ] | You specified an app_name = β¦ [Django-doc], so it is: return redirect('myapp:user_workouts') | 3 | 1 |
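One caveat: the user_workouts route also declares <int:user_id>, so the reverse will most likely need that argument as well: return redirect('myapp:user_workouts', user_id=user.id)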
78,727,326 | 2024-7-9 | https://stackoverflow.com/questions/78727326/how-to-run-a-function-at-startup-before-requests-are-handled-in-django | I want to run some code when my Django server starts-up in order to clean up from the previous end of the server. How can I run some code once and only once at startup before the server processes any requests. I need to access the database during this time. I've read a couple of things online that seem a bit outdated and are also not guaranteed to run only once. Update I've looked at the suggested answer by looking into the ready function. That appears to work, but it is documented that you should not access the database there (https://docs.djangoproject.com/en/dev/ref/applications/#django.apps.AppConfig.ready) There are many suggestions at Execute code when Django starts ONCE only?, but since that post is a few years old, I thought there might be some additional solutions. This looks promising from django.dispatch import receiver from django.db.backends.signals import connection_created @receiver(connection_created) def my_receiver(connection, **kwargs): with connection.cursor() as cursor: # do something to the database connection_created.disconnect(my_receiver) Any other thoughts? | This is often done with the .ready() method [Django-doc] of any of the AppConfig classes [Django-doc] of an installed app. So if you have an app named app_name, you can work with: # app_name/apps.py from django.apps import AppConfig class MyAppConfig(AppConfig): # β¦ def ready(self): # my management command # β¦ This will run after the models of all INSTALLED_APPS are loaded, so you can do management commands. This is also often used to hook signals to models. | 2 | 3 |
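One practical wrinkle: under ./manage.py runserver the autoreloader imports the project in two processes, so ready() fires twice; a common guard is the RUN_MAIN environment variable (runserver-specific — production servers do not set it, so adjust the guard accordingly). The run_once_cleanup helper below is a placeholder for whatever database cleanup is needed, imported inside ready() so models are fully loaded first:
# app_name/apps.py
import os
from django.apps import AppConfig

class MyAppConfig(AppConfig):
    name = "app_name"

    def ready(self):
        if os.environ.get("RUN_MAIN") != "true":
            return  # skip the autoreloader's parent process
        from .startup import run_once_cleanup  # hypothetical module/function
        run_once_cleanup()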
78,726,750 | 2024-7-9 | https://stackoverflow.com/questions/78726750/how-to-install-packages-using-uv-pip-install-without-creating-virtual-environm | I would like to install Python packages in the CI/CD pipeline using the uv package manager. I did not create a virtual environment because I would like to use the virtual machine's global Python interpreter. When I run the uv pip install <package> script, I get the following error: Requirement already satisfied: pip in /opt/hostedtoolcache/Python/3.12.4/x64/lib/python3.12/site-packages (24.1.1) Collecting pip Downloading pip-24.1.2-py3-none-any.whl.metadata (3.6 kB) Downloading pip-24.1.2-py3-none-any.whl (1.8 MB) ββββββββββββββββββββββββββββββββββββββββ 1.8/1.8 MB 32.0 MB/s eta 0:00:00 Installing collected packages: pip Attempting uninstall: pip Found existing installation: pip 24.1.1 Uninstalling pip-24.1.1: Successfully uninstalled pip-24.1.1 Successfully installed pip-24.1.2 Collecting uv Downloading uv-0.2.23-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (35 kB) Downloading uv-0.2.23-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (13.4 MB) ββββββββββββββββββββββββββββββββββββββββ 13.4/13.4 MB 104.1 MB/s eta 0:00:00 Installing collected packages: uv Successfully installed uv-0.2.23 error: No virtual environment found Is it possible to install it into the global Python environment? | It is possible to install packages into the system Python using the --system option: uv pip install --system <packages> The --system option instructs uv to instead use the first Python found in the system PATH. WARNING: --system is intended for use in continuous integration (CI) environments and should be used with caution, as it can modify the system Python installation. Reference to Installing into arbitrary Python environments. | 6 | 5 |
78,726,475 | 2024-7-9 | https://stackoverflow.com/questions/78726475/how-to-ignore-a-specific-breakpoint-interactively | Consider this script: print("before loop") for i in range(100): breakpoint() print("after loop") breakpoint() print("exit") Short of pressing "c" one hundred times, how can you get past the breakpoint within the loop at L3 and proceed to L5? I've tried the ignore command but couldn't work it out: $ python3 example.py before loop > /tmp/example.py(2)<module>() -> for i in range(100): (Pdb) ignore 0 *** Breakpoint 0 already deleted (Pdb) c > /tmp/example.py(2)<module>() -> for i in range(100): (Pdb) ignore 0 *** Breakpoint 0 already deleted (Pdb) c > /tmp/example.py(2)<module>() -> for i in range(100): I want to execute the remainder of the loop, without tripping again the breakpoint on L3, then print "after loop" and break before printing "exit", remaining in the debugger. The answer must not require exiting the debugger and re-entering the runtime, or modifying the source code. | You can use the PYTHONBREAKPOINT environment variable to call a custom breakpoint handler that you define in a separate file. For example: $ export PYTHONBREAKPOINT=mybreak.mybreak # mybreak/__init__.py import pdb _counter = 0 def mybreak(*args, **kwargs): global _counter if _counter >= 100: # default behavior pdb.set_trace(*args, **kwargs) else: # skip dropping into pdb while inside the range(100) loop pass _counter += 1 You might have to get a little tricky if there are other invocations of breakpoint() in the file but it would just be a matter of tracking what the counter value would be when you want to skip over the breakpoint. An alternate implementation of the custom handler, as suggested in nneonneo's answer: # mybreak/__init__.py import inspect import pdb def mybreak(): caller = inspect.stack()[1] if caller.filename.endswith("/foo.py") and caller.lineno == 2: # skip this breakpoint return pdb.set_trace(*args, **kwargs) | 4 | 2 |
78,726,288 | 2024-7-9 | https://stackoverflow.com/questions/78726288/attribute-auto-calculation-upon-retrieval | I am trying to make a point object in 3D in python, and I have the following code import numpy as np class Point3D: def __init__(self, x, y, z): self.vector = np.matrix([x, y, z]) self.vector.reshape(3, 1) def x(self): return self.vector.item((0, 0)) def y(self): return self.vector.item((0, 1)) def z(self): return self.vector.item((0, 2)) I am using numpy to create vertical matrices to eventually use this for a rendering engine. I want to be able to reference the x,y and z of this vector without having to type self.vector.item((0, 0)) for example. I want to be able to do point.x and get the x value from the vector automatically. I also want to be able to set the values as such: point.x = 5. Is there any way this possible? I tried setting them at the start in the init function but then I would have to update it every time. I then made functions to there would only have to be an extra 2 chars, but setting is still not implemented. | What you are wanting to do is property getter and setters. The @property decorator returns a value without having to call the method. I.e. use p.x instead of p.x(). To assign a value to the property, you use the @<propertyname>.setter decorator which implements assigning to the property via p.x = 3. I added a __repr__ so the object looks nice if you are in interactive mode in Python. import numpy as np class Point3D: def __init__(self, x, y, z): self.vector = np.matrix([x, y, z]).reshape(3,1) def __repr__(self): n = self.__class__.__name__ return f'{n}({self.x}, {self.y}, {self.z})' @property def x(self): return self.vector[0,0] @property def y(self): return self.vector[1,0] @property def z(self): return self.vector[2,0] @x.setter def x(self, x_val): self.vector[0,0] = x_val @y.setter def y(self, y_val): self.vector[1,0] = y_val @z.setter def z(self, z_val): self.vector[2,0] = z_val | 2 | 1 |
78,725,967 | 2024-7-9 | https://stackoverflow.com/questions/78725967/translate-pandas-groupby-plus-resample-to-polars-in-python | I have this code that generates a toy DataFrame (production df is much complex): import polars as pl import numpy as np import pandas as pd def create_timeseries_df(num_rows): date_rng = pd.date_range(start='1/1/2020', end='1/01/2021', freq='T') data = { 'date': np.random.choice(date_rng, num_rows), 'category': np.random.choice(['A', 'B', 'C', 'D'], num_rows), 'subcategory': np.random.choice(['X', 'Y', 'Z'], num_rows), 'value': np.random.rand(num_rows) * 100 } df = pd.DataFrame(data) df = df.sort_values('date') df.set_index('date', inplace=True, drop=False) df.index = pd.to_datetime(df.index) return df num_rows = 1000000 # for example df = create_timeseries_df(num_rows) Then perform this transformations with Pandas. df_pd = df.copy() df_pd = df_pd.groupby(['category', 'subcategory']) df_pd = df_pd.resample('W-MON') df_pd.agg({ 'value': ['sum', 'mean', 'max', 'min'] }).reset_index() But, obviously it is quite slow with Pandas (at least in production). Thus, I'd like to use Polars to speed up time. This is what I have so far: #Convert to Polars DataFrame df_pl = pl.from_pandas(df) #Groupby, resample and aggregate df_pl = df_pl.group_by('category', 'subcategory') df_pl = df_pl.group_by_dynamic('date', every='1w', closed='right') df_pl.agg( pl.col('value').sum().alias('value_sum'), pl.col('value').mean().alias('value_mean'), pl.col('value').max().alias('value_max'), pl.col('value').min().alias('value_min') ) But I get AttributeError: 'GroupBy' object has no attribute 'group_by_dynamic'. Any ideas on how to use groupby followed by resample in Polars? | You can pass additional columns to group by in a call to group_by_dynamic by passing a list with the named argument group_by=: df_pl = df_pl.group_by_dynamic( "date", every="1w", closed="right", group_by=["category", "subcategory"] ) With this, I get a dataframe that looks similar to the one your pandas code produces: shape: (636, 7) ββββββββββββ¬ββββββββββββββ¬ββββββββββββββββββββββ¬βββββββββββββββ¬ββββββββββββ¬ββββββββββββ¬βββββββββββ β category β subcategory β date β sum β mean β max β min β β --- β --- β --- β --- β --- β --- β --- β β str β str β datetime[ns] β f64 β f64 β f64 β f64 β ββββββββββββͺββββββββββββββͺββββββββββββββββββββββͺβββββββββββββββͺββββββββββββͺββββββββββββͺβββββββββββ‘ β D β Z β 2019-12-30 00:00:00 β 55741.652346 β 50.399324 β 99.946595 β 0.008139 β β D β Z β 2020-01-06 00:00:00 β 76161.42206 β 50.139185 β 99.96917 β 0.138366 β β D β Z β 2020-01-13 00:00:00 β 80222.894298 β 49.581517 β 99.937069 β 0.117216 β β D β Z β 2020-01-20 00:00:00 β 82042.968995 β 50.456931 β 99.981101 β 0.009077 β β D β Z β 2020-01-27 00:00:00 β 82408.144078 β 49.494381 β 99.954734 β 0.023769 β β β¦ β β¦ β β¦ β β¦ β β¦ β β¦ β β¦ β β B β Z β 2020-11-30 00:00:00 β 79530.963748 β 49.737939 β 99.973554 β 0.007446 β β B β Z β 2020-12-07 00:00:00 β 80050.524653 β 49.566888 β 99.975546 β 0.003066 β β B β Z β 2020-12-14 00:00:00 β 77896.578291 β 50.029915 β 99.969098 β 0.033222 β β B β Z β 2020-12-21 00:00:00 β 76490.507942 β 49.636929 β 99.953563 β 0.021683 β β B β Z β 2020-12-28 00:00:00 β 46964.533378 β 50.553857 β 99.653981 β 0.042546 β ββββββββββββ΄ββββββββββββββ΄ββββββββββββββββββββββ΄βββββββββββββββ΄ββββββββββββ΄ββββββββββββ΄βββββββββββ | 7 | 7 |
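Putting the pieces together with the asker's aggregations (the argument name group_by assumes polars >= 1.0; older versions called it by, and df is the pandas frame built by the helper above). The explicit sort keeps group_by_dynamic happy about ordering:
import polars as pl

out = (
    pl.from_pandas(df)
    .sort("date")
    .group_by_dynamic("date", every="1w", closed="right", group_by=["category", "subcategory"])
    .agg(
        pl.col("value").sum().alias("value_sum"),
        pl.col("value").mean().alias("value_mean"),
        pl.col("value").max().alias("value_max"),
        pl.col("value").min().alias("value_min"),
    )
)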
78,723,998 | 2024-7-9 | https://stackoverflow.com/questions/78723998/convert-string-to-dataframe-after-extracting-using-beautifulsoup | import requests import pandas as pd from bs4 import BeautifulSoup as bs from io import StringIO url = "https://www.tickertape.in/stocks/oil-and-natural-gas-corporation-ONGC" r = requests.get(url=url)#,headers=headers) soup = bs(r.content,'html5lib') fin = soup.find_all(class_="financials-table-root") for f in fin: str_data = f.text break print(str_data) df = pd.read_csv(StringIO(str_data)) print(df) This is not giving me the desired result. I am not good with handing strings, please guide me a way to extract the values from the str_data, so that I can use it for further calculations. Output of str_data: Financial YearFY 2016FY 2017FY 2018FY 2019FY 2020FY 2021FY 2022FY 2023FY 2024TTMTotal Revenue1,33,084.682,95,230.293,33,135.154,32,783.974,06,772.743,15,162.695,00,202.156,41,531.266,57,654.976,57,751.89Raw Materials36,666.931,75,376.732,02,298.512,75,523.612,69,975.372,05,913.122,25,616.932,66,120.045,41,837.995,42,659.83Power & Fuel Cost1,182.381,957.932,177.502,139.732,078.921,980.442,239.162,174.44Employee Cost9,230.1715,128.1614,970.7215,850.5015,531.2614,135.1215,235.7414,898.79Selling & Administrative Expenses35,735.3136,378.9034,331.5839,621.4429,357.6623,004.0836,347.6144,365.47Operating & Other expenses10,068.00860.7212,038.3915,262.6036,742.869,385.991,34,092.392,38,475.37EBITDA40,201.8965,527.8567,318.4584,386.0953,086.6760,743.9486,670.3275,497.151,15,816.981,15,092.06Depreciation/Amortization16,384.0620,219.2023,111.9123,703.7026,634.8825,538.4726,883.1624,557.0628,762.7428,750.79PBIT23,817.8345,308.6544,206.5460,682.3926,451.7935,205.4759,787.1650,940.0987,054.2486,341.27Interest & Other Items3,765.583,591.114,999.045,836.737,489.345,079.035,696.047,889.3610,194.1710,194.17PBT20,052.2541,717.5439,207.5054,845.6618,962.4530,126.4454,091.1243,050.7376,860.0776,147.10Taxes & Other Items7,177.0417,298.2917,101.5824,299.628,158.8513,822.058,568.997,610.2827,638.6927,459.24Net Income12,875.2124,419.2522,105.9230,546.0410,803.6016,304.3945,522.1335,440.4549,221.3848,687.86EPS10.0319.0317.2324.048.5912.9636.1928.1739.1338.70DPS5.677.556.607.005.003.6010.5011.2512.2510.25Payout ratio0.560.400.380.290.580.280.290.400.310.26 Output of print(df): Empty DataFrame Columns: [Financial YearFY 2016FY 2017FY 2018FY 2019FY 2020FY 2021FY 2022FY 2023FY 2024TTMTotal Revenue1, 33, 084.682, 95, 230.293, 33.1, 135.154, 32, 783.974, 06, 772.743, 15, 162.695, 00, 202.156, 41, 531.266, 57, 654.976, 57.1, 751.89Raw Materials36, 666.931, 75, 376.732, 02, 298.512, 75.1, 523.612, 69, 975.372, 05, 913.122, 25, 616.932, 66, 120.045, 41.1, 837.995, 42, 659.83Power & Fuel Cost1, 182.381, 957.932, 177.502, 139.732, 078.921, 980.442, 239.162, 174.44Employee Cost9, 230.1715, 128.1614, 970.7215, 850.5015, 531.2614, 135.1215, 235.7414, 898.79Selling & Administrative Expenses35, 735.3136, 378.9034, 331.5839, 621.4429, 357.6623, 004.0836, 347.6144, 365.47Operating & Other expenses10, 068.00860.7212, 038.3915, 262.6036, 742.869, 385.991, 34, 092.392, 38, 475.37EBITDA40, 201.8965, 527.8567, 318.4584, 386.0953, 086.6760, 743.9486, 670.3275, 497.151, 15.1, 816.981, 15.2, 092.06Depreciation/Amortization16, 384.0620, 219.2023, 111.9123, 703.7026, 634.8825, 538.4726, 883.1624, 557.0628, 762.7428, 750.79PBIT23, 817.8345, 308.6544, 206.5460, 682.3926, 451.7935, ...] Index: [] Trying to make a dataframe to use the values. 
| You can use read_html with fixing columns names by remove Unnamed columns names and filter by length of them: url = "https://www.tickertape.in/stocks/oil-and-natural-gas-corporation-ONGC" df = pd.read_html(url)[2] cols = [c for c in df.columns if not c.startswith('Unnamed')] out = df.iloc[:, :len(cols)].set_axis(cols, axis=1) print (out) Financial Year FY 2016 FY 2017 FY 2018 FY 2019 FY 2020 FY 2021 FY 2022 FY 2023 FY 2024 TTM 0 Total Revenue 133084.68 295230.29 333135.15 432783.97 406772.74 315162.69 500202.15 641531.26 657654.97 657751.89 1 Raw Materials 36666.93 175376.73 202298.51 275523.61 269975.37 205913.12 225616.93 266120.04 541837.99 542659.83 2 Power & Fuel Cost 1182.38 1957.93 2177.50 2139.73 2078.92 1980.44 2239.16 2174.44 541837.99 542659.83 3 Employee Cost 9230.17 15128.16 14970.72 15850.50 15531.26 14135.12 15235.74 14898.79 541837.99 542659.83 4 Selling & Administrative Expenses 35735.31 36378.90 34331.58 39621.44 29357.66 23004.08 36347.61 44365.47 541837.99 542659.83 5 Operating & Other expenses 10068.00 860.72 12038.39 15262.60 36742.86 9385.99 134092.39 238475.37 541837.99 542659.83 6 EBITDA 40201.89 65527.85 67318.45 84386.09 53086.67 60743.94 86670.32 75497.15 115816.98 115092.06 7 Depreciation/Amortization 16384.06 20219.20 23111.91 23703.70 26634.88 25538.47 26883.16 24557.06 28762.74 28750.79 8 PBIT 23817.83 45308.65 44206.54 60682.39 26451.79 35205.47 59787.16 50940.09 87054.24 86341.27 9 Interest & Other Items 3765.58 3591.11 4999.04 5836.73 7489.34 5079.03 5696.04 7889.36 10194.17 10194.17 10 PBT 20052.25 41717.54 39207.50 54845.66 18962.45 30126.44 54091.12 43050.73 76860.07 76147.10 11 Taxes & Other Items 7177.04 17298.29 17101.58 24299.62 8158.85 13822.05 8568.99 7610.28 27638.69 27459.24 12 Net Income 12875.21 24419.25 22105.92 30546.04 10803.60 16304.39 45522.13 35440.45 49221.38 48687.86 13 EPS 10.03 19.03 17.23 24.04 8.59 12.96 36.19 28.17 39.13 38.70 14 DPS 5.67 7.55 6.60 7.00 5.00 3.60 10.50 11.25 12.25 10.25 15 Payout ratio 0.56 0.40 0.38 0.29 0.58 0.28 0.29 0.40 0.31 0.26 For select by indicators is possible set first column to index: df = pd.read_html(url, index_col=0)[2] cols = [c for c in df.columns if not c.startswith('Unnamed')] out = df.iloc[:, :len(cols)].set_axis(cols, axis=1) print (out) FY 2016 FY 2017 FY 2018 FY 2019 FY 2020 FY 2021 FY 2022 FY 2023 FY 2024 TTM Financial Year Total Revenue 133084.68 295230.29 333135.15 432783.97 406772.74 315162.69 500202.15 641531.26 657654.97 657751.89 Raw Materials 36666.93 175376.73 202298.51 275523.61 269975.37 205913.12 225616.93 266120.04 541837.99 542659.83 Power & Fuel Cost 1182.38 1957.93 2177.50 2139.73 2078.92 1980.44 2239.16 2174.44 541837.99 542659.83 Employee Cost 9230.17 15128.16 14970.72 15850.50 15531.26 14135.12 15235.74 14898.79 541837.99 542659.83 Selling & Administrative Expenses 35735.31 36378.90 34331.58 39621.44 29357.66 23004.08 36347.61 44365.47 541837.99 542659.83 Operating & Other expenses 10068.00 860.72 12038.39 15262.60 36742.86 9385.99 134092.39 238475.37 541837.99 542659.83 EBITDA 40201.89 65527.85 67318.45 84386.09 53086.67 60743.94 86670.32 75497.15 115816.98 115092.06 Depreciation/Amortization 16384.06 20219.20 23111.91 23703.70 26634.88 25538.47 26883.16 24557.06 28762.74 28750.79 PBIT 23817.83 45308.65 44206.54 60682.39 26451.79 35205.47 59787.16 50940.09 87054.24 86341.27 Interest & Other Items 3765.58 3591.11 4999.04 5836.73 7489.34 5079.03 5696.04 7889.36 10194.17 10194.17 PBT 20052.25 41717.54 39207.50 54845.66 18962.45 30126.44 54091.12 43050.73 76860.07 
76147.10 Taxes & Other Items 7177.04 17298.29 17101.58 24299.62 8158.85 13822.05 8568.99 7610.28 27638.69 27459.24 Net Income 12875.21 24419.25 22105.92 30546.04 10803.60 16304.39 45522.13 35440.45 49221.38 48687.86 EPS 10.03 19.03 17.23 24.04 8.59 12.96 36.19 28.17 39.13 38.70 DPS 5.67 7.55 6.60 7.00 5.00 3.60 10.50 11.25 12.25 10.25 Payout ratio 0.56 0.40 0.38 0.29 0.58 0.28 0.29 0.40 0.31 0.26 | 2 | 2 |
78,721,505 | 2024-7-8 | https://stackoverflow.com/questions/78721505/choosing-between-yield-and-addfinalizer-in-pytest-fixtures-for-teardown | I've recently started using pytest for testing in Python and created a fixture to manage a collection of items using gRPC. Below is the code snippet for my fixture: import pytest @pytest.fixture(scope="session") def collection(): grpc_page = GrpcPages().collections def create_collection(collection_id=None, **kwargs): default_params = { "id": collection_id, "is_active": True, # some other params } try: return grpc_page.create_collection(**{**default_params, **kwargs}) except Exception as err: print(err) raise err yield create_collection def delete_created_collection(): # Some code to hard and soft delete created data This is my first attempt at creating a fixture, and I realized that I need a mechanism to delete data created during the fixture's lifecycle. While exploring options for implementing teardown procedures, I came across yield and addfinalizer. From what I understand, both can be used to define teardown actions in pytest fixtures. However, I'm having trouble finding clear documentation and examples that explain the key differences between these two approaches and when to choose one over the other. Here are the questions (for fast-forwarding :) ): What are the primary differences between using yield and addfinalizer in pytest fixtures for handling teardown? Are there specific scenarios where one is preferred over the other? | The main difference is not the number of addfinalizer or fixtures, there is no difference at all. You can add as many as you want (or just have more than one operation in on of them) @pytest.fixture(scope='session', autouse=True) def fixture_one(): print('fixture_one setup') yield print('fixture_one teardown') @pytest.fixture(scope='session', autouse=True) def fixture_two(): print('fixture_two setup') yield print('fixture_two teardown') def test_one(): print('test_one') Output example.py::test_one fixture_one setup fixture_two setup PASSED [100%]test_one fixture_two teardown fixture_one teardown The main difference is if the teardown will run in case of a failure in the setup stage. This is useful if there is need for cleanup even if the setup failed. Without finalizer the teardown won't run if there was an exception in the setup @pytest.fixture(scope='session', autouse=True) def fixture_one(): print('fixture_one setup') raise Exception('Error') yield print('fixture_one teardown') def test_one(): print('test_one') Output ERROR [100%] fixture_one setup test setup failed ... E Exception: Error example.py:8: Exception But with finalizer it will @pytest.fixture(scope='session', autouse=True) def fixture_one(request): def finalizer(): print('fixture_one teardown') request.addfinalizer(finalizer) print('fixture_one setup') raise Exception('Error') yield Output ERROR [100%] fixture_one setup test setup failed ... E Exception: Error example.py:13: Exception fixture_one teardown | 3 | 6 |
78,708,727 | 2024-7-4 | https://stackoverflow.com/questions/78708727/efficiently-find-matching-string-from-substrings-in-large-lists | I have two lists containing about 5 million items each. List_1 is a list of tuples, with two strings per tuple. List_2, is a long list of strings. I am trying to find a compound string, made from those tuples, in List_2. So if the tuple from List_1 is ("foo", "bar"), and List_2 contains ["flub", "blub", "barstool", "foo & bar: misadventures in python"], I would be trying to to fetch "foo & bar: misadventures in python" from List_2. The way that I currently do it is by iterating through List_1, and comprehension to scan through List_2. While the search through List_2 is fast, taking about a second to execute, it would need to iterate through all of List_1, and therefore requires an inordinate amount of time (the better part of 1000 hours) to complete, which made me wonder if there was a faster, more efficient way to do the same thing. Code Example: list_1 = [] #Insert List list_2 = [] #Insert List for search_term in list_1: compound_string = "{search_first} & {search_second}".format(search_first=search_term[0], search_second=search_term[1]) result = next((s for s in list_2 if compound_string in s), None) #Short-circuit, so we don't need to search through the whole list if result: #do exciting things I looked into using a set and intersection to perform the comparison, however, using a set intersection to do the comparison only works with whole strings. As I do not know the whole string ahead of time, using that method doesn't seem feasible without using a for loop and lists, which would run into the same problem. | The problem seems somewhat under-specified, but also over-specified. For example, what do tuples really have to do with it? I'll presume to reframe it like so: you have a list of strings, needles, and another list of strings, haystacks. You want to find all the haystacks that contain a (at least one) needle. First thing that comes to mind then is to preprocess the needles, to build a trie structure that allows searching for any of them more efficiently. Then march over the haystacks, one at a time, using that structure to test them. Here's simple code off the top of my head. It doesn't sound like RAM will be a problem for you, but if it is there are fancier ways to build "compressed" tries. BTW, if it's the case that all your needles contain the 3-character substring " & ", then best guess is that most haystacks won't, so you could get out cheap in most cases by checking for just that much first. from collections import defaultdict class Node: __slots__ = 'final', 'ch2node' def __init__(self): self.final = False self.ch2node = defaultdict(Node) def add(trie, s): for ch in s: trie = trie.ch2node[ch] trie.final = True # Does s[i:] start with a string in the trie? 
def find(trie, s, i): for i in range(i, len(s)): if trie.final: return True ch = s[i] if ch in trie.ch2node: trie = trie.ch2node[ch] else: return False return trie.final def search(trie, s): return any(find(trie, s, i) for i in range(len(s))) needles = ["a & b", "kik", "c & as"] haystacks = ["sldjkfa & b", "c&as", "akiko", "xc & asx", "kkc & a"] root = Node() for n in needles: add(root, n) print(list(h for h in haystacks if search(root, h))) Which prints ['sldjkfa & b', 'akiko', 'xc & asx'] EDIT A comment mentioned the Aho-Corasick algorithm, which is roughly related to the simple trie code above, but fancier and more efficient (it effectively searches "everywhere in the haystack simultaneously"). I haven't yet used it, but there's what looks like a capable Python package for that available on PyPI. EDIT2 I'm trying to get you unstuck, not give you a theoretically optimal solution. Try stuff! You may be surprised at how well even the simple code I gave may work for you. I fiddled the above to create 5 million "needles", each composed of 2 dictionary words (each at least 10 letters) separated by a single space. Building the trie took under 45 seconds (Python 3.12.4). Checking 5_008_510 lines against them took another 55 seconds, well under 2 minutes from start to finish. Contrast with "the better part of 1000 hours" you think you're facing now. This with no attempt to "optimize" anything, beyond just using a dirt simple trie. If I were to pursue it, I'd look first at memory use rather than speed. This consumed about 8.2GB of peak RAM. One way to cut that is to post-process the trie, to delete the empty dict on final nodes (or to not allocate a dict at all unless it's needed). But that would complicate the code some. Another is to look at using byte strings instead of Unicode strings. Then there's gonzo absurdities, like not using a Node class at all, instead using raw 2-tuples or 2-lists. But given all you said about your problem, it would be "good enough for me" already. TRADEOFFS The great advantage of a trie is that it's insensitive to how many needles there are - the time to check a haystack is about the same whether there's one or a billion needles to look for. The great potential disadvantage is the memory needed to hold a needle trie. 5 million needles is certainly on the large side, which is why I used as simple a trie structure as possible. The tradeoff there is that for a haystack of length L, it may need to do L distinct searches. The related Aho-Corasick automaton only needs to do one search, regardless of how large L is. But that's a fancier trie variant that requires more memory and hairier code. In the absence of any info about the distribution of your haystack (or even needle) sizes, "first try the simplest thing that could possibly work" rules. The potential quadratic (in L) time nature of the dirt-simple-trie search would kill it if, e.g., L could be as large as a million - but is a relatively minor thing if L won't get bigger than, say, 100 (being 100 times slower than theoretically necessary just doesn't matter much compared to saving a factor of 5 million). Sketching all the possible tradeoffs would require a small book. To get more focused suggestions, you need to give quantified details about your expected needles and haystacks. In case it's not obvious, here's a pragmatic thing: if the RAM for a needle trie is "too large", you can shave it by about a factor of K, by doing K runs, using only len(needles)/K needles per run. 
In that case, the needles should be sorted first (common prefixes are physically shared in a trie, and sorting will bring needles with common prefixes together). Or you can do a lot more work to build a disk-based trie. The possible solution space is large. QUICK COMPARISON As above, 5 million needles, but I cut their average size about in half, from around 21 characters to about 10.5 - RAM pressure otherwise with the other package. Somewhat over 5 million haystacks. Most had len under 70, but a few in the hundreds. Few haystacks contained a needle (only 910 total). For other code, I used ahocorapy, a pure-Python implementation of full-blown Aho-Corasick. Memory use for the other package was significantly higher. Expected. Where my Node class contains only 2 members, its similar State class contains 7. It needs to save away a lot more info to guarantee worst-case linear-time performance. For the same reason, building the needle trie was also slower. About 24 seconds for my code, about 60 for ahocorapy. But you get what you pay for ;-) Searching the 5M+ haystacks took about 55 seconds for my code, but only about 22 for ahocorapy. Since needles were rarely found, this is close to a worst case for my code (it has to try len(haystack) distinct searches to conclude that no needles are present). In all, my code was slightly faster overall, thanks to it doing much less work to build its dirt-dumb needle trie to begin with. Under the PyPy implementation of Python, all this code runs at least twice as fast. And with either code base, pickle could be used to save away the needle trie for reuse on a different collection of haystacks. ANOTHER STAB Here's new "dirt dumb" code, running faster, using less memory, better structured, and generalized to give the possibility of finding all contained needles. For terrible cases, consider a set of needles like x ax aax aaax aaaax aaaaax ... aaaaa.....aaax and a haystack like aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaay. There are no matches, but one-at-a-time seaarching will take a long time. Full Aho-Corasick much faster. from collections import defaultdict class Node: __slots__ = 'final', 'ch2node' def __init__(self): self.final = "" self.ch2node = None class Trie: def __init__(self): self.root = Node() self.leaders = set() # Add a needle. def add(self, s): if not s: raise ValueError("empty string not allowed") trie = self.root for ch in s: if (ch2node := trie.ch2node) is None: ch2node = trie.ch2node = defaultdict(Node) trie = ch2node[ch] trie.final = s self.leaders.add(s[0]) # Search a haystack `s`. Generates a 2-tuple, (needle, i), for each needle the # haystack contains. The needle starts at s[i]. def search(self, s): leaders = self.leaders root = self.root for i, ch in enumerate(s): if ch not in leaders: continue trie = root for i in range(i, len(s)): if ((ch2node := trie.ch2node) and (ch := s[i]) in ch2node): trie = ch2node[ch] if trie.final: yield trie.final, i - len(trie.final) + 1 else: break Then, e.g., t = Trie() for n in 'eat', 'tea', 'car', 'care', 'cares', 'arc': t.add(n) for m in t.search('eateacarcares'): print(m) displays: ('eat', 0) ('tea', 2) ('car', 5) ('arc', 6) ('car', 8) ('care', 8) ('cares', 8) RABIN-KARP A different approach is to use the Rabin-Karp (RK) algorithm to match multiple needles. A particular hash code is precomputed for each needle, and all the storage needed is a dict mapping a hash code to a list of the needles with that hash code. 
Searching a haystack is just a matter of going over it once, left to right, computing RK's "rolling hash" for each window. The hash function is designed so that the next window's hash can be computed using a small & fixed number (independent of window size) of arithmetic operations. If the hash is in the dict, directly compare each needle with that hash against the haystack window. It's simple, fast, and very memory-frugal. BUT. Alas, it's only straightforward if all the needles have the same length, so that the window size is fixed. If there are K different needle lengths, it gets messy. You can, e.g., use K different rolling hashes, but that slows it by a factor of K. Or you can use a window size equal to the length of the shortest needle, but then the number of false positives can zoom. I used the latter strategy, and couldn't make it competitive under CPython. However, PyPy excels at speeding Python code doing arithmetic on native machine-size ints, and speed was comparable under that. Memory use was 10x smaller, a massive win. RK is also good at avoiding pathological cases, although that's probabilistic, not guaranteed. FOR SHORT--BUT NOT TOO SHORT--NEEDLES Now that we know your needles are neither tiny nor huge, this should yield a nice speedup, and requires vastly less memory: EDIT: added leaders, which gives a highly significant speedup on my data. EDIT: in searching, .startswith() avoids needing to construct a new string object. class RK: # not Rabin-Karp, but inspired by it def __init__(self): from collections import defaultdict self.needles = [] self.h2ns = defaultdict(list) self.leaders = set() def add(self, s): if not s: raise ValueError("empty string not allowed") if self.h2ns: raise ValueError("cannot add() after finalize() called") self.needles.append(s) self.leaders.add(s[0]) def finalize(self): h2ns = self.h2ns if h2ns: return # already finalized w = self.w = min(map(len, self.needles)) for n in self.needles: h2ns[hash(n[:w])].append(n) del self.needles def search(self, s): if not self.h2ns: raise ValueError("must call finalize() before search()") h2nsget = self.h2ns.get w = self.w leaders = self.leaders for i in range(len(s) - w + 1): if (s[i] in leaders and (ns := h2nsget(hash(s[i : i + w])))): for n in ns: if s.startswith(n, i): yield n, i Then, e.g., t = RK() for n in 'eat', 'tea', 'car', 'care', 'cares', 'arc', 's': t.add(n) t.finalize() for m in t.search('eateacarcares'): print(m) prints ('eat', 0) ('tea', 2) ('car', 5) ('arc', 6) ('car', 8) ('care', 8) ('cares', 8) ('s', 12) The searching part isn't actually faster, but finalize() is very much faster than building a trie. The length of the shortest needle is vital: the shorter it is, the more likely searching will have to weed out "false positives". On my same test data, total time from start to finish is about 15 seconds with this. Variant: why hash at all? h2ns could instead map a w-character string to the needles starting with that string. Hashing would still occur, of course, but under the covers, as part of Python doing the dict lookup. It makes little difference to speed, but boosts memory use (w-character dict keys require more space than machine int keys). That in turn can be reduced by storing needle[w:] in the lists instead, for a possible reduction in total character storage needed. But that makes finalize() slower - slicing isn't free. All such variants look "good enough" to me. | 2 | 5 |
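For completeness, the C-backed pyahocorasick package (pip install pyahocorasick) wraps the same one-pass Aho-Corasick idea behind a small API; a hedged sketch of typical usage:
import ahocorasick

A = ahocorasick.Automaton()
for needle in ("a & b", "kik", "c & as"):
    A.add_word(needle, needle)     # store the needle itself as the payload
A.make_automaton()

haystack = "sldjkfa & b"
for end_index, needle in A.iter(haystack):
    print(needle, end_index - len(needle) + 1)   # needle and its start offset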
78,721,341 | 2024-7-8 | https://stackoverflow.com/questions/78721341/efficiently-remove-rows-from-pandas-df-based-on-second-latest-time-in-column | I have a pandas Dataframe that looks similar to this: Index ID time_1 time_2 0 101 2024-06-20 14:32:22 2024-06-20 14:10:31 1 101 2024-06-20 15:21:31 2024-06-20 14:32:22 2 101 2024-06-20 15:21:31 2024-06-20 15:21:31 3 102 2024-06-20 16:26:51 2024-06-20 15:21:31 4 102 2024-06-20 16:26:51 2024-06-20 16:56:24 5 103 2024-06-20 20:05:44 2024-06-20 21:17:35 6 103 2024-06-20 22:41:22 2024-06-20 22:21:31 7 103 2024-06-20 23:11:56 2024-06-20 23:01:31 For each ID in my df I want to take the second latest time_1 (if it exists). I then want to compare this time with the timestamps in time_2 and remove all rows from my df where time_2 is earlier than this time. My expected output would be: Index ID time_1 time_2 1 101 2024-06-20 15:21:31 2024-06-20 14:32:22 2 101 2024-06-20 15:21:31 2024-06-20 15:21:31 3 102 2024-06-20 16:26:51 2024-06-20 15:21:31 4 102 2024-06-20 16:26:51 2024-06-20 16:56:24 7 103 2024-06-20 23:11:56 2024-06-20 23:01:31 This problem is above my pandas level. I asked ChatGPT and this is the solution I got which in principle does what I want: import pandas as pd ids = [101, 101, 101, 102, 102, 103, 103, 103] time_1 = ['2024-06-20 14:32:22', '2024-06-20 15:21:31', '2024-06-20 15:21:31', '2024-06-20 16:26:51', '2024-06-20 16:26:51', '2024-06-20 20:05:44', '2024-06-20 22:41:22', '2024-06-20 23:11:56'] time_2 = ['2024-06-20 14:10:31', '2024-06-20 14:32:22', '2024-06-20 15:21:31', '2024-06-20 15:21:31', '2024-06-20 16:56:24', '2024-06-20 21:17:35', '2024-06-20 22:21:31', '2024-06-20 23:01:31'] df = pd.DataFrame({ 'id': ids, 'time_1': pd.to_datetime(time_1), 'time_2': pd.to_datetime(time_2) }) grouped = df.groupby('id')['time_1'] mask = pd.Series(False, index=df.index) for id_value, group in df.groupby('id'): # Remove duplicates and sort timestamps unique_sorted_times = group['time_1'].drop_duplicates().sort_values() # Check if there's more than one unique time if len(unique_sorted_times) > 1: # Select the second last time second_last_time = unique_sorted_times.iloc[-2] # Update the mask for rows with time_2 greater than or equal to the second last time_1 mask |= (df['id'] == id_value) & (df['time_2'] >= second_last_time) else: # If there's only one unique time, keep the row(s) mask |= (df['id'] == id_value) filtered_data = df[mask] My issue with this solution is the for-loop. This seems rather inefficient and my real data is quite large. And also I am curious if there is a better, more efficient solution for this. | You can use .transform() to create the mask. Sorting is not necessary when you can just use .nlargest() and select the second one if it exists. Or if time_1 is already sorted, you can even skip .nlargest() (or sorting) entirely. Then you just need to replace NaT with the smallest possible Timestamp value so that time_2 can't be earlier than it when you do the comparison. 
second_last_times = df.groupby('id')['time_1'].transform( lambda s: s.drop_duplicates().nlargest(2).iloc[1:].squeeze()) mask = second_last_times.fillna(pd.Timestamp.min).le(df['time_2']) df[mask] Result: id time_1 time_2 1 101 2024-06-20 15:21:31 2024-06-20 14:32:22 2 101 2024-06-20 15:21:31 2024-06-20 15:21:31 3 102 2024-06-20 16:26:51 2024-06-20 15:21:31 4 102 2024-06-20 16:26:51 2024-06-20 16:56:24 7 103 2024-06-20 23:11:56 2024-06-20 23:01:31 For reference, second_last_times: 0 2024-06-20 14:32:22 1 2024-06-20 14:32:22 2 2024-06-20 14:32:22 3 NaT 4 NaT 5 2024-06-20 22:41:22 6 2024-06-20 22:41:22 7 2024-06-20 22:41:22 Name: time_1, dtype: datetime64[ns] If you want to generalize this, replace .nlargest(2).iloc[1:] with .nlargest(n).iloc[n-1:]. P.S. This is similar to mozway's solution, but I actually wrote the code before they posted, except the squeeze technique - thanks for that. | 3 | 1 |
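A small generalization of that approach, wrapping the n-th-latest threshold in a helper — the function name is an illustrative assumption, not part of the answer, and it reuses df and pandas (pd) from the question:

def keep_from_nth_latest(df, n=2):
    # threshold per id: the n-th latest distinct time_1 (NaT if fewer than n exist)
    thresholds = df.groupby('id')['time_1'].transform(
        lambda s: s.drop_duplicates().nlargest(n).iloc[n - 1:].squeeze())
    # rows whose time_2 is earlier than the threshold are dropped
    return df[thresholds.fillna(pd.Timestamp.min).le(df['time_2'])]

filtered_data = keep_from_nth_latest(df, n=2)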
78,722,450 | 2024-7-8 | https://stackoverflow.com/questions/78722450/specify-none-as-default-value-for-boolean-function-argument | The prototype for numpy.histogram contains an input argument density of type bool and default value None. If the caller does not supply density, what value does it take on? The closest Q&A that I can find is this. The first answer says "Don't use False as a value for a non-bool field", which doesn't apply here. It also says that bool(x) returns False, but that doesn't assure the caller that the function will set density to False if it isn't provided. Is this a mistake in the documentation of the prototype for numpy.histogram, or am I missing something about the documentation convention? The other answers to the above Q&A do not seem relevant to my question. | This is at best poorly documented. The documentation seems to imply that False and the default None will be treated equivalently, but it should either document that explicitly, or use the more sensible False as the default value. (The third option would be if the function makes a distinction between False and None, but that also should be explicitly documented.) | 3 | 1 |
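A quick empirical check — an illustration, not taken from the answer — that numpy treats the default None the same as False for this argument:

import numpy as np

x = np.random.default_rng(0).normal(size=1000)
counts_default, edges = np.histogram(x)                 # density defaults to None
counts_false, _ = np.histogram(x, density=False)
counts_true, _ = np.histogram(x, density=True)

assert np.array_equal(counts_default, counts_false)     # None behaves like False
assert not np.array_equal(counts_default, counts_true)  # True actually normalizes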
78,722,284 | 2024-7-8 | https://stackoverflow.com/questions/78722284/how-to-add-a-colon-after-each-two-characters-in-a-string | I have a list like this: list1 = [ '000c.29e6.8fa5', 'fa16.3e9f.0c8c', 'fa16.3e70.323b' ] I'm going to convert them to mac addresses in format 00:0C:29:E5:8F:A5 in uppercase. How can I do that? I googled but found nothing. I also thought how to do, but still don't have any clues. I just know this: for x in list1: x = x.replace('.', '').upper()[::1] I know [::1] splits, but not sure if it's correct and if I can continue with this or not. | Another way, using bytes.fromhex() and bytes.hex(): >>> [":".join(bytes([b]).hex() for b in bytes.fromhex(l.replace(".", ""))) for l in list1] ['00:0c:29:e6:8f:a5', 'fa:16:3e:9f:0c:8c', 'fa:16:3e:70:32:3b'] Naturally, if you need upper-case, slap that on at the end. >>> [":".join(bytes([b]).hex() for b in bytes.fromhex(l.replace(".", ""))).upper() for l in list1] ['00:0C:29:E6:8F:A5', 'FA:16:3E:9F:0C:8C', 'FA:16:3E:70:32:3B'] | 2 | 2 |
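A simpler slicing-based alternative (not part of the accepted answer) that skips the bytes round-trip, assuming each entry holds exactly 12 hex digits:

def to_mac(s):
    hexdigits = s.replace('.', '').upper()
    return ':'.join(hexdigits[i:i + 2] for i in range(0, len(hexdigits), 2))

print([to_mac(x) for x in list1])
# ['00:0C:29:E6:8F:A5', 'FA:16:3E:9F:0C:8C', 'FA:16:3E:70:32:3B']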
78,719,212 | 2024-7-8 | https://stackoverflow.com/questions/78719212/reducing-code-expression-duplication-when-using-polars-with-columns | Consider some Polars code like so: df.with_columns( pl.date_ranges( pl.col("current_start"), pl.col("current_end"), "1mo", closed="left" ).alias("current_tpoints") ).drop("current_start", "current_end").with_columns( pl.date_ranges( pl.col("history_start"), pl.col("history_end"), "1mo", closed="left" ).alias("history_tpoints") ).drop( "history_start", "history_end" ) The key issue to note here is the repetitiveness of history_* and current_*. I could reduce duplication by doing this: for x in ["history", "current"]: fstring = f"{x}" + "_{other}" start = fstring.format(other="start") end = fstring.format(other="end") df = df.with_columns( pl.date_ranges( pl.col(start), pl.col(end), "1mo", closed="left", ).alias(fstring.format(other="tpoints")) ).drop(start, end) But are there any other ways to reduce duplication I ought to consider? | As it seems like you might not need any original columns, you could use select() instead of with_columns(), so you don't need to drop() columns. And you can loop over column names within select() / with_columns(): df.select( pl.date_ranges( pl.col(f"{c}_start"), pl.col(f"{c}_end"), "1mo", closed="left" ).alias(f"{c}_tpoints") for c in ["current", "history"] ) To explain why this works: according to the documentation, both the select() and with_columns() methods accept *exprs: IntoExpr | Iterable[IntoExpr], which means a variable number of arguments. They can take either multiple expressions or multiple lists of expressions. This is exactly what we can do with a list comprehension: we just create a list of expressions. [ pl.date_ranges( pl.col(f"{c}_start"), pl.col(f"{c}_end"), "1mo", closed="left" ).alias(f"{c}_tpoints") for c in ["current", "history"] ] [<Expr ['col("current_start").date_rang…'] at 0x206D93030E0>, <Expr ['col("history_start").date_rang…'] at 0x206D8F85520>] We can then pass that into the polars method. Notice that I didn't use square brackets in the final answer. This is because we don't really need a list of expressions, we just need an iterable (in this case, a generator). | 2 | 4 |
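Another option the asker could consider — a sketch, not part of the accepted answer — is to factor the repeated expression into a small helper function (it reuses pl and df from the question; the helper name tpoints is an assumption):

def tpoints(prefix: str) -> pl.Expr:
    return pl.date_ranges(
        pl.col(f"{prefix}_start"), pl.col(f"{prefix}_end"), "1mo", closed="left"
    ).alias(f"{prefix}_tpoints")

df = df.select(tpoints("current"), tpoints("history"))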
78,721,860 | 2024-7-8 | https://stackoverflow.com/questions/78721860/how-can-i-change-values-of-a-column-if-the-group-nunique-is-more-than-n | My DataFrame: import pandas as pd df = pd.DataFrame( { 'a': ['a', 'a', 'a', 'b', 'c', 'x', 'j', 'w'], 'b': [1, 1, 1, 2, 2, 3, 3, 3], } ) Expected output is changing column a: a b 0 a 1 1 a 1 2 a 1 3 NaN 2 4 NaN 2 5 NaN 3 6 NaN 3 7 NaN 3 Logic: The groups are based on b. If for a group df.a.nunique() > 1 then df.a == np.nan. This is my attempt. It works but I wonder if there is a one-liner/more efficient way to do it: df['x'] = df.groupby('b')['a'].transform('nunique') df.loc[df.x > 1, 'a'] = np.nan | More efficient than groupby, use duplicated with keep=False, and boolean indexing: df.loc[~df[['a', 'b']].duplicated(keep=False), 'a'] = float('nan') If you really want to use groupby.transform: df.loc[df.groupby('b')['a'].transform('nunique')>1, 'a'] = float('nan') Output: a b 0 a 1 1 a 1 2 a 1 3 NaN 2 4 NaN 2 5 NaN 3 6 NaN 3 7 NaN 3 | 2 | 1 |
78,721,501 | 2024-7-8 | https://stackoverflow.com/questions/78721501/making-an-image-move-in-matplotlib | I'm trying to have an image rotate and translate in matplotlib according to a predefined pattern. I'm trying to use FuncAnimation and Affine2D to achieve this. My code looks something like this : from matplotlib.animation import FumcAnimation from matplotlib.transforms import Affine2D from matplotlib import pyplot as plt fig, ax = plt.subplots() img = ax.imgshow(plt.imread("demo.png"),aspect="equal") def update(i): if i>0: img.set_transform(Affine2D().translate(1,0)) return img anim=FuncAnimation(fig,update,frames=(0,1)) plt.show() Instead of the image moving right, it disappears. | You're not using the value of the frames (0 or 1) to make the movement and that's not the only issue. So, assuming you want to simultaneously translate the image (to the right) and rotate_deg (relative to the center), you can do something like below : import matplotlib.pyplot as plt import matplotlib.transforms as mtransforms from matplotlib.animation import FuncAnimation img = plt.imread("mpl_log.png") W, H, *_ = img.shape fig, ax = plt.subplots(figsize=(10, 5)) # feel free to readapt.. ax.grid() ax.patch.set_facecolor("whitesmoke") aximg = ax.imshow(img, aspect="equal") ax.set(xlim=(-W // 2, W * 7), ylim=(-H, H * 2)) def move(frame): transfo = mtransforms.Affine2D() ( transfo # centered rotation .translate(-W / 2, -H / 2) .rotate_deg(frame % 190) .translate(W / 2, H / 2) # translation to the right .translate(frame, 0) ) aximg.set_transform(transfo + ax.transData) return (aximg,) anim = FuncAnimation(fig, move, frames=range(0, W * 6, 100)) Output (fps=10) : | 3 | 0 |
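A small usage note (an addition, not from the answer): to produce a file like the GIF the answer refers to, the animation can be saved with the Pillow writer; the filename here is arbitrary.

anim.save("rotate_translate.gif", writer="pillow", fps=10)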
78,721,966 | 2024-7-8 | https://stackoverflow.com/questions/78721966/polars-compute-variance-row-wise | I have the following dataframe df = pl.DataFrame({ "col1": [1, 2, 3], "col2": [4, 5, 6], "col3": [7, 8, 9], "col4": ["a", "v", "b"], }) I'd like to add a column containing the variance for all columns except col4. So far I've found this, which could be a workaround for the lack of horizontal computations in polars. df = ( (_df := df.with_row_index('i')) .join( _df.melt('i').group_by('i').agg(pl.col('value').var()), on='i' ) .sort('i') .drop('i') ) However this only works when all columns are numerical. Is there a way to exclude col4 but still have it in the final dataframe? | selectors to filter out non-numeric columns. concat_list() to get the chosen columns into a list. list.var() to calculate the variance. import polars.selectors as cs df.with_columns(var = pl.concat_list(cs.numeric()).list.var())
┌──────┬──────┬──────┬──────┬─────┐
│ col1 ┆ col2 ┆ col3 ┆ col4 ┆ var │
│ ---  ┆ ---  ┆ ---  ┆ ---  ┆ --- │
│ i64  ┆ i64  ┆ i64  ┆ str  ┆ f64 │
╞══════╪══════╪══════╪══════╪═════╡
│ 1    ┆ 4    ┆ 7    ┆ a    ┆ 9.0 │
│ 2    ┆ 5    ┆ 8    ┆ v    ┆ 9.0 │
│ 3    ┆ 6    ┆ 9    ┆ b    ┆ 9.0 │
└──────┴──────┴──────┴──────┴─────┘
| 2 | 4 |
78,712,904 | 2024-7-5 | https://stackoverflow.com/questions/78712904/create-a-similar-matrix-object-in-matlab-and-python | For comparison purposes, I want to create an object which would have the same shape and indexing properties in matlab and python (numpy). Let's say that on the matlab side the object would be : arr_matlab = cat(4, ... cat(3, ... [ 1, 2; 3, 4; 5, 6], ... [ 7, 8; 9, 10; 11, 12], ... [ 13, 14; 15, 16; 17, 18], ... [ 20, 21; 22, 23; 24, 25]), ... cat(3, ... [ 26, 27; 28, 29; 30, 31], ... [ 32, 33; 34, 35; 36, 37], ... [ 38, 39; 40, 41; 42, 43], ... [ 44, 45; 46, 47; 48, 49]), ... cat(3, ... [ 50, 51; 52, 53; 54, 55], ... [ 56, 57; 58, 59; 60, 61], ... [ 62, 63; 64, 65; 66, 67], ... [ 68, 69; 70, 71; 72, 73]), ... cat(3, ... [ 74, 75; 76, 77; 78, 79], ... [ 80, 81; 82, 83; 84, 85], ... [ 86, 87; 88, 89; 90, 91], ... [ 92, 93; 94, 95; 96, 97]), ... cat(3, ... [ 98, 99; 100, 101; 102, 103], ... [104, 105; 106, 107; 108, 109], ... [110, 111; 112, 113; 114, 115], ... [116, 117; 118, 119; 120, 121])); K>> size(arr_matlab) ans = 3 2 4 5 K>> arr_matlab(1, 2, 1 ,1) ans = 2 size(arr_matlab) should be identical to arr_python.shape and indexing should give the same result (same result for arr_python[0,1,0,0] and arr_matlab(1,2,1,1) for example). For the moment I can't do it. data = np.array([ ...: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 25, ...: 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, ...: 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, ...: 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, ...: 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121 ...: ]) ...: ...: # Reshape to 3x2x4x5 ...: arr_python = data.reshape((3, 2, 4, 5), order='F') In [138]: arr_python.shape Out[138]: (3, 2, 4, 5) arr_python[0,1,0,0] Out[143]: 4 | This is what you're trying to achieve in Python and MatLab: MatLab arr_matlab = reshape(1:120, [3, 2, 4, 5]); L = size(arr_matlab); disp(L); disp(arr_matlab(1, 2, 1, 1)); Prints 3 2 4 5 4 Python import numpy as np data = np.arange(1, 120 + 1) arr_python = data.reshape((3, 2, 4, 5), order='F') print(arr_python.shape) print(arr_python[0, 1, 0, 0]) Prints (3, 2, 4, 5) 4 Comments: Note that Python's end range is not inclusive. You could simplify data = np.array(list(range(1, 120 + 1))) to data = np.arange(1, 120 + 1) in the Python version. Pythonβs F ordering is reversed from its C ordering. But this doesnβt match MATLABβs ordering. If Pythonβs C ordering is indexed as a,b,c,d, then MATLABβs ordering is c,d,b,a. Itβs weird, I know. Test your code with more than that one indexing operation, you might find discrepancies. β Cris Luengo OK, my comment above relates to the interpretation of axes in MATLAB and Python, but not about the storage order, which is what this question is about. I did test the indexing for this solution, and it does match. β Cris Luengo | 2 | 4 |
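Following up on the comment about testing more than one index, here is a Python-side check (an illustration, not from the answer) that the order='F' layout gives the column-major linear indexing that MATLAB's reshape(1:120, [3, 2, 4, 5]) produces:

import numpy as np
from itertools import product

arr = np.arange(1, 121).reshape((3, 2, 4, 5), order='F')
for i, j, k, l in product(range(3), range(2), range(4), range(5)):
    # MATLAB's arr_matlab(i+1, j+1, k+1, l+1) holds this column-major linear index
    assert arr[i, j, k, l] == 1 + i + 3*j + 3*2*k + 3*2*4*l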
78,721,135 | 2024-7-8 | https://stackoverflow.com/questions/78721135/is-it-possible-to-dynamically-set-the-label-in-cypher-with-unwind-without-using | Here's my query: def create_entity_nodes(tx, report_id, entities): query = ( "UNWIND $entities AS entity " "MATCH (report:Report {id: $report_id}) " "CREATE (entity.name:entity.type entity), " "(report)-[:OWNS]->(entity.name) " "(entity.name)-[:BELONGS_TO]->(report)" "RETURN report" ) I previously only did it with one entity in my code below and it worked. So, I wanted to try it with a list but I'm not sure if it's possible. entity_type = entity["type"] if not entity_type: raise ValueError("Entity type is required.") entity_type = entity_type[0].upper() + entity_type[1:].lower() entity.pop("type") query = ( f"MATCH (report:Report {{id: $report_id}}) " f"CREATE ({entity['name']}:{entity_type} $props), " f"({entity['name']})-[:BELONGS_TO]->(report), " f"(report)-[:OWNS]->({entity['name']})" f"RETURN report" ) result = tx.run(query, report_id=report_id, props=entity) | No, your first query won't work. At the moment, the only way to create labels dynamically is to build the query string as you did with your second query, or by using an apoc function like apoc.create.node. | 2 | 2 |
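Since the label has to be interpolated into the query string, a cautious sketch (an illustration, not from the answer; the whitelist values and helper name are assumptions) is to validate the label before formatting, keeping everything else parameterized:

ALLOWED_LABELS = {"Person", "Company", "Location"}   # hypothetical whitelist

def create_entity_node(tx, report_id, entity):
    label = entity["type"].capitalize()
    if label not in ALLOWED_LABELS:
        raise ValueError(f"unexpected label: {label}")
    query = (
        f"MATCH (report:Report {{id: $report_id}}) "
        f"CREATE (e:{label} $props), "
        f"(report)-[:OWNS]->(e), "
        f"(e)-[:BELONGS_TO]->(report) "
        f"RETURN report"
    )
    props = {k: v for k, v in entity.items() if k != "type"}
    return tx.run(query, report_id=report_id, props=props)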
78,721,181 | 2024-7-8 | https://stackoverflow.com/questions/78721181/replace-subtrings-in-a-list-of-strings-using-dictionary-in-python | I have a list of substrings: ls = ['BLAH a b c A B C D 12 34 56', 'BLAH d A B 12 45 78', 'BLAH a/b A B C 12 45 78', 'BLAH a/ b A 12 45 78', 'BLAH a b c A B C D 12 34 99'] I want to replace the lower case substrings with an identifier: dict2 = {'a b c':'1','d':'2','a/b':'3','a/ b':'4'} I am looping over all the items in the list and trying to perform a replace using: [[ls[i].replace(k,v) for k,v in dict2.items()][0] for i in range(len(ls))] but this does not perform the correct replacement from the dictionary for each element(d, a/b, and a/ b were not replaced), and results in: ['BLAH 1 A B C D 12 34 56', 'BLAH d A B 12 45 78', 'BLAH a/b A B C 12 45 78', 'BLAH a/ b A 12 45 78', 'BLAH 1 A B C D 12 34 99'] I would like to end up with: ['BLAH 1 A B C D 12 34 56', 'BLAH 2 A B 12 45 78', 'BLAH 3 A B C 12 45 78', 'BLAH 4 A 12 45 78', 'BLAH 1 A B C D 12 34 99'] | This way seems to work for me. Here is an online test: https://www.pythonmorsels.com/p/2qmeh/ result = [] for s in ls: for k, v in dict2.items(): s = s.replace(k, v) result.append(s) print(result) | 2 | 5 |
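An alternative worth noting (not part of the accepted answer): build one regex from the keys so each string is scanned in a single pass, trying longer keys first so 'a b c' wins over any shorter overlapping key:

import re

pattern = re.compile("|".join(
    re.escape(k) for k in sorted(dict2, key=len, reverse=True)))

result = [pattern.sub(lambda m: dict2[m.group(0)], s) for s in ls]
print(result)
# ['BLAH 1 A B C D 12 34 56', 'BLAH 2 A B 12 45 78',
#  'BLAH 3 A B C 12 45 78', 'BLAH 4 A 12 45 78', 'BLAH 1 A B C D 12 34 99']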
78,720,295 | 2024-7-8 | https://stackoverflow.com/questions/78720295/how-to-efficiently-set-column-values-based-on-multiple-other-columns | I have a dataframe that contains "duplicated" data in all columns but one called source. I match these records one to one per source into groups. Example data for such dataframe: id,str_id,partition_number,source,type,state,quantity,price,m_group,m_status 1,s1_1,111,1,A,1,10,100.0,,0 2,s1_2,111,1,A,1,10,100.0,,0 3,s1_3,222,1,B,2,20,150.0,,0 4,s1_4,333,1,C,1,30,200.0,,0 5,s1_5,111,1,A,1,10,100.0,,0 6,s1_6,111,1,A,1,10,100.0,,0 7,s2_1,111,5,A,1,10,100.0,,0 8,s2_2,111,5,A,1,10,100.0,,0 9,s2_3,111,5,A,1,10,100.0,,0 10,s2_4,222,5,B,2,20,150.0,,0 11,s2_5,444,5,D,1,40,250.0,,0 12,s3_1,111,6,A,1,10,100.0,,0 13,s3_2,111,6,A,1,10,100.0,,0 14,s3_3,111,6,A,1,10,100.0,,0 15,s3_4,222,6,B,2,20,150.0,,0 16,s3_5,444,6,D,1,40,250.0,,0 17,s3_6,333,6,C,1,30,200.0,,0 Loaded into dataframe: βββββββ¬βββββββββββ¬βββββββββββ¬βββββββββββ¬βββββββββ¬βββββββ¬βββββββββββ¬βββββββββββ¬βββββββββββ¬βββββββββββ β id β str_id β part_ β source β type β stat β quantity β price β m_group β m_status β β --- β β number β β --- β --- β --- β --- β β β β i64 β --- β --- β --- β str β i64 β i64 β f64 β --- β --- β β β str β str β i64 β β β β β β β βββββββͺβββββββββββͺβββββββββββͺβββββββββββͺβββββββͺβββββββͺβββββββββββͺββββββββββͺββββββββββͺβββββββββββ‘ β 1 β s1_1 β 111 β 1 β A β 1 β 10. β 100.0000 β [] β [] β β β β β β β β β 000 β β β β 2 β s1_2 β 111 β 1 β A β 1 β 10. β 100.0000 β [] β [] β β β β β β β β β 000 β β β β 3 β s1_3 β 222 β 1 β B β 2 β 20. β 150.0000 β [] β [] β β β β β β β β β 000 β β β β 4 β s1_4 β 333 β 1 β C β 1 β 30. β 200.0000 β [] β [] β β β β β β β β β 000 β β β β 5 β s1_5 β 111 β 1 β A β 1 β 10. β 100.0000 β [] β [] β β β β β β β β β 000 β β β β 6 β s1_6 β 111 β 1 β A β 1 β 10. β 100.0000 β [] β [] β β β β β β β β β 000 β β β β 7 β s2_1 β 111 β 5 β A β 1 β 10. β 100.0000 β [] β [] β β β β β β β β β 000 β β β β 8 β s2_2 β 111 β 5 β A β 1 β 10. β 100.0000 β [] β [] β β β β β β β β β 000 β β β β 9 β s2_3 β 111 β 5 β A β 1 β 10. β 100.0000 β [] β [] β β β β β β β β β 000 β β β β 10 β s2_4 β 222 β 5 β B β 2 β 20. β 150.0000 β [] β [] β β β β β β β β β 000 β β β β 11 β s2_5 β 444 β 5 β D β 1 β 40. β 250.0000 β [] β [] β β β β β β β β β 000 β β β β 12 β s3_1 β 111 β 6 β A β 1 β 10. β 100.0000 β [] β [] β β β β β β β β β 000 β β β β 13 β s3_2 β 111 β 6 β A β 1 β 10. β 100.0000 β [] β [] β β β β β β β β β 000 β β β β 14 β s3_3 β 111 β 6 β A β 1 β 10. β 100.0000 β [] β [] β β β β β β β β β 000 β β β β 15 β s3_4 β 222 β 6 β B β 2 β 20. β 150.0000 β [] β [] β β β β β β β β β 000 β β β β 16 β s3_5 β 444 β 6 β D β 1 β 40. β 250.0000 β [] β [] β β β β β β β β β 000 β β β β 17 β s3_6 β 333 β 6 β C β 1 β 30. 
β 200.0000 β [] β [] β β β β β β β β β 000 β β β βββββββ΄βββββββββββ΄βββββββββββ΄βββββββββββ΄βββββββββ΄βββββββ΄βββββββββββ΄βββββββββββ΄βββββββββββ΄βββββββββββ After I match these, I have an output dataframe that contains three columns of [list] type that aggregete the ids, str_ids and sources into groups of "duplicated" records: βββββββββββββββ¬βββββββββββββββββββββββββββ¬βββββββββββββββββ β id β str_id β source β β --- β --- β --- β β list[i64] β list[str] β list[i64] β βββββββββββββββͺβββββββββββββββββββββββββββͺβββββββββββββββββ‘ β [5, 9, 14] β ["s1_5", "s2_3", "s3_3"] β [1, 5, 6] β β [2, 8, 13] β ["s1_2", "s2_2", "s3_2"] β [1, 5, 6] β β [6] β ["s1_6"] β [1] β β [3, 10, 15] β ["s1_3", "s2_4", "s3_4"] β [1, 5, 6] β β [1, 7, 12] β ["s1_1", "s2_1", "s3_1"] β [1, 5, 6] β β [11, 16] β ["s2_5", "s3_5"] β [5, 6] β β [4, 17] β ["s1_4", "s3_6"] β [1, 6] β βββββββββββββββ΄βββββββββββββββββββββββββββ΄βββββββββββββββββ What's the most optimal way to either: update the values for m_status columns in original dataframe, for example, for every record that has a group of size at least 2, set the value of m_status to values of opposing sources if source == 1, else set the value of m_status to value of 1 if there is source 1 in the group. so the outcome would be: βββββββ¬βββββββββββ¬βββββββββββ¬βββββββββββ¬βββββββββ¬βββββββ¬βββββββββββ¬βββββββββββ¬βββββββββββ¬βββββββββββ β id β str_id β part_ β source β type β stat β quantity β price β m_group β m_status β β --- β β number β β --- β --- β --- β --- β β β β i64 β --- β --- β --- β str β i64 β i64 β f64 β --- β --- β β β str β str β i64 β β β β β β β βββββββͺβββββββββββͺβββββββββββͺβββββββββββͺβββββββͺβββββββͺβββββββββββͺββββββββββͺββββββββββͺβββββββββββ‘ β 1 β s1_1 β 111 β 1 β A β 1 β 10. β 100.0000 β [] β [5,6] β β β β β β β β β 000 β β β β 2 β s1_2 β 111 β 1 β A β 1 β 10. β 100.0000 β [] β [5,6] β β β β β β β β β 000 β β β β 3 β s1_3 β 222 β 1 β B β 2 β 20. β 150.0000 β [] β [5,6] β β β β β β β β β 000 β β β β 4 β s1_4 β 333 β 1 β C β 1 β 30. β 200.0000 β [] β [6] β β β β β β β β β 000 β β β β 5 β s1_5 β 111 β 1 β A β 1 β 10. β 100.0000 β [] β [5,6] β β β β β β β β β 000 β β β β 6 β s1_6 β 111 β 1 β A β 1 β 10. β 100.0000 β [] β [] β β β β β β β β β 000 β β β β 7 β s2_1 β 111 β 5 β A β 1 β 10. β 100.0000 β [] β [1] β β β β β β β β β 000 β β β β 8 β s2_2 β 111 β 5 β A β 1 β 10. β 100.0000 β [] β [1] β β β β β β β β β 000 β β β β 9 β s2_3 β 111 β 5 β A β 1 β 10. β 100.0000 β [] β [1] β β β β β β β β β 000 β β β β 10 β s2_4 β 222 β 5 β B β 2 β 20. β 150.0000 β [] β [1] β β β β β β β β β 000 β β β β 11 β s2_5 β 444 β 5 β D β 1 β 40. β 250.0000 β [] β [] β β β β β β β β β 000 β β β β 12 β s3_1 β 111 β 6 β A β 1 β 10. β 100.0000 β [] β [1] β β β β β β β β β 000 β β β β 13 β s3_2 β 111 β 6 β A β 1 β 10. β 100.0000 β [] β [1] β β β β β β β β β 000 β β β β 14 β s3_3 β 111 β 6 β A β 1 β 10. β 100.0000 β [] β [1] β β β β β β β β β 000 β β β β 15 β s3_4 β 222 β 6 β B β 2 β 20. β 150.0000 β [] β [1] β β β β β β β β β 000 β β β β 16 β s3_5 β 444 β 6 β D β 1 β 40. β 250.0000 β [] β [] β β β β β β β β β 000 β β β β 17 β s3_6 β 333 β 6 β C β 1 β 30. β 200.0000 β [] β [1] β β β β β β β β β 000 β β β βββββββ΄βββββββββββ΄βββββββββββ΄βββββββββββ΄βββββββββ΄βββββββ΄βββββββββββ΄βββββββββββ΄βββββββββββ΄βββββββββββ create a completely new dataframe (can be in a different order) that contains the ids, str_ids and m_status in the same way as above. 
This way I wouldn't have to a lookup to original dataframe (but if I have ids it should not be expensive) and could just iterate to create a new one. My solution so far: df_out = df_out.select("id", "str_id", "source") m_status_mapping = {} for ids, str_ids, sources in df_out.iter_rows(): for i, id_ in enumerate(ids): opposite_sources = [str(rep) for j, s in enumerate(sources) if j != i] m_status_mapping[id_] = ','.join(opposite_sources) df = df_original.with_columns( pl.col("id").replace(m_status_mapping).alias("m_status") ) df = df.with_columns(pl.col("m_status").str.split(",")) df.select("id", "str_id", "m_status") Which results in following output: id str_id m_status i64 str list[str] 1 "s1_1" ["5", "6"] 2 "s1_2" ["5", "6"] 3 "s1_3" ["5", "6"] 4 "s1_4" ["6"] 5 "s1_5" ["5", "6"] 6 "s1_6" [""] 7 "s2_1" ["1", "6"] 8 "s2_2" ["1", "6"] 9 "s2_3" ["1", "6"] 10 "s2_4" ["1", "6"] 11 "s2_5" ["6"] 12 "s3_1" ["1", "5"] 13 "s3_2" ["1", "5"] 14 "s3_3" ["1", "5"] 15 "s3_4" ["1", "5"] 16 "s3_5" ["5"] 17 "s3_6" ["1"] It almost works, I get too many sources in m_status for rows with source != 1. Also it's probably terrible efficiency-wise, there must be a much better way to do this. | Using dataframe with aggregated duplicate records: list.len() to check length of the list. list.contains() to check if 1 is present. list.set_difference() to get list of non-1 elements. explode() to explode lists back into rows. when/then to conditionally create resulting m_status column. ( df.with_columns( l = pl.col.source.len(), has1 = pl.col.source.list.contains(1), excl1 = pl.col.source.list.set_difference([1]) ).explode(pl.col("id","str_id","source")) .select( pl.col("id","str_id","source"), m_status = pl.when(pl.col.l >= 2, pl.col.source == 1).then(pl.col.excl1) .when(pl.col.l >= 2, pl.col.has1).then([1]) .otherwise([]) ) .sort("id") ) βββββββ¬βββββββββ¬βββββββββ¬ββββββββββββ β id β str_id β source β m_status β β --- β --- β --- β --- β β i64 β str β i64 β list[i64] β βββββββͺβββββββββͺβββββββββͺββββββββββββ‘ β 1 β s1_1 β 1 β [6, 5] β β 2 β s1_2 β 1 β [6, 5] β β 3 β s1_3 β 1 β [6, 5] β β 4 β s1_4 β 1 β [6] β β 5 β s1_5 β 1 β [6, 5] β β 6 β s1_6 β 1 β [] β β 7 β s2_1 β 5 β [1] β β 8 β s2_2 β 5 β [1] β β 9 β s2_3 β 5 β [1] β β 10 β s2_4 β 5 β [1] β β 11 β s2_5 β 5 β [] β β 12 β s3_1 β 6 β [1] β β 13 β s3_2 β 6 β [1] β β 14 β s3_3 β 6 β [1] β β 15 β s3_4 β 6 β [1] β β 16 β s3_5 β 6 β [] β β 17 β s3_6 β 6 β [1] β βββββββ΄βββββββββ΄βββββββββ΄ββββββββββββ Just an addition, this is how you can aggregate "duplicate" records ( df .with_columns(i = pl.int_range(pl.len()).over("source","partition_number")) .group_by("i","partition_number", maintain_order=True) .agg("id","str_id","source") .drop("i","partition_number") ) βββββββββββββββ¬βββββββββββββββββββββββββββ¬ββββββββββββ β id β str_id β source β β --- β --- β --- β β list[i64] β list[str] β list[i64] β βββββββββββββββͺβββββββββββββββββββββββββββͺββββββββββββ‘ β [1, 7, 12] β ["s1_1", "s2_1", "s3_1"] β [1, 5, 6] β β [2, 8, 13] β ["s1_2", "s2_2", "s3_2"] β [1, 5, 6] β β [3, 10, 15] β ["s1_3", "s2_4", "s3_4"] β [1, 5, 6] β β [4, 17] β ["s1_4", "s3_6"] β [1, 6] β β [5, 9, 14] β ["s1_5", "s2_3", "s3_3"] β [1, 5, 6] β β [6] β ["s1_6"] β [1] β β [11, 16] β ["s2_5", "s3_5"] β [5, 6] β βββββββββββββββ΄βββββββββββββββββββββββββββ΄ββββββββββββ Using this aggregation, you can also calculate m_status over groups, without aggregating Something like this: ( df .with_columns(i = pl.int_range(pl.len()).over("source","partition_number")) .with_columns( l = 
pl.len().over("partition_number","i"), has1 = (pl.col.source == 1).any().over("partition_number","i"), excl1 = pl.col.source.filter(pl.col.source != 1).over("partition_number","i", mapping_strategy="join") ) .select( pl.col("id","str_id","source"), m_status = pl.when(pl.col.l >= 2, pl.col.source == 1).then(pl.col.excl1) .when(pl.col.l >= 2, pl.col.has1).then([1]) .otherwise([]) ) ) βββββββ¬βββββββββ¬βββββββββ¬ββββββββββββ β id β str_id β source β m_status β β --- β --- β --- β --- β β i64 β str β i64 β list[i64] β βββββββͺβββββββββͺβββββββββͺββββββββββββ‘ β 1 β s1_1 β 1 β [5, 6] β β 2 β s1_2 β 1 β [5, 6] β β 3 β s1_3 β 1 β [5, 6] β β 4 β s1_4 β 1 β [6] β β 5 β s1_5 β 1 β [5, 6] β β 6 β s1_6 β 1 β [] β β 7 β s2_1 β 5 β [1] β β 8 β s2_2 β 5 β [1] β β 9 β s2_3 β 5 β [1] β β 10 β s2_4 β 5 β [1] β β 11 β s2_5 β 5 β [] β β 12 β s3_1 β 6 β [1] β β 13 β s3_2 β 6 β [1] β β 14 β s3_3 β 6 β [1] β β 15 β s3_4 β 6 β [1] β β 16 β s3_5 β 6 β [] β β 17 β s3_6 β 6 β [1] β βββββββ΄βββββββββ΄βββββββββ΄ββββββββββββ | 4 | 1 |
78,720,213 | 2024-7-8 | https://stackoverflow.com/questions/78720213/putting-polars-api-extensions-in-dedicated-module-how-to-import-from-target-mo | I want to extend polars API as described in the docs, like this: @pl.api.register_expr_namespace("greetings") class Greetings: def __init__(self, expr: pl.Expr): self._expr = expr def hello(self) -> pl.Expr: return (pl.lit("Hello ") + self._expr).alias("hi there") def goodbye(self) -> pl.Expr: return (pl.lit("SayΕnara ") + self._expr).alias("bye") If I were to put the actual registration in a dedicated module (extensions.py), how am I supposed to import the methods from the respective class from within another module? Going with the dataframe example in the docs, let's say I put the following code in a module called target.py. I need to make the greetings-namespace available. How can I do it, i.e. how excactly should the import look like? pl.DataFrame(data=["world", "world!", "world!!"]).select( [ pl.all().greetings.hello(), pl.all().greetings.goodbye(), ] ) | You could just import extensions(assuming that the file is relative) in your target.py file to register the greetings namespace. import polars as pl import extensions # noqa: F401 pl.DataFrame(data=["world", "world!", "world!!"]).select( [ pl.all().greetings.hello(), pl.all().greetings.goodbye(), ] ) See also: polars-xdt | 2 | 2 |
78,705,398 | 2024-7-4 | https://stackoverflow.com/questions/78705398/break-the-curve-wherever-there-is-a-significant-change-in-the-curve | Wherever there's a significant change in the curve, I need to break the curve with a break thickness of just 1 pixel (I just want to break the curves into multiple parts). I have attached an image for reference. So after i read the image, i am thinning the curve and wherever there are red dots, i need to split it around that area. The first image is the input image and red dots indicate where I want the cut (the image will not actually have the red dot)/ The second image is the current output that I am getting. The third image is the unaltered image for reference. I have tried implementing the following codes: import cv2 import numpy as np from matplotlib import pyplot as plt image_path = rf'C:\Users\User\Desktop\output.png' img = cv2.imread(image_path, 0) ret, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY) binary = cv2.ximgproc.thinning(binary, thinningType=cv2.ximgproc.THINNING_GUOHALL) coords = np.column_stack(np.where(binary > 0)) def calculateAngle(p1, p2, p3): v1 = np.array(p2) - np.array(p1) v2 = np.array(p3) - np.array(p2) angle = np.arctan2(v2[1], v2[0]) - np.arctan2(v1[1], v1[0]) angle = np.degrees(angle) if angle < 0: angle += 360 return angle startBlackImg = np.zeros((binary.shape[0], binary.shape[1], 1), np.uint8) i = 1 while i < (len(coords) - 1): p1 = coords[i - 1] p2 = coords[i] p3 = coords[i + 1] i += 1 angle = calculateAngle(p1, p2, p3) if angle < 45 or angle > 315: startBlackImg[p2[0], p2[1]] = 255 else: startBlackImg[p2[0], p2[1]] = 0 cv2.namedWindow('Check', 0) cv2.imshow('Check', startBlackImg) cv2.waitKey(0) cv2.destroyAllWindows() and the other logic is while kk < len(cutContour) - 10: xCdte = cutContour[kk][0][0] yCdte = cutContour[kk][0][1] xNextCdte = cutContour[kk + 10][0][0] yNextCdte = cutContour[kk + 10][0][1] kk += 1 if totalDistance <= 0.3048: startBlackImg[yCdte, xCdte] = np.array([255, 255, 255]) startBlackImg[yNextCdte, xNextCdte] = np.array([255, 255, 255]) else: if (abs(xCdte - xNextCdte) < 10 and abs(yCdte - yNextCdte) >= 10) or (abs(xCdte - xNextCdte) >= 10 and abs(yCdte - yNextCdte) < 10): startBlackImg[yCdte, xCdte] = np.array([255, 255, 255]) startBlackImg[yNextCdte, xNextCdte] = np.array([255, 255, 255]) else: startBlackImg[yCdte, xCdte] = np.array([0, 0, 0]) kk += 10 So far I am not getting what I want. Its breaking at multiple points and not just where I intend it to. Is there any library or and code to do this? 
| Hough Transform https://en.wikipedia.org/wiki/Hough_transform K-means iterations with different K's - find how many line groups K-means again with the right K Pixel clustering by min distance to lines import math import cv2 as cv import numpy as np import matplotlib.pyplot as plt orig_im = cv.imread("/home/ophir/temp/stackoverflow2.png",cv.IMREAD_GRAYSCALE) # use Hough Transform to get lots of straight lines lines = cv.HoughLines(orig_im, 1, np.pi/180, 30); im = cv.cvtColor(orig_im, cv.COLOR_GRAY2BGR) im2 = im.copy() im3 = im.copy() plt.figure() plt.imshow(im) # draw all straight lines on image rho_vals = [] theta_vals = [] for line in lines: for rho,theta in line: rho_vals.append(rho) theta_vals.append(theta) a = np.cos(theta) b = np.sin(theta) x0 = a*rho y0 = b*rho x1 = int(x0 + 1000*(-b)) y1 = int(y0 + 1000*(a)) x2 = int(x0 - 1000*(-b)) y2 = int(y0 - 1000*(a)) cv.line(im2,(x1,y1),(x2,y2),(0,0,255),2) rho_vals = np.array(rho_vals) rho_vals = np.expand_dims(rho_vals, axis=0) theta_vals = np.array(theta_vals) theta_vals = np.expand_dims(theta_vals, axis=0) Z = np.vstack((rho_vals,theta_vals)).T Z = np.float32(Z) # use K-means to cluster all straight lines to groups # I don't know how many groups, so I check several K's criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 10, 1.0) compactness = [] for i in range(2, 9): ret,label,center=cv.kmeans(Z,i + 1,None,criteria,10,cv.KMEANS_RANDOM_CENTERS) compactness.append(ret) compactness = np.array(compactness) # choose the right k, where the compactness isn't getting any better derivative = compactness[1:] - compactness[:-1] amax = np.argmax(derivative > -800) + 2 # do K-means again, this time with the right K ret,label,center=cv.kmeans(Z, amax,None,criteria,10,cv.KMEANS_RANDOM_CENTERS) # draw the centers of thr clusters on the lines parameters graph lines = [] for rho, theta in center: a = np.cos(theta) b = np.sin(theta) x0 = a*rho y0 = b*rho x1 = int(x0 + 1000*(-b)) y1 = int(y0 + 1000*(a)) x2 = int(x0 - 1000*(-b)) y2 = int(y0 - 1000*(a)) lines.append((x1,y1,x2,y2)) cv.line(im3,(x1,y1),(x2,y2),(0,0,255),2) # cluster the pixels in the original image by the minimum distance to a line pixels = cv.findNonZero(orig_im) labels = [] for pixel in pixels: x0, y0 = pixel[0] distances = [] for line in lines: x1, y1, x2, y2 = line # https://en.wikipedia.org/wiki/Distance_from_a_point_to_a_line dist = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1) / math.sqrt((y2 - y1)**2 + (x2 - x1)**2) distances.append(dist) labels.append(np.argmin(np.array(distances))) im4 = np.zeros(im2.shape) colors = [[0, 0, 255], [0, 255, 0], [255, 0, 0], [255, 255, 0], [255, 0, 255], [0, 255, 255]] # assign different color to each pixel by the label of the clustering for pixel, label in zip(pixels, labels): x, y = pixel[0] color = colors[label] im4[y,x, 0] = color[0] im4[y,x, 1] = color[1] im4[y,x, 2] = color[2] plt.figure() plt.plot(list(range(2,9)), compactness) plt.scatter(amax, compactness[amax - 2], c = 'r') plt.ylabel("compactness") plt.xlabel("K") plt.title("K-means compactness") plt.figure() plt.imshow(im2) plt.figure() plt.imshow(im3) plt.figure() plt.imshow(im4) plt.figure() plt.scatter(rho_vals, theta_vals) plt.scatter(center[:,0],center[:,1],s = 80,c = 'y', marker = 's') plt.xlabel("rho") plt.ylabel("theta") plt.title("lines parameters") plt.show() It's not perfect, there are still some issues, but you get the idea. | 3 | 2 |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.