Dataset columns:

- question_id: int64 (values 59.5M to 79.4M)
- creation_date: string (lengths 8 to 10)
- link: string (lengths 60 to 163)
- question: string (lengths 53 to 28.9k)
- accepted_answer: string (lengths 26 to 29.3k)
- question_vote: int64 (values 1 to 410)
- answer_vote: int64 (values -9 to 482)
75,593,863
2023-2-28
https://stackoverflow.com/questions/75593863/python-match-case-continue-to-match-next-case-even-matched
```python
key: Literal['all', 'a', 'b'] = 'a'

def do_a(): pass
def do_b(): pass

match key:
    case 'a':
        do_a()
    case 'b':
        do_b()
    case 'all':
        do_a()
        do_b()
```

Is there any way to simplify the code so I can remove the case 'all'? Something like:

```python
match key:
    case 'a' | 'all':
        do_a()
    case 'b' | 'all':
        do_b()
```
Example Setup:

```python
from typing import Literal

key: Literal['all', 'a', 'b']

def do_a(): print('do_a')
def do_b(): print('do_b')
```

Solution I: You can just use if:

```python
if key in ('a', 'all'):
    do_a()
if key in ('b', 'all'):
    do_b()
```

Solution II: You could use a function mapper, like so:

```python
function_mapper = {
    'all': (do_a, do_b),
    'a': (do_a,),
    'b': (do_b,),
}

key = 'all'
for func in function_mapper[key]:
    func()
```
3
2
75,587,316
2023-2-28
https://stackoverflow.com/questions/75587316/subtract-values-of-single-row-polars-frame-from-multiple-row-polars-frame
Let's say I have a polars dataframe like this in Python:

```python
newdata = pl.DataFrame({
    'A': [1, 2, 3, 4],
    'B': [5, 6, 7, 8],
    'C': [9, 10, 11, 12],
    'D': [13, 14, 15, 16]
})
```

And I want to subtract from every value in every column the corresponding value from another frame:

```python
baseline = pl.DataFrame({
    'A': [1],
    'B': [2],
    'C': [3],
    'D': [4]
})
```

In pandas and numpy, the baseline frame is automagically broadcast to the size of newdata, and I can just do:

```python
data = newdata - baseline
```

But that doesn't work in polars. So what is the cleanest way to achieve this in polars?
Here a loop over the column names can be used:

```python
newdata.with_columns(
    [pl.col(c) - baseline[c] for c in newdata.columns]
)
```
3
3
75,566,105
2023-2-25
https://stackoverflow.com/questions/75566105/authenticate-a-get-request-to-google-play-purchase-api-with-service-account-pyth
I need to verify purchases of my Android app from my AWS Lambda in Python. I have seen many posts on how to do so, plus the documentation, and here is the code I have written:

```python
url = f"{google_verify_purchase_endpoint}/{product_id}/tokens/{token}"
response = requests.get(url=url)
data = response.json()
logging.info(f"Response from Google Play API : {data}")
```

When I do so, it returns a 401 (not allowed) status code. Alright, I have created a service account to allow the request with OAuth, but how can I use it to authorize the request? Unfortunately I can't use the google-api-python-client as mentioned here, which is too big for my AWS Lambda maximum size of 250MB unzipped package. So my question is: how can I use the service account with a simple GET request, or how can I authenticate automatically without the google-api-python-client? Thanks in advance.
Pre-Requisites and Assumptions

It looks like you have already set up a service account but need a hand with obtaining a JSON Web Token (JWT) before going after the verify_purchase endpoint. Generating a JWT is documented here. You should read this to understand what the following code is doing. I note that you have a storage constraint, but you are almost definitely going to need an additional library to deal with the cryptographic aspect of the token generation. PyJWT is reasonably small (including its requirements). Let's install this first:

```
pip3 install PyJWT
```

Obtaining a Private Key

Next, let's grab our Service Account Private Key from Google Cloud.

1. Open your project in Google Cloud.
2. Go to "APIs & Services".
3. Go to "Credentials".
4. Click "Service Account".
5. Find your Service Account and select "Manage Keys".
6. Select "Create new key" from the "ADD KEY" drop-down.
7. Select JSON.
8. Save this JSON file to a secure location accessible by your script.

Putting it to Use

Now we can make a start on a Python script. Here is an example to get you started (you should review this before putting it into production):

```python
import time
import json
import requests
import jwt

claim_start = int(time.time())

# Generate a claim header, this will always be the same.
header = {"alg": "RS256", "typ": "JWT"}

# This is a claim for 1hr access to the android publisher API.
# Replace <<EMAIL ADDRESS OF YOUR SERVICE ACCOUNT>> as appropriate.
claims = {
    "iss": "<<EMAIL ADDRESS OF YOUR SERVICE ACCOUNT>>",
    "scope": "https://www.googleapis.com/auth/androidpublisher",
    "aud": "https://oauth2.googleapis.com/token",
    "exp": claim_start + 3600,
    "iat": claim_start
}

with open("<<PATH TO JSON FILE>>", 'r') as f:
    json_key = json.load(f)

key = json_key.get('private_key')

# Generate a signed jwt
token = jwt.encode(claims, key, headers=header, algorithm="RS256")

# Mandatory parameters required by GCloud.
params = {
    "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
    "assertion": token
}

r = requests.post("https://www.googleapis.com/oauth2/v4/token", data=params)
r.raise_for_status()

access_token = r.json()['access_token']
```

Using our Access Token

The access_token can then be used as a JWT bearer for your project. Please note, I have changed the token variable to subscription_token from your original post to make it clear that it has nothing to do with this authentication mechanism. (It refers to "the token provided to the user's device when the subscription was purchased", as per the documentation that you provided.)

```python
import logging

headers = {
    "Host": "www.googleapis.com",
    "Authorization": "Bearer " + access_token,
    "Content-Type": "application/json"
}

google_verify_purchase_endpoint = ""
product_id = ""
subscription_token = ""

url = f"{google_verify_purchase_endpoint}/{product_id}/tokens/{subscription_token}"
response = requests.get(url=url, headers=headers)
data = response.json()
logging.info(f"Response from Google Play API : {data}")
```

Closing Remarks

This is merely meant to serve as an introduction to authenticating against the Google Cloud APIs without the SDK. Ultimately, you are responsible for the security of your project, and anyone else reading this should probably use the SDK where possible. I also suggest that you neaten the above code up into functions to call upon where appropriate. Good luck with the rest of your project!
5
2
75,563,361
2023-2-25
https://stackoverflow.com/questions/75563361/meaning-of-pop-style-stack-without-without-push-in-kivy
What does it mean when Kivy gives this warning message? What can cause it?

```
[WARNING] [Label       ] pop style stack without push
```
I don't see any documentation about this message specifically, but here's the relevant bit from the source code that handles BBCode-style text markup:

```python
def _pop_style(self, k):
    if k not in self._style_stack or len(self._style_stack[k]) == 0:
        Logger.warning('Label: pop style stack without push')
        return
    v = self._style_stack[k].pop()
    self.options[k] = v
```

The BBCode format lets you, for instance, make a word italic like [i]this[/i]. As the documentation puts it: "A tag is defined as [tag], and should have a corresponding [/tag] closing tag." The wording of the error message is probably not the most user-friendly. It refers to the fact that the tags are handled internally as a stack; an opening tag is pushed onto the stack, and a closing tag is popped off of it. The error means you're trying to pop a style that's not already pushed onto the stack; in other words, you've closed a tag that's not already open.
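That push/pop behavior can be modeled in a few lines of plain Python (a toy model of the stack mechanism, not Kivy's actual implementation):

```python
def process(tags):
    """Toy model: opening tags push a style, closing tags pop it."""
    stack = {}       # style name -> stack of pushed values
    warnings = []
    for tag in tags:
        if tag.startswith('/'):              # closing tag, e.g. '/i'
            k = tag[1:]
            if not stack.get(k):
                # closing a tag that was never opened -> Kivy's warning
                warnings.append(f'pop style stack without push: [{tag}]')
            else:
                stack[k].pop()
        else:                                # opening tag, e.g. 'i'
            stack.setdefault(tag, []).append(True)
    return warnings

print(process(['i', '/i']))   # balanced tags: no warnings
print(process(['/i']))        # unmatched closing tag triggers the warning
```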
3
3
75,584,837
2023-2-27
https://stackoverflow.com/questions/75584837/pass-returned-value-from-a-previous-python-operator-task-to-another-in-airflow
I am a new user of Apache Airflow. I am building a DAG like the following to schedule tasks:

```python
def add():
    return 1 + 1

def multiply(a):
    return a * 999

dag_args = {
    'owner': 'me',
    'depends_on_past': False,
    'start_date': datetime(2023, 2, 27),
    'email': ['[email protected]'],
    'email_on_failure': True,
    'email_on_retry': True,
    'retries': 1,
    'retry_delay': timedelta(minutes=3)}

with DAG(
    dag_id='dag',
    start_date=datetime(2023, 2, 27),
    default_args=dag_args,
    schedule_interval='@once',
    end_date=None,) as dag:
    t1 = PythonOperator(task_id="t1", python_callable=add, dag=dag)
    t2 = PythonOperator(task_id="t2", python_callable=multiply, dag=dag)
```

As you can see, t2 depends on the result of t1. I wonder whether there is any way for me to pass the return result from t1 directly to t2. I am using Apache Airflow 2.5.1 and Python 3.9. I did some research on XCom, and found that all results of Airflow tasks are stored there, and can be accessed via code like:

```python
task_instance = kwargs['t1']
task_instance.xcom_pull(task_ids='t1')
```
Your DAG can be simplified using the TaskFlow API. It will handle the XCom and simplify the code.

```python
import pendulum
from airflow.decorators import dag, task

@dag(
    schedule_interval=None,
    start_date=pendulum.datetime(2023, 1, 1, tz="UTC"),
    catchup=False,
)
def taskflow_api_etl():

    @task()
    def add():
        return 1 + 1

    @task()
    def multiply(a: int):
        return a * 99

    order_data = add()
    multiply(order_data)  # multiply uses the XCom generated by add()

etl_dag = taskflow_api_etl()
```

This code will generate the DAG. When executed, the add() task generates an XCom with the value 2; the downstream multiply task reads that XCom and computes 2 * 99.
3
3
75,583,768
2023-2-27
https://stackoverflow.com/questions/75583768/tell-pip-package-to-install-build-dependency-for-its-own-install-and-all-install
I am installing a package whose dependency needs to import numpy inside its setup.py. It also needs Cython to correctly build this dependency. This dependency is scikit-learn==0.21.2. Here is the setup.py of my own package called mypkgname:

```python
from setuptools import find_packages, setup

import Cython  # to check that Cython is indeed installed
import numpy   # to check that numpy is indeed installed

setup(
    name="mypkgname",
    version="0.1.0",
    packages=find_packages("src", exclude=["tests"]),
    package_dir={"": "src"},
    install_requires=[
        "scikit-learn==0.21.2"
    ],
)
```

To make sure that numpy and Cython are available inside mypkgname's setup.py when pip installs mypkgname, I set up the pyproject.toml like this:

```toml
[build-system]
requires = ["setuptools>=40.8.0", "Cython", "numpy>=1.11.0,<=1.22.4", "wheel"]
build-backend = "setuptools.build_meta"
```

After running pip install -e ., the import numpy; import Cython in mypkgname's setup.py work, but the import Cython inside the scikit-learn==0.21.2 install does not:

```
File "/home/vvvvv/.pyenv/versions/3.8.12/envs/withingswpm04-38/lib/python3.8/site-packages/numpy/distutils/misc_util.py", line 1016, in get_subpackage
    config = self._get_configuration_from_setup_py(
File "/home/vvvvv/.pyenv/versions/3.8.12/envs/withingswpm04-38/lib/python3.8/site-packages/numpy/distutils/misc_util.py", line 958, in _get_configuration_from_setup_py
    config = setup_module.configuration(*args)
File "sklearn/utils/setup.py", line 8, in configuration
    from Cython import Tempita
ModuleNotFoundError: No module named 'Cython'
error: subprocess-exited-with-error
```

I don't understand why Cython is available for the install of my own mypkgname, but not for the install_requires packages of mypkgname. As if Cython was uninstalled(?) before installing the install_requires packages. But by checking the logs with pip install -v, it does not seem to be the case. I tried installing setuptools and Cython beforehand:

```
pip install setuptools Cython
pip install -e .
```

and it works. Indeed, it seems that scikit-learn==0.21.2 needs these two packages already installed in order to be properly installed. However, the scikit-learn version I am trying to install does not specify any Cython build requirements inside a pyproject.toml. Here is a link to the setup.py of the scikit-learn package. If I just install setuptools, it still fails with the same error as in the first example (ModuleNotFoundError: No module named 'Cython').

Notes:

- I am forced for some out-of-context reasons to use this specific version of scikit-learn.
- I am using Python 3.8.12, pip 23.0.1 and Ubuntu 22.04.
- I tried this using pyenv and virtualenv, with the same results.
- I also tried with pip install -v -U --no-cache-dir.

How can I tell mypkgname that I need numpy, Cython and setuptools installed from the beginning of the pip install until the very end? I want the numpy, Cython and setuptools packages to be available in the install of mypkgname and in every install of the install_requires packages. Since it will be a package deployed on PyPI, I don't want people to have anything other than pip and maybe setuptools already installed when running pip install mypkgname.
For building packages, pip uses build isolation, i.e. it installs build dependencies into a separate virtual environment, builds the package, and then removes the isolating venv. So the build dependencies (in your case Cython and numpy) are removed along with the isolating venv. You can disable isolation, but the better and more correct way is to declare your dependencies twice: as build dependencies and as runtime dependencies:

```python
install_requires=[
    "scikit-learn==0.21.2",
    "Cython",
    "numpy>=1.11.0,<=1.22.4",
],
```

When building your package, pip/setuptools include the list of dependencies in the wheel, and they will be automatically installed in the target environment along with your package.
3
3
75,579,904
2023-2-27
https://stackoverflow.com/questions/75579904/mkdocs-with-auto-generated-references
I am building a TensorFlow model and have a ton of functions and modules with proper docstrings. I installed MkDocs due to popular demand, and the documentation does appear to be very easy to write. Nevertheless, I don't want to manually write up the entire API reference of all the modules inside this package. I am using mkdocstrings, but I am unable to find a way to automate all of this and store it in the references section in MkDocs, as you see with documentation sites like numpy/pandas. I tried pdoc3, but it only solves one problem for me. mkgendocs was something I was hoping would work, but it requires another config file! I followed this post but it was not working for me. Any suggestions/resources on how I can autogenerate all my API docstrings into an API references page in MkDocs? Sphinx is too advanced to work with, sorry. I am trying to get my team to document more, so I need something easy to use, and MkDocs looks like the best option currently.
The solution is described in this recipe for mkdocstrings: https://mkdocstrings.github.io/recipes/#automatic-code-reference-pages
3
4
75,581,571
2023-2-27
https://stackoverflow.com/questions/75581571/in-numpy-what-is-the-difference-between-calling-ma-masked-where-and-ma-masked-a
Calling masked_array (the class constructor) and the masked_where function both seem to do exactly the same thing, in terms of being able to construct a numpy masked array given the data and mask values. When would you use one or the other?

```python
>>> import numpy as np
>>> import numpy.ma as MA
>>> vals = np.array([0,1,2,3,4,5])
>>> cond = vals > 3
>>> vals
array([0, 1, 2, 3, 4, 5])
>>> cond
array([False, False, False, False,  True,  True], dtype=bool)
>>> MA.masked_array(data=vals, mask=cond)
masked_array(data = [0 1 2 3 -- --],
             mask = [False False False False  True  True],
       fill_value = 999999)
>>> MA.masked_where(cond, vals)
masked_array(data = [0 1 2 3 -- --],
             mask = [False False False False  True  True],
       fill_value = 999999)
```

The optional argument copy to masked_where (its only documented optional argument) is also supported by masked_array, so I don't see any options that are unique to masked_where. Although the converse is not true (e.g. masked_where doesn't support dtype), I don't understand the purpose of masked_where as a separate function.
You comment: "If I call them with inconsistently shaped value and masked arrays, I get the same error message in both cases." I don't think we can help you without more details on what's different. For example, if I try the obvious inconsistency, that of length, I get different error messages:

```python
In [121]: np.ma.masked_array(vals, cond[:-1])
MaskError: Mask and data not compatible: data size is 5, mask size is 4.

In [122]: np.ma.masked_where(cond[:-1], vals)
IndexError: Inconsistent shape between the condition and the input (got (4,) and (5,))
```

The test for the where message is obvious from the code that Corralien shows. The MaskedArray class definition has this test:

```python
# Make sure the mask and the data have the same shape
if mask.shape != _data.shape:
    (nd, nm) = (_data.size, mask.size)
    if nm == 1:
        mask = np.resize(mask, _data.shape)
    elif nm == nd:
        mask = np.reshape(mask, _data.shape)
    else:
        msg = "Mask and data not compatible: data size is %i, " + \
              "mask size is %i."
        raise MaskError(msg % (nd, nm))
```

I'd expect the same message only if the shapes made it past the where test but were caught by the class's test. If so, that should be obvious in the full error traceback. Here's an example that fails on the where, but passes the base:

```python
In [138]: np.ma.masked_where(cond[:,None], vals)
IndexError: Inconsistent shape between the condition and the input (got (5, 1) and (5,))

In [139]: np.ma.masked_array(vals, cond[:,None])
Out[139]:
masked_array(data=[--, 1, --, 3, --],
             mask=[ True, False,  True, False,  True],
       fill_value=999999)
```

The base class can handle cases where the cond differs in shape but matches in size (total number of elements); it tries to reshape it. A scalar cond passes both, though the exact test differs. Based on my reading of the code, I can't conceive of a difference that passes the where, but not the base. All the masked-array code is Python-readable (see the link in the other answer).
While there is one base class definition, there are a number of constructor or helper functions, as the masked_where docs make clear. I wouldn't worry too much about which function(s) to use, especially if you aren't trying to push the boundaries of what's logical. Masked arrays, while a part of numpy for a long time, do not get a whole lot of use, at least judging by the relative lack of SO questions. I suspect pandas has largely replaced them when dealing with data that can have missing values (e.g. time series).
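The shape-handling difference described above is easy to verify directly. A short runnable check (using a 6-element array, so the (6, 1) mask has the same size as the data):

```python
import numpy as np

vals = np.array([0, 1, 2, 3, 4, 5])
cond = vals > 3

# Identical results when the mask shape matches the data shape:
a = np.ma.masked_array(data=vals, mask=cond)
b = np.ma.masked_where(cond, vals)
print(a.mask.tolist() == b.mask.tolist())  # True

# masked_array quietly reshapes a size-compatible (6, 1) mask ...
m = np.ma.masked_array(vals, cond[:, None])
print(m.mask.tolist())  # [False, False, False, False, True, True]

# ... while masked_where rejects the same mask with an IndexError.
try:
    np.ma.masked_where(cond[:, None], vals)
except IndexError as e:
    print('masked_where:', e)
```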
3
2
75,572,878
2023-2-26
https://stackoverflow.com/questions/75572878/shading-regions-inside-an-mplfinance-chart
I am using matplotlib v 3.7.0, mplfinance version '0.12.9b7', and Python 3.10. I am trying to shade regions of a plot, and although my logic seems correct, the shaded areas are not being displayed on the plot. This is my code:

```python
import yfinance as yf
import mplfinance as mpf
import pandas as pd

# Download the stock data
df = yf.download('TSLA', start='2022-01-01', end='2022-03-31')

# Define the date ranges for shading
red_range = ['2022-01-15', '2022-02-15']
blue_range = ['2022-03-01', '2022-03-15']

# Create a function to shade the chart regions
def shade_region(ax, region_dates, color):
    region_dates.sort()
    start_date = region_dates[0]
    end_date = region_dates[1]
    # plot vertical lines
    ax.axvline(pd.to_datetime(start_date), color=color, linestyle='--')
    ax.axvline(pd.to_datetime(end_date), color=color, linestyle='--')
    # create fill
    xmin, xmax = ax.get_xlim()
    ymin, ymax = ax.get_ylim()
    ax.fill_between(pd.date_range(start=start_date, end=end_date),
                    ymin, ymax, alpha=0.2, color=color)
    ax.set_xlim(xmin, xmax)
    ax.set_ylim(ymin, ymax)

# Plot the candlestick chart with volume
fig, axlist = mpf.plot(df, type='candle', volume=True, style='charles',
                       title='TSLA Stock Price', ylabel='Price ($)',
                       ylabel_lower='Shares\nTraded', figratio=(2,1),
                       figsize=(10,5), tight_layout=True, returnfig=True)

# Get the current axis object
ax = axlist[0]

# Shade the regions on the chart
shade_region(ax, red_range, 'red')
shade_region(ax, blue_range, 'blue')

# Show the plot
mpf.show()
```

Why are the selected regions not being shaded, and how do I fix this?
The problem is that, when show_nontrading=False (which is the default when not specified), the x-axis values are not dates as you would expect. Thus the vertical lines and the fill_between that you are specifying by date actually end up way off the chart. The simplest solution is to set show_nontrading=True. Using your code:

```python
fig, axlist = mpf.plot(df, type='candle', volume=True, style='charles',
                       title='TSLA Stock Price', ylabel='Price ($)',
                       ylabel_lower='Shares\nTraded', figratio=(2,1),
                       figsize=(10,5), tight_layout=True, returnfig=True,
                       show_nontrading=True)

# Get the current axis object
ax = axlist[0]

# Shade the regions on the chart
shade_region(ax, red_range, 'red')
shade_region(ax, blue_range, 'blue')

# Show the plot
mpf.show()
```

There are two other solutions to the problem that allow you to leave show_nontrading=False if that is your preference.

1. The first solution is to use mplfinance's kwargs, and not use returnfig. Here is the documentation: vlines kwarg, fill_between kwarg. This is the preferred solution, since it is always a good idea to let mplfinance do all manipulation of Axes objects, unless there is something that you cannot accomplish otherwise. And here is an example modifying your code:

```python
red_range = ['2022-01-15', '2022-02-15']
blue_range = ['2022-03-01', '2022-03-15']

vline_dates = red_range + blue_range
vline_colors = ['red', 'red', 'blue', 'blue']
vline_dict = dict(vlines=vline_dates, colors=vline_colors, line_style='--')

ymax = max(df['High'].values)
ymin = min(df['Low'].values)

# create a dataframe from the datetime index
# for use in generating the fill_between `where` values:
dfdates = df.index.to_frame()

# generate red boolean where values:
where_values = pd.notnull(dfdates[(dfdates >= red_range[0]) & (dfdates <= red_range[1])].Date.values)
# put together the red fill_between specification:
fb_red = dict(y1=ymin, y2=ymax, where=where_values, alpha=0.2, color='red')

# generate blue boolean where values:
where_values = pd.notnull(dfdates[(dfdates >= blue_range[0]) & (dfdates <= blue_range[1])].Date.values)
# put together the blue fill_between specification:
fb_blue = dict(y1=ymin, y2=ymax, where=where_values, alpha=0.2, color='blue')

# Plot the candlestick chart with volume
mpf.plot(df, type='candle', volume=True, style='charles',
         title='TSLA Stock Price', ylabel='Price ($)',
         ylabel_lower='Shares\nTraded', figratio=(2,1),
         figsize=(10,5), tight_layout=True,
         vlines=vline_dict, fill_between=[fb_red, fb_blue])
```

Notice there is a slight space between the left-most vertical line and the shaded area. That is because the date you have selected ('2022-01-15') is on a weekend (and a 3-day weekend at that). If you change the date to '2022-01-14' or '2022-01-18' it will line up exactly.

2. The last solution requires returnfig=True. This is not the recommended solution, but it does work. First, it's important to understand the following: when show_nontrading is not specified, it defaults to False, which means that, although you see datetimes displayed on the x-axis, the actual values are the row numbers of your dataframe. Therefore, in your code, instead of specifying dates, you specify the row number where that date appears. The simplest way to specify the row number is to use the function date_to_iloc(df.index.to_series(), date) as defined here:

```python
def date_to_iloc(dtseries, date):
    '''Convert a `date` to a location, given a date series w/a datetime index.
    If `date` does not exactly match a date in the series then interpolate
    between two dates.  If `date` is outside the range of dates in the
    series, then raise an exception.
    '''
    d1s = dtseries.loc[date:]
    if len(d1s) < 1:
        sdtrange = str(dtseries[0]) + ' to ' + str(dtseries[-1])
        raise ValueError('User specified line date "' + str(date) +
                         '" is beyond (greater than) range of plotted data (' + sdtrange + ').')
    d1 = d1s.index[0]
    d2s = dtseries.loc[:date]
    if len(d2s) < 1:
        sdtrange = str(dtseries[0]) + ' to ' + str(dtseries[-1])
        raise ValueError('User specified line date "' + str(date) +
                         '" is before (less than) range of plotted data (' + sdtrange + ').')
    d2 = dtseries.loc[:date].index[-1]
    # If there are duplicate dates in the series, for example in a renko plot,
    # then .get_loc(date) will return a slice containing all the dups, so:
    loc1 = dtseries.index.get_loc(d1)
    if isinstance(loc1, slice):
        loc1 = loc1.start
    loc2 = dtseries.index.get_loc(d2)
    if isinstance(loc2, slice):
        loc2 = loc2.stop - 1
    return (loc1 + loc2) / 2.0
```

The function takes as input the dataframe index converted to a series. So the following changes to your code will allow it to work using this method:

```python
# Define the date ranges for shading
red_range = [date_to_iloc(df.index.to_series(), dt) for dt in ['2022-01-15', '2022-02-15']]
blue_range = [date_to_iloc(df.index.to_series(), dt) for dt in ['2022-03-01', '2022-03-15']]
...
ax.axvline(start_date, color=color, linestyle='--')
ax.axvline(end_date, color=color, linestyle='--')
...
ax.fill_between([start_date, end_date], ymin, ymax, alpha=0.2, color=color)
```

Everything else stays the same, and the regions are shaded as expected.
3
4
75,567,023
2023-2-25
https://stackoverflow.com/questions/75567023/descriptors-in-python-for-implementing-perls-tie-scalar-operation
I need some help with descriptors in Python. I wrote an automatic translator from Perl to Python (Pythonizer) and I'm trying to implement tied scalars, which is basically an object that acts as a scalar but has FETCH and STORE operations that are called appropriately. I'm using a dynamic class namespace 'main' to store the variable values. I'm attempting to define __get__ and __set__ operations for the object, but they are not working. Any help will be appreciated!

```python
main = type('main', tuple(), dict())

class TiedScalar(dict):  # Generated code - can't be changed
    FETCH_CALLED_v = 0
    STORE_CALLED_v = 0

    def STORE(*_args):
        _args = list(_args)
        self = _args.pop(0) if _args else None
        TiedScalar.STORE_CALLED_v = TiedScalar.STORE_CALLED_v + 1
        self["value"] = _args.pop(0) if _args else None
        return self.get("value")
    #TiedScalar.STORE = lambda *_args, **_kwargs: perllib.tie_call(STORE, _args, _kwargs)

    def FETCH(*_args):
        _args = list(_args)
        self = _args.pop(0) if _args else None
        TiedScalar.FETCH_CALLED_v = TiedScalar.FETCH_CALLED_v + 1
        return self.get("value")
    #TiedScalar.FETCH = lambda *_args, **_kwargs: perllib.tie_call(FETCH, _args, _kwargs)

    @classmethod
    def TIESCALAR(*_args):
        _args = list(_args)
        class_ = _args.pop(0) if _args else None
        self = {"value": (_args.pop(0) if _args else None)}
        #self = perllib.bless(self, class_)
        #return perllib.add_tie_methods(self)
        return add_tie_methods(class_(self))

    def __init__(self, d):
        for k, v in d.items():
            self[k] = v
            #setattr(self, k, v)

def add_tie_methods(obj):  # This code is part of perllib and can be changed
    cls = obj.__class__
    classname = cls.__name__
    result = type(classname, (cls,), dict())
    cls.__TIE_subclass__ = result

    def __get__(self, obj, objtype=None):
        return self.FETCH()
    result.__get__ = __get__

    def __set__(self, obj, value):
        return self.STORE(value)
    result.__set__ = __set__

    obj.__class__ = result
    return obj

main.tied_scalar_v = TiedScalar.TIESCALAR(42)  # Generated code
assert main.tied_scalar_v == 42
main.tied_scalar_v = 100
assert main.tied_scalar_v == 100
print(TiedScalar.FETCH_CALLED_v)
print(TiedScalar.STORE_CALLED_v)
```

Here the two print statements print 1 and 0, so in particular STORE is not being called at all, and FETCH is not being called enough times based on the code. Note that 'main' stores all of the user's variables; also, the TiedScalar class is (mostly) generated from the user's Perl code.
OK - with your help, I was able to figure out how to do it. Basically I have to create a metaclass for my main, then use that class to store the initial object for tied scalars. Here is my updated code, with changes marked "# new":

```python
meta = type('mainmeta', (type,), {
    '__init__': lambda cls, name, bases, attrs: type.__init__(cls, name, bases, attrs)
})  # new

#main = type('main', tuple(), dict())
main = meta('main', tuple(), dict())  # new

class TiedScalar(dict):  # Generated code - can't be changed
    FETCH_CALLED_v = 0
    STORE_CALLED_v = 0

    def STORE(*_args):
        _args = list(_args)
        self = _args.pop(0) if _args else None
        TiedScalar.STORE_CALLED_v = TiedScalar.STORE_CALLED_v + 1
        self["value"] = _args.pop(0) if _args else None
        return self.get("value")
    #TiedScalar.STORE = lambda *_args, **_kwargs: perllib.tie_call(STORE, _args, _kwargs)

    def FETCH(*_args):
        _args = list(_args)
        self = _args.pop(0) if _args else None
        TiedScalar.FETCH_CALLED_v = TiedScalar.FETCH_CALLED_v + 1
        return self.get("value")
    #TiedScalar.FETCH = lambda *_args, **_kwargs: perllib.tie_call(FETCH, _args, _kwargs)

    @classmethod
    def TIESCALAR(*_args):
        _args = list(_args)
        class_ = _args.pop(0) if _args else None
        self = {"value": (_args.pop(0) if _args else None)}
        #self = perllib.bless(self, class_)
        #return perllib.add_tie_methods(self)
        return add_tie_methods(class_(self))

    def __init__(self, d):
        for k, v in d.items():
            self[k] = v
            #setattr(self, k, v)

def add_tie_methods(obj):  # This code is part of perllib and can be changed
    cls = obj.__class__
    classname = cls.__name__
    result = type(classname, (cls,), dict())
    cls.__TIE_subclass__ = result

    def __get__(self, obj, objtype=None):
        #print(f'__get__({self}, {obj}, {objtype})')
        return self.FETCH()
    result.__get__ = __get__

    def __set__(self, obj, value):
        #print(f'__set__({self}, {obj}, {value})')
        return self.STORE(value)
    result.__set__ = __set__

    obj.__class__ = result
    return obj

def assign_meta(cls, var, value):  # new
    meta = cls.__class__
    setattr(meta, var, value)
    return value

#main.tied_scalar_v = TiedScalar.TIESCALAR(42)
assign_meta(main, 'tied_scalar_v', TiedScalar.TIESCALAR(42))  # new
assert main.tied_scalar_v == 42
main.tied_scalar_v = 100
assert main.tied_scalar_v == 100
print(TiedScalar.FETCH_CALLED_v)
print(TiedScalar.STORE_CALLED_v)
```
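The underlying rule here is that __get__/__set__ are only consulted when the descriptor object lives on the *type* of the thing being accessed, which is why the descriptor must be stored on the metaclass rather than on main itself. A stripped-down sketch of just that mechanism (hypothetical names, independent of the Pythonizer code above):

```python
class Tracked:
    """A minimal data descriptor that counts fetches and stores."""
    def __init__(self, value):
        self.value = value
        self.fetches = 0
        self.stores = 0

    def __get__(self, obj, objtype=None):
        self.fetches += 1
        return self.value

    def __set__(self, obj, value):
        self.stores += 1
        self.value = value

# Attach the descriptor to a metaclass, so attribute access on the
# class object itself (like `main.x`) is routed through the descriptor.
meta = type('mainmeta', (type,), {})
main = meta('main', (), {})

d = Tracked(42)
setattr(meta, 'x', d)       # descriptor lives on the metaclass

assert main.x == 42         # triggers d.__get__
main.x = 100                # triggers d.__set__, not a plain rebind
assert main.x == 100
print(d.fetches, d.stores)  # 2 1
```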
5
1
75,570,820
2023-2-26
https://stackoverflow.com/questions/75570820/pynamodb-last-evaluated-key-always-return-null
I'm using PynamoDB for interacting with DynamoDB. However, last_evaluated_key always returns null even if there are multiple items. When I run this query:

```python
results = RecruiterProfileModel.profile_index.query(
    hash_key=UserEnum.RECRUITER,
    scan_index_forward=False,
    limit=1,
)
```

and then try getting the value of results.last_evaluated_key, it always returns null. However, if I dump the results object, it returns:

```json
{
  "page_iter": {
    "_operation": {},
    "_args": ["recruiter"],
    "_kwargs": {
      "range_key_condition": null,
      "filter_condition": null,
      "index_name": "PROFILE_INDEX",
      "exclusive_start_key": {
        "sk": {"S": "PROFILE#6ab0f5bc-7283-4236-9e37-ea746901d19e"},
        "user_type": {"S": "recruiter"},
        "created_at": {"N": "1677398017.685749"},
        "pk": {"S": "RECRUITER#6ab0f5bc-7283-4236-9e37-ea746901d19e"}
      },
      "consistent_read": false,
      "scan_index_forward": false,
      "limit": 1,
      "attributes_to_get": null
    },
    "_last_evaluated_key": {
      "sk": {"S": "PROFILE#95a4f201-2475-45a7-b096-5167f6a4d639"},
      "user_type": {"S": "recruiter"},
      "created_at": {"N": "1677398017.68518"},
      "pk": {"S": "RECRUITER#95a4f201-2475-45a7-b096-5167f6a4d639"}
    },
    "_is_last_page": false,
    "_total_scanned_count": 2,
    "_rate_limiter": null,
    "_settings": {"extra_headers": null}
  },
  "_map_fn": {},
  "_limit": 0,
  "_total_count": 1,
  "_index": 1,
  "_count": 1,
  "_items": [
    {
      "company_name": {"S": "fletcher-rodriguez"},
      "created_at": {"N": "1677398017.685749"},
      "last_name": {"S": "craig"},
      "first_name": {"S": "tyler"},
      "designation": {"S": "manager"},
      "verification_status": {"S": "pending"},
      "sk": {"S": "PROFILE#6ab0f5bc-7283-4236-9e37-ea746901d19e"},
      "user_type": {"S": "recruiter"},
      "email": {"S": "[email protected]"},
      "pk": {"S": "RECRUITER#6ab0f5bc-7283-4236-9e37-ea746901d19e"}
    }
  ]
}
```

You can clearly see that last_evaluated_key exists over there. I'm getting confused here. Please help, I have deadlines to meet. Thank you in advance.
Edit: Here is a video attachment of trying to get the value of last_evaluated_key in different ways, and they all return null: pynamodb last_evaluated_key issue (Gdrive video).
The result returned by query() is a ResultIterator object, which is an iterator. Its last_evaluated_key is the key of the last item you pulled from the iterator, not from DynamoDB. Because your code has not yet asked to retrieve any items, the last_evaluated_key is not set. You need to process some or all of the items in the result:

```python
for item in results:
    print(item)

return results.last_evaluated_key
```

Then you will get the key of the last item you processed.
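This lazy behavior is easy to model with plain Python (a toy stand-in for PynamoDB's ResultIterator, not its real implementation):

```python
class LazyResults:
    """Toy model: last_evaluated_key only advances as items are consumed."""
    def __init__(self, items):
        self._items = items
        self.last_evaluated_key = None

    def __iter__(self):
        for item in self._items:
            # the key is recorded only when an item is actually pulled
            self.last_evaluated_key = item['pk']
            yield item

results = LazyResults([{'pk': 'key-1'}, {'pk': 'key-2'}])

print(results.last_evaluated_key)  # None -- nothing consumed yet

for _ in results:
    pass

print(results.last_evaluated_key)  # 'key-2' -- set after iterating
```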
3
4
75,565,527
2023-2-25
https://stackoverflow.com/questions/75565527/how-to-efficiently-calculate-combinations-of-the-sum-of-two-lists-and-avoid-sel
I am not the best coder, but I am trying to figure out how to calculate the number of possible combinations and actually generate every combination, but with some rules. I have two sets of "things," primaries (P) and secondaries (S). In this case I have P = 16 and S = 7. So a valid combination needs at least one P value in it, but does not need an S value to be valid, thus: P1, S1, S2 is Valid P1, P2, P3 is Valid P1, P2, P3, P4, S1, S2 is Valid But, S1, S2, S3 is NOT Valid. Also P1, S1 is the same as S1, P1. I wrote a program, which I think does the trick, but it is terrible and takes like two days to run. I created this code, which outputs the correct results (I think), but which is very resource intensive: import itertools P_num = 16 S_num = 7 R = P_num + S_num P = list(range(1,P_num+1)) S = list(range(1,S_num+1)) P = ["P" + str(suit) for suit in P] S = ["S" + str(suit) for suit in S] stuff = P + S totalarray = {new_list: [] for new_list in range(1,R+1)} for L in range(len(stuff) + 1): print(L) for subset in itertools.combinations(stuff, L): sublist = sorted(subset) if any(x in sublist for x in P): if sublist not in totalarray[L]: totalarray[L].append(sublist) run = 0 for each in totalarray.keys(): print(each, len(totalarray[each])) run += len(totalarray[each]) print(run) I could really use some advise on ways to optimize this problem, I am sure there is a better way to do this without so many nested operations. I am hoping to get the same results, but just more optimized.
Printing to a terminal is relatively slow. According to your rules, there will be 8,388,480 valid combinations. Writing the valid combinations to a file will be much faster than sending output to a terminal. Try this: from itertools import combinations from time import perf_counter OUTPUT_FILE = '/Volumes/G-Drive/combos.txt' def isvalid(c): for v in c: if v[0] == 'P': return True return False start = perf_counter() P_num = 16 S_num = 7 stuff = [f'P{n}' for n in range(1, P_num+1)] + [f'S{n}' for n in range(1, S_num+1)] count = 0 with open(OUTPUT_FILE, 'w') as cfile: for k in range(1, len(stuff)+1): for combo in combinations(stuff, k): if isvalid(combo): cfile.write(f'{combo}\n') count += 1 print(f'Count={count:,}, Duration={perf_counter()-start:.2f}s') Output: Count=8,388,480, Duration=11.07s
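The count can also be cross-checked analytically: every non-empty subset of the 23 items is valid except the non-empty subsets drawn only from the 7 secondaries, giving 2^23 − 2^7 combinations. A quick check with math.comb:

```python
from math import comb

P_num, S_num = 16, 7
n = P_num + S_num

total = sum(comb(n, k) for k in range(1, n + 1))               # all non-empty subsets
no_primary = sum(comb(S_num, k) for k in range(1, S_num + 1))  # secondaries only
valid = total - no_primary

print(valid)                     # 8388480
assert valid == 2**n - 2**S_num  # closed form: 8388608 - 128
```

This matches the 8,388,480 combinations found by enumeration, without generating any of them.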
3
1
75,556,221
2023-2-24
https://stackoverflow.com/questions/75556221/why-is-np-dot-so-much-faster-than-np-sum
Why is np.dot so much faster than np.sum? Following this answer we know that np.sum is slow and has faster alternatives. For example: In [20]: A = np.random.rand(1000) In [21]: B = np.random.rand(1000) In [22]: %timeit np.sum(A) 3.21 µs ± 270 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) In [23]: %timeit A.sum() 1.7 µs ± 11.5 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) In [24]: %timeit np.add.reduce(A) 1.61 µs ± 19.6 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) But all of them are slower than: In [25]: %timeit np.dot(A,B) 1.18 µs ± 43.9 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) Given that np.dot is both multiplying two arrays elementwise and then summing them, how can this be faster than just summing one array? If B were set to the all ones array then np.dot would simply be summing A. So it seems the fastest option to sum A is: In [26]: O = np.ones(1000) In [27]: %timeit np.dot(A,O) 1.16 µs ± 6.37 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) This can't be right, can it? This is on Ubuntu with numpy 1.24.2 using openblas64 on Python 3.10.6. Supported SIMD extensions in this NumPy install: baseline = SSE,SSE2,SSE3 found = SSSE3,SSE41,POPCNT,SSE42,AVX,F16C,FMA3,AVX2 Update The order of the timings reverses if the array is much longer. That is: In [28]: A = np.random.rand(1000000) In [29]: O = np.ones(1000000) In [30]: %timeit np.dot(A,O) 545 µs ± 8.87 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [31]: %timeit np.sum(A) 429 µs ± 11 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [32]: %timeit A.sum() 404 µs ± 2.95 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [33]: %timeit np.add.reduce(A) 401 µs ± 4.21 µs per loop (mean ± std. dev. 
of 7 runs, 1,000 loops each) This implies to me that there is some fixed-size overhead when calling np.sum(A), A.sum(), np.add.reduce(A) that doesn't exist when calling np.dot(), but the part of the code that does the summation is in fact faster. Any speed ups using cython, numba, python etc would be great to see.
This answer completes the good answer of @user2357112 by providing additional details. Both functions are optimized. That being said, the pair-wise summation is generally a bit slower while generally providing a more accurate result. It is also sub-optimal, though relatively good. OpenBLAS, which is used by default on Windows, does not perform a pair-wise summation. Comparing the assembly code generated for the Numpy code with that generated for the OpenBLAS code (listings omitted here) shows the main issue with the Numpy code: it does not use AVX (256-bit SIMD instruction set) but SSE (128-bit SIMD instruction set), as opposed to OpenBLAS, at least in version 1.22.4 (the one I use) and before. Even worse: the instructions are scalar ones in the Numpy code! We recently worked on this and the latest version of Numpy should now use AVX. That being said, it may still not be as fast as OpenBLAS because of the pair-wise summation (especially for big arrays). Note that both functions spend a non-negligible time in overheads because the arrays are too small. Such overheads can be removed using a hand-written implementation in Numba. "The order of the timings reverses if the array is much longer." This is expected. Indeed, functions are rather compute-bound when they operate in the cache, but they become memory-bound when the array is big and fits in the L3 cache or even the RAM. As a result, np.dot starts to be slower for bigger arrays since it needs to read twice as much data from memory. More specifically, it needs to read 8*1000000*2/1024**2 ~= 15.3 MiB from memory, so you likely need to read data from your RAM, which has a pretty limited throughput. In fact, a good bi-channel 3200 MHz DDR4 RAM like mine can reach a practical throughput close to 40 GiB/s, and 15.3/(40*1024) ~= 374 µs. That being said, sequential codes can hardly completely saturate this throughput, so reaching 30 GiB/s in sequential code is already great, not to mention many mainstream PC RAMs operate at a lower frequency.
A 30 GiB/s throughput results in ~500 µs, which is close to your timings. Meanwhile, np.sum and np.add.reduce are more compute-bound because of their inefficient implementation, but the amount of data to be read is half as big and may actually fit better in the L3 cache, which has a significantly bigger throughput. To prove this effect, you can simply try to run: # L3 cache of 9 MiB # 2 x 22.9 = 45.8 MiB a = np.ones(3_000_000) b = np.ones(3_000_000) %timeit -n 100 np.dot(a, a) # 494 µs => read from RAM %timeit -n 100 np.dot(a, b) # 1007 µs => read from RAM # 2 x 7.6 = 15.2 MiB a = np.ones(1_000_000) b = np.ones(1_000_000) %timeit -n 100 np.dot(a, a) # 90 µs => read from the L3 cache %timeit -n 100 np.dot(a, b) # 283 µs => read from RAM # 2 x 1.9 = 3.8 MiB a = np.ones(250_000) b = np.ones(250_000) %timeit -n 100 np.dot(a, a) # 40 µs => read from the L3 cache (quite compute-bound) %timeit -n 100 np.dot(a, b) # 46 µs => read from the L3 cache too (quite memory-bound) On my machine, the L3 cache has a size of only 9 MiB, so the second call needs not only to read twice as much data but also to read more of it from the slower RAM than from the L3 cache. For small arrays, the L1 cache is so fast that reading data should not be a bottleneck. On my i5-9600KF machine, the throughput of the L1 cache is huge: ~268 GiB/s. This means the optimal time to read two arrays of size 1000 is 8*1000*2/(268*1024**3) ~= 0.056 µs. In practice, the overhead of calling a Numpy function is much bigger than that.
Fast implementation Here is a fast Numba implementation: import numba as nb # Function eagerly compiled only for 64-bit contiguous arrays @nb.njit('float64(float64[::1],)', fastmath=True) def fast_sum(arr): s = 0.0 for i in range(arr.size): s += arr[i] return s Here are performance results: array items | time | speedup (dot/numba_seq) --------------------------|------------------------ 3_000_000 | 870 µs | x0.57 1_000_000 | 183 µs | x0.49 250_000 | 29 µs | x1.38 If you use the flag parallel=True and nb.prange instead of range, Numba will use multiple threads. This is good for large arrays, but it may not be for small ones on some machine (due to the overhead to create threads and share the work): array items | time | speedup (dot/numba_par) --------------------------|-------------------------- 3_000_000 | 465 µs | x1.06 1_000_000 | 66 µs | x1.36 250_000 | 10 µs | x4.00 As expected, Numba can be faster for small array (because the Numpy call overhead is mostly removed) and be competitive with OpenBLAS for large array. The code generated by Numba is pretty efficient: .LBB0_7: vaddpd (%r9,%rdx,8), %ymm0, %ymm0 vaddpd 32(%r9,%rdx,8), %ymm1, %ymm1 vaddpd 64(%r9,%rdx,8), %ymm2, %ymm2 vaddpd 96(%r9,%rdx,8), %ymm3, %ymm3 vaddpd 128(%r9,%rdx,8), %ymm0, %ymm0 vaddpd 160(%r9,%rdx,8), %ymm1, %ymm1 vaddpd 192(%r9,%rdx,8), %ymm2, %ymm2 vaddpd 224(%r9,%rdx,8), %ymm3, %ymm3 vaddpd 256(%r9,%rdx,8), %ymm0, %ymm0 vaddpd 288(%r9,%rdx,8), %ymm1, %ymm1 vaddpd 320(%r9,%rdx,8), %ymm2, %ymm2 vaddpd 352(%r9,%rdx,8), %ymm3, %ymm3 vaddpd 384(%r9,%rdx,8), %ymm0, %ymm0 vaddpd 416(%r9,%rdx,8), %ymm1, %ymm1 vaddpd 448(%r9,%rdx,8), %ymm2, %ymm2 vaddpd 480(%r9,%rdx,8), %ymm3, %ymm3 addq $64, %rdx addq $-4, %r11 jne .LBB0_7 That being said, it is not optimal : the LLVM-Lite JIT compiler uses a 4x unrolling while a 8x unrolling should be optimal on my Intel CoffeeLake processor. 
Indeed, the latency of the vaddpd instruction is 4 cycles while 2 instructions can be executed per cycle, so 8 registers are needed to avoid a stall and the resulting code being latency-bound. Besides, this assembly code is optimal on Intel Alderlake and Sapphire Rapids processors since they have half the vaddpd latency. Saturating FMA SIMD processing units is far from being easy. I think the only way to write a faster function is to write (C/C++) native code using SIMD intrinsics, though it is less portable. Note the Numba code does not support special numbers like NaN or Inf values because of fastmath (AFAIK OpenBLAS does). In practice, it should still work on x86-64 machines, but this is not guaranteed. Besides, the Numba code is not numerically stable for very large arrays. The Numpy code should be the most numerically stable of the three variants (then the OpenBLAS code). You can compute the sum by chunks to improve the numerical stability, though it makes the code more complex. There is no free lunch.
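The numerical-stability caveat can be seen concretely with a pure-Python sketch of compensated (Kahan) summation — one of the "more complex" alternatives alluded to above, unrelated to the Numba/BLAS internals:

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: carries the low-order bits lost
    in each addition, trading some speed for numerical stability."""
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for x in values:
        y = x - c
        t = total + y
        c = (t - total) - y
        total = t
    return total

# 1.0 followed by a million values below half an ulp of 1.0: a naive
# left-to-right sum absorbs every single one of them.
data = [1.0] + [1e-16] * 1_000_000
naive = sum(data)         # stays exactly 1.0
stable = kahan_sum(data)  # ~1.0000000001 (the accumulated 1e-10 survives)
print(naive, stable)
```

The built-in sum loses the entire 1e-10 tail here, while the compensated loop recovers it to within a few ulps.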
48
35
75,565,030
2023-2-25
https://stackoverflow.com/questions/75565030/does-python-allows-elif-statement-without-else-statement
While teaching Python to a friend I tried this statement: val = "hi" if (val=="hello") or ("w" in val): print("hello") elif(val=="hi"): print("hi") And to my great surprise it worked. I always thought in Python you couldn't do an elif without else. Has it always been like that, or has the syntax changed since a particular version?
else is optional, and follows any number of elif statements. From the specification of version 1.6: if_stmt: "if" expression ":" suite ("elif" expression ":" suite)* ["else" ":" suite] The * in this syntax means zero or more elements, and [ and ] mark an optional element. Python 1.6 was the first version released as open source. That said, I'm almost certain it has always been like that, because it is standard among most, if not all, programming languages.
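A minimal illustration of the grammar above — any number of elif clauses with no else; when nothing matches, the function simply returns None:

```python
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    elif n < 10:
        return "small"
    # no else clause: execution falls through for n >= 10

print(classify(-5))  # negative
print(classify(0))   # zero
print(classify(42))  # None (no branch matched)
```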
3
4
75,559,538
2023-2-24
https://stackoverflow.com/questions/75559538/third-clone-of-the-turtle
I made this program, when trying to make a chase game, but I stumbled along something really strange. I created a clone of the turtle, but at the middle of the map a third one appeared. Does anybody know what causes this? import turtle sc = turtle.Screen() t = turtle.Turtle() c = turtle.clone() c.penup t.penup c.goto(100,100) def turnleft(): t.left(30) def turnright(): t.right(30) while True: t.forward(2) c.forward(2) sc.onkey(turnleft, "Left") sc.onkey(turnright, "Right") sc.listen()
Very good question. I'm able to reproduce the behavior: if you only make one turtle, print(len(turtle.turtles())) gives 1 as expected, but after cloning once, there's suddenly 3. Here's a minimal example: import turtle t = turtle.Turtle() print(len(turtle.turtles())) # => 1, no problem c = turtle.clone() print(len(turtle.turtles())) # => 3 !!? The problem is calling .clone() on turtle (the module) rather than on the turtle instance you want to clone: import turtle t = turtle.Turtle() c = t.clone() print(len(turtle.turtles())) # => 2 as expected This is a classic turtle gotcha: mistaking the functional interface for the object-oriented interface. When you call turtle.clone(), it's a functional call, so the module creates its singleton non-OOP turtle, then clones it and returns the clone which you store in c. The inimitable cdlane advocates for the following import: from turtle import Screen, Turtle which makes it hard to mess things up. If you're curious about the CPython turtle internals that cause the behavior to occur, here's the code (from Lib/turtle.py#L3956): ## The following mechanism makes all methods of RawTurtle and Turtle available ## as functions. So we can enhance, change, add, delete methods to these ## classes and do not need to change anything here. 
__func_body = """\ def {name}{paramslist}: if {obj} is None: if not TurtleScreen._RUNNING: TurtleScreen._RUNNING = True raise Terminator {obj} = {init} try: return {obj}.{name}{argslist} except TK.TclError: if not TurtleScreen._RUNNING: TurtleScreen._RUNNING = True raise Terminator raise """ def _make_global_funcs(functions, cls, obj, init, docrevise): for methodname in functions: method = getattr(cls, methodname) pl1, pl2 = getmethparlist(method) if pl1 == "": print(">>>>>>", pl1, pl2) continue defstr = __func_body.format(obj=obj, init=init, name=methodname, paramslist=pl1, argslist=pl2) exec(defstr, globals()) globals()[methodname].__doc__ = docrevise(method.__doc__) _make_global_funcs(_tg_screen_functions, _Screen, 'Turtle._screen', 'Screen()', _screen_docrevise) _make_global_funcs(_tg_turtle_functions, Turtle, 'Turtle._pen', 'Turtle()', _turtle_docrevise) This takes all of the turtle Screen() and Turtle() methods and wires them into the module's globals, so turtle.Turtle().clone is set to turtle.clone. As part of this rewiring, __func_body adds a bit of boilerplate to every call which checks to see whether Turtle._pen or Turtle._screen exist already, and creates them if they don't. Unrelated, but you need to call pendown() with parentheses if you want it to do anything. Also, it's a good idea to set event listeners before the loop rather than inside of it, or just remove that entirely since it's not relevant to the question.
3
3
75,553,212
2023-2-24
https://stackoverflow.com/questions/75553212/best-way-to-check-if-a-numpy-array-is-all-non-negative
This works, but is not algorithmically optimal since I don't need the min value to be stored while the function is parsing the array: def is_non_negative(m): return np.min(m) >= 0 Edit: Depending on the data, an optimal function could indeed save a lot because it will terminate at the first encounter of a negative value. If only one negative value is expected, time will be cut by a factor of two on average. However, building the optimal algorithm outside the numpy library will come at a huge cost (Python code vs C++ code).
One pure-Numpy solution is to use a chunk-based strategy: def is_non_negative(m): chunkSize = max(min(65536, m.size // 8), 4096) # Auto-tuning (must be an integer) for i in range(0, m.size, chunkSize): if np.min(m[i:i+chunkSize]) < 0: return False return True This solution is only efficient if the arrays are big, and chunks are big enough for the Numpy call overhead to be small and small enough to split the global array in many parts (so as to benefit from the early cut). The chunk size needs to be pretty big so as to balance the relatively big overhead of np.min on small arrays. Here is a Numba solution: import numba as nb # Eagerly compiled function for some mainstream data-types. @nb.njit(['(float32[::1],)', '(float64[::1],)', '(int_[::1],)']) def is_non_negative_nb(m): for e in m: if e < 0: return False return True It turns out this is faster than using np.min on my machine, although the code is not well auto-vectorized (i.e. does not use SIMD instructions) by LLVM-Lite (the JIT of Numba). For even faster code, you need to use C/C++ code with a chunk-based SIMD-friendly design, and possibly use SIMD intrinsics if the compiler does not generate efficient code, which is unfortunately rather frequent in this case.
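For reference, the early-exit behavior both snippets aim for is exactly what a short-circuiting all() gives in pure Python — far slower per element than Numpy or Numba, but it stops at the first negative value (a baseline sketch, not a performance recommendation):

```python
def is_non_negative_py(seq):
    # all() short-circuits: scanning stops at the first x < 0
    return all(x >= 0 for x in seq)

print(is_non_negative_py([3, 0, 7]))              # True
print(is_non_negative_py([3, -1] + [0] * 10**6))  # False (stops at index 1)
```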
3
2
75,561,461
2023-2-24
https://stackoverflow.com/questions/75561461/how-do-i-efficiently-perform-the-same-function-across-multiple-groups-of-columns
I am cleaning a csv for data analysis and I'm new to python, so I am trying my best to make this as straightforward as possible in case anyone wants to go back into this later. I want to perform a straightforward operation on four columns and add a new column with the result, then efficiently repeat that for 10 other sets of columns. My dataframe looks like this: df = pd.DataFrame({'A1' : [10, 20, 30, 10], 'A2' : [10,20,30,40], 'A3' : [30, 0, 40, 10], 'A4' : [75, 0, 0, 25], 'B1' : [10, 20, 30, 40], 'B2' : [30, 0, 20, 40], 'B3' : [10, 10, 20, 30], 'B4' : [40, 30, 20, 10]}) # A1 A2 A3 A4 B1 B2 B3 B4 # 10 10 30 75 10 30 10 40 # 20 20 0 0 20 0 10 30 # 30 30 40 0 30 20 20 20 # 10 40 10 25 40 40 30 10 I want to create a new column (A_dif) with the value of (A1+A2+A3)-A4. I've been able to do that as follows: df['A_dif'] = df.loc[:, 'A1':'A3'].sum(numeric_only=True, axis=1) - df.loc[:,'A4'] However, I need to do that for the B columns (and about 10 similar groups of columns). I can do that manually, but I would like an efficient function that accomplishes this. I tried to create the following function (and then make a loop with it) but can't get it to work: def difference(df, a: str, b: str, c: str) : df.loc[:, a:b].sum(numeric_only=True, axis=1) - df.loc[:,c] test = difference(df, 'A1', 'A3', 'A4') print(test) # returns None Thank you for any help you can offer!
You can group columns by a prefix (here the first letter of the column name) and compute your function: def difference(df): return df.iloc[:, :3].sum(numeric_only=True, axis=1) - (df.iloc[:, 3]) df1 = df.groupby(df.columns.str[0], axis=1).apply(difference).add_suffix('_diff') out = pd.concat([df, df1], axis=1) print(out) # Output A1 A2 A3 A4 B1 B2 B3 B4 A_diff B_diff 0 10 10 30 75 10 30 10 40 -25 10 1 20 20 0 0 20 0 10 30 40 0 2 30 30 40 0 30 20 20 20 100 50 3 10 40 10 25 40 40 30 10 35 100 You can also group columns by position. If you need to iterate over 4 columns each time: df1 = (df.groupby(np.arange(len(df.columns)) // 4, axis=1) .apply(difference).add_suffix('_diff'))
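The same prefix-grouping idea can be sketched without pandas, using only the standard library on the question's data (column values hard-coded for illustration):

```python
from collections import defaultdict

data = {
    "A1": [10, 20, 30, 10], "A2": [10, 20, 30, 40],
    "A3": [30, 0, 40, 10],  "A4": [75, 0, 0, 25],
    "B1": [10, 20, 30, 40], "B2": [30, 0, 20, 40],
    "B3": [10, 10, 20, 30], "B4": [40, 30, 20, 10],
}

# Group column names by their first letter
groups = defaultdict(list)
for name in data:
    groups[name[0]].append(name)

diffs = {}
for prefix, cols in groups.items():
    *first, last = sorted(cols)  # first three columns summed, last subtracted
    diffs[f"{prefix}_diff"] = [
        sum(vals) - last_val
        for *vals, last_val in zip(*(data[c] for c in first), data[last])
    ]

print(diffs["A_diff"])  # [-25, 40, 100, 35]
print(diffs["B_diff"])  # [10, 0, 50, 100]
```

The results match the A_diff and B_diff columns produced by the pandas version above it.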
3
2
75,559,543
2023-2-24
https://stackoverflow.com/questions/75559543/how-to-unpack-tuples-within-a-list-to-use-in-map
I'm just going to simplify my problem a bit. I have a function like this: def func(a,b): return a+b I also have a list of tuples which I would like to map to this function. num = [(0,4),(6,3),(2,2),(9,1)] I want to be able to map the tuples within the list like (a,b) to the function I provided. In javascript you could achieve this by changing the function definition like so: def func((a,b)): return a+b num = [(0,4),(6,3),(2,2),(9,1)] map(func,num) This obviously doesn't work in Python. I know that I could also just pass the tuple into the function and then return tuple[0]+tuple[1], but I didn't know if there was a cleaner option.
map won't perform unpacking without writing a wrapper function to do that actual unpacking for you. That's why itertools.starmap exists: from itertools import starmap def func(a,b): return a+b num = [(0,4),(6,3),(2,2),(9,1)] for result in starmap(func, num): print(result) The name "starmap" is referring to the implicit star-unpacking operation it performs, making it equivalent to map with the iterable assumed to contain pre-packed iterables of arguments that must be *-unpacked, changing from map being the equivalent of (func(x) for x in iterable) to starmap being the equivalent of (func(*x) for x in iterable).
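Since the example function is plain addition, the same result also drops out of operator.add, or of a comprehension that unpacks each tuple directly:

```python
from itertools import starmap
from operator import add

num = [(0, 4), (6, 3), (2, 2), (9, 1)]

via_starmap = list(starmap(add, num))  # no wrapper function needed
via_unpack = [a + b for a, b in num]   # explicit unpacking equivalent

print(via_starmap)  # [4, 9, 4, 10]
print(via_unpack)   # [4, 9, 4, 10]
```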
4
8
75,559,368
2023-2-24
https://stackoverflow.com/questions/75559368/getting-422-error-while-trying-to-use-coveralls-with-github-actions
I'm trying to set up Coveralls to work with GitHub Actions for a Python project, and although I've reviewed the documentation multiple times and followed all the instructions to the best of my understanding, I'm still facing the following error: Bad Response 422 {"message": "Couldn't find a repository matching this job", "error": true} Here is a minimal version of my YAML file: name: coveralls on: pull_request: branches: - main jobs: tests: runs-on: ubuntu-latest steps: - name: checkout uses: actions/checkout@v3 - name: setup python uses: actions/setup-python@v4 with: python-version: '3.9' - name: install requirements run: | pip install --upgrade pip pip install pytest pip install pytest-cov pip install -r app/requirements.txt - name: run tests run: | pytest --cov=app coverage report -m coverage lcov - name: upload coveralls uses: coverallsapp/github-action@master with: github-token: ${{ secrets.GH_TOKEN }} path-to-lcov: coverage.lcov
The documentation is not clear enough at this point: Name Requirement Description github-token required Must be in form github-token: ${{ secrets.GITHUB_TOKEN }}; Coveralls uses this token to verify the posted coverage data on the repo and create a new check based on the results. It is built into Github Actions and does not need to be manually specified in your secrets store. More Info While it suggests that the GitHub token does not require manual specification in your secrets store, it is presented as a recommendation rather than a strict rule. It would be more appropriate to state that "it must not be manually specified", since using a custom variable like GH_TOKEN instead of the default GITHUB_TOKEN will not function properly. That being said, you need to replace this line: github-token: ${{ secrets.GH_TOKEN }} with this line: github-token: ${{ secrets.GITHUB_TOKEN }}
4
5
75,556,141
2023-2-24
https://stackoverflow.com/questions/75556141/error-modulenotfounderror-no-module-named-azure-keyvault-secrets-although-i-i
I have a Python script to retrieve a username and password from Key Vault (Azure). 3 months ago it worked, but now it gives me the error No module named 'azure.keyvault.secrets' when I run 'from azure.keyvault.secrets import SecretClient'. Why do I get this error? It also gives me an error if I try to run pip install azure!
Error ModuleNotFoundError: No module named 'azure.keyvault.secrets' although I installed the package: If you run pip install azure, it won't make the azure.keyvault.secrets module usable in your code. You need to install or update the dedicated package with the latest version using pip: pip install azure-keyvault-secrets # 4.6.0 is the latest one It then installs successfully (screenshot omitted).
5
6
75,553,432
2023-2-24
https://stackoverflow.com/questions/75553432/cant-locate-popup-button-with-selenium
I have been trying to use selenium on a webpage but this popup is refraining me to do so. note that the popup is only shown when you are not signed in (means you have to run my code so that selenium opens up a new browser window for you which does not have any accounts) I want to click on the "Not Interested" button through selenium. I don't want to close the popup every time manually, is there a way to automate this? here is my code: # relevant packages & modules from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.by import By from selenium.webdriver.chrome.options import Options import time # relevant website website = 'https://www.daraz.pk/' # initialize Chrome driver = webdriver.Chrome('C:\webdrivers\chromedriver.exe') # open website driver.get(website) #maximize window driver.maximize_window() # waiting for popup time.sleep(5) # dealing with pop up # with xpath pop_up_deny = driver.find_element(By.XPATH , '/html/body/div[9]//div/div/div[3]/button[1]') pop_up_deny.click() It raised this error: My chrome version : 110.0.5481.178 (Official Build) (64-bit) My selenium version : ChromeDriver 110.0.5481.77
The popup element is inside a shadow-root element, so you need to reach the shadow root first, then identify the "Not Interested" button: shadowRoot = driver.execute_script('''return document.querySelector("div.airship-html-prompt-shadow").shadowRoot''') shadowRoot.find_element(By.CSS_SELECTOR, "button.airship-btn.airship-btn-deny").click()
3
2
75,547,065
2023-2-23
https://stackoverflow.com/questions/75547065/how-to-check-if-feature-descriptors-and-matches-are-correct
I'm trying to find common over laps between two images and for this I am using a ORB feature detector and BEBLID feature descriptor. Using these features, find the homography between them and align the images. The function code is as follows: for pair in image_pairs: img1 = cv2.cvtColor(pair[0], cv2.COLOR_BGR2GRAY) img2 = cv2.cvtColor(pair[1], cv2.COLOR_BGR2GRAY) detector = cv2.ORB_create(1000) kpts1 = detector.detect(img1, None) kpts2 = detector.detect(img2, None) descriptor = cv2.xfeatures2d.BEBLID_create(0.75) kpts1, desc1 = descriptor.compute(img1, kpts1) kpts2, desc2 = descriptor.compute(img2, kpts2) method = cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING matcher = cv2.DescriptorMatcher_create(method) matches = matcher.match(desc1, desc2, None) matches = sorted(matches, key=lambda x: x.distance) percentage = 0.2 keep = int(len(matches) * percentage) matches = matches[:keep] matchedVis = cv2.drawMatches(self.images[pair[0]], kpts1, self.images[pair[1]], kpts2, matches, None) cv2.imwrite("feature_match.png", matchedVis) ptsA = np.zeros((len(matches), 2), dtype="float") ptsB = np.zeros((len(matches), 2), dtype="float") for (i, m) in enumerate(matches): ptsA[i] = kpts1[m.queryIdx].pt ptsB[i] = kpts2[m.trainIdx].pt (H, mask) = cv2.findHomography(ptsA, ptsB, method=cv2.RANSAC) (h, w) = img2.shape[:2] aligned = cv2.warpPerspective(self.images[pair[0]], H, (w, h)) cv2.imwrite("wrap.png", aligned) A successful alignment of two images looks like: And an unsuccessful alignment of two images looks like: Some of the images in image_pairs list have no common overlaps and hence the alignment fails. Is there a way to detect such failures or even detect a successful alignment without explicitly looking at the warp.png images?
This thread and its two top answers are a useful resource for what you are doing: Detecting garbage homographies from findHomography in OpenCV? One of the things the selected answer suggests is to check the determinant of the homography matrix. Where negative determinants signals a "flipped image", while a very large or very small determinant signals a blown-up and shrunken results respectively. I think this is a great way to filter out most of the garbage homography RANSAC happens to give you. Great supplementary material on determinants of linear transformations However, in your example case with the image "twisting", I frustratingly cannot find a proper math term for describing the transformation. I've personally dealt with a simpler problem, where I had 4 control points at the corner of every image I want to transform, but I manually implemented the "find homography" function from scratch. The issue arises when I mismatched the control points like this: (Pardon my terrible hand-drawn illustration) Where two of the control points are "swapped" and the transformed image is "twisted". With this illustration, I implemented a naive, brute-force check to verify that no two borders in the transformed image can intersect each other. I found this Korean blog describing this issue, where they also appear to provide mathematical checks for any "abnormal conversions". According to the blog, checking for negative determinants is enough to catch these twisting transformations. They also provided additional checks for ensuring the image doesn't blow up in size or gets shrunken too much. Here is a Google translated excerpt: Note that threshold values of sx, sy and P can be adjusted for your usecase. Finally, the blog mentions that these checks will not catch "concave transformations" (see image in blog) and that you should manually check for them. Google translated text from blog: However, the case of concave is not inspected under the above D<0 condition. 
To find the concave case, first transform the four points (0, 0), (1, 0), (0, 1), (1, 0) into a 2×2 submatrix and then check whether the transformed result is concave. should be inspected. To determine whether it is concave or not, refer to [Mathematics] - Calculating the Area (Area) of a Polygonal Figure.
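A minimal pure-Python sketch of the determinant checks described above — the threshold values are illustrative placeholders, not the ones from the blog, and the concave case still needs its own separate test as noted:

```python
def homography_looks_sane(H, min_scale=0.1, max_scale=10.0):
    """H is a 3x3 homography as nested lists. Rejects flips and
    extreme scaling; thresholds are illustrative, tune per use case."""
    # Determinant of the top-left 2x2 submatrix
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    if det < 0:  # negative determinant => mirrored / "twisted" image
        return False
    return min_scale < det < max_scale

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
flip     = [[-1, 0, 0], [0, 1, 0], [0, 0, 1]]
blowup   = [[100, 0, 0], [0, 100, 0], [0, 0, 1]]

print(homography_looks_sane(identity))  # True
print(homography_looks_sane(flip))      # False: det = -1
print(homography_looks_sane(blowup))    # False: det = 10000, scale too large
```

In practice you would apply this to the H returned by cv2.findHomography (after normalizing by H[2][2]) and skip the warp when it fails.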
3
3
75,553,614
2023-2-24
https://stackoverflow.com/questions/75553614/tensorflow-the-channel-dimension-of-the-inputs-should-be-defined
I am new to Tensorflow, and am trying to train a specific deep learning neural network. I am using Tensorflow (2.11.0) to get a deep neural network model which is described below. The data which I use is also given below: Data: Here is some example data. For sake of ease we can consider 10 samples in data. Here, each sample has shape: (128,128). One can consider the below code as example training data. x_train = np.random.rand(10, 128, 128, 1) Normalization layer: normalizer = tf.keras.layers.Normalization(axis=-1) normalizer.adapt(x_train) Build model: def build_and_compile_model(norm): model = tf.keras.Sequential([ norm, layers.Conv2D(128, 128, activation='relu'), layers.Conv2D(3, 3, activation='relu'), layers.Flatten(), layers.Dense(units=32, activation='relu'), layers.Dense(units=1) ]) model.compile(loss='mean_absolute_error', optimizer=tf.keras.optimizers.Adam(0.001)) return model When I do dnn_model = build_and_compile_model(normalizer) dnn_model.summary() I get the below error: ValueError: The channel dimension of the inputs should be defined. The input_shape received is (None, None, None, None), where axis -1 (0-based) is the channel dimension, which found to be `None`. What am I doing wrong here? I have tried to get insights from this, this, this and this. But, I have not found a workable solution yet. What should I do to remove the error and get the model to work? I will appreciate any help.
Define the input shape directly in the normalization layer (or add an Input layer), since it cannot be inferred directly: import numpy as np import tensorflow as tf x_train = np.random.rand(10, 128, 128, 1) normalizer = tf.keras.layers.Normalization(input_shape=[128, 128, 1], axis=-1) normalizer.adapt(x_train) def build_and_compile_model(norm): model = tf.keras.Sequential([ norm, tf.keras.layers.Conv2D(64, 64, activation='relu'), tf.keras.layers.Conv2D(3, 3, activation='relu'), tf.keras.layers.Flatten(), tf.keras.layers.Dense(units=32, activation='relu'), tf.keras.layers.Dense(units=1) ]) model.compile(loss='mean_absolute_error', optimizer=tf.keras.optimizers.Adam(0.001)) return model dnn_model = build_and_compile_model(normalizer) dnn_model.summary() Also, your model does not work as it is, you are using a kernel size of 128 in your first Conv2D layer and then another Conv2D layer with a kernel size of 3 but your data has the shape (10, 128, 128, 1). I changed it to make your code executable.
7
1
75,554,263
2023-2-24
https://stackoverflow.com/questions/75554263/beanie-exceptions-collectionwasnotinitialized-error
I'm new to the Beanie library which is an asynchronous Python object-document mapper (ODM) for MongoDB. Data models are based on Pydantic. I was trying this library with fastAPI framework, and made an ODM for some document, let's say it's name is SomeClass and then tried to insert some data in the db using this ODM. Here's the code for ODM and the method to create a document (insomeClass.py): from beanie import Document from pydantic import Field, BaseModel class SomeClassDto(BaseModel): """ A Class for Data Transferring. """ name: str = Field(max_length=maxsize, min_length=1) class SomeClassDao: """ This is a class which holds the 'SomeClass' class (inherited from Beanie Document), and also, the methods which use the 'SomeClass' class. """ class SomeClass(Document): name: str = Field(max_length=20, min_length=1) @classmethod async def create_some_class(cls, body: SomeClassDto): some_class = cls.SomeClass(**body.dict()) return await cls.SomeClass.insert_one(some_class) I've used and called the create_some_class function, but it throwed this error: beanie.exceptions.CollectionWasNotInitialized However the error is self-explanatory but I didn't understand at first, and couldn't find any relatable question about my problem in SO, so I decided to post this question and answer it, for the sake of future.
As the error tells us, we should first initialize the collection. We should initialize the collection via init_beanie. I’ve used this function like this (in database.py): from beanie import init_beanie import motor.motor_asyncio from someClass import SomeClassDao async def init_db(): MONGO_DB_DATABASE_NAME = "SomeDBName" MOTOR_CLIENT = motor.motor_asyncio.AsyncIOMotorClient() DATABASE = MOTOR_CLIENT[MONGO_DB_DATABASE_NAME] document_models = [SomeClassDao.SomeClass,] await init_beanie(database=DATABASE, document_models=document_models) And then we should use this function at the startup of the app, so we use it like this (in main.py): from fastapi import FastAPI from database import init_db app = FastAPI() @app.on_event("startup") async def start_db(): await init_db() Then, after calling init_beanie with the document we want to use listed in document_models, it worked without error. Hope this helps.
3
3
75,540,223
2023-2-23
https://stackoverflow.com/questions/75540223/how-to-sort-a-2d-numpy-object-array-based-on-a-list
I have a 2D numpy object array: aa = np.array([["aaa","05","1","a"], ["ccc","30","2","v"], ["ddd","50","2","v"], ["bbb","10","1","v"]]) and the following list: sample_ids = ["aaa", "bbb", "ccc", "ddd"] I would like to sort the numpy array based on the list so that I get the following: [["aaa","05","1","a"], ["bbb","10","1","v"], ["ccc","30","2","v"], ["ddd","50","2","v"]] Edit: If there are keys (in sample_ids) that are not present in the array. The resulting array would not include these missing keys (i.e. no addition of empty rows). So if we have the following: sample_ids = ["aaa", "bbb", "ccc", "ddd", "eee"] The final array would still be the same. Also, if the array would contain a row (i.e. row key) that is missing from the keys. That row would be left out of the resulting array as well. Edit 2: Starting from Nick's answer, I came up with this to deal with absent keys. sample_ids2 = ["aaa", "bbb", "eee", "ccc", "ddd"] idxs = [] for i,v in enumerate(sample_ids2): if str(list(aa.T[0])).find(v) != -1: k = list(aa.T[0]).index(v) idxs.append(k) else: print(v + " was not found!!!") print(aa[idxs]) Output: [['aaa' '05' '1' 'a'] ['bbb' '10' '1' 'v'] ['ccc' '30' '2' 'v'] ['ddd' '50' '2' 'v']]
Here are a couple of possible solutions. Using numpy: subs = list(aa.T[0]) idxs = [subs.index(i) for i in sample_ids if i in subs] res = aa[idxs] # array([['aaa', '05', '1', 'a'], # ['bbb', '10', '1', 'v'], # ['ccc', '30', '2', 'v'], # ['ddd', '50', '2', 'v']], dtype='<U3') Using pandas: res = np.array(pd.DataFrame(aa).set_index(0).reindex(sample_ids).dropna().reset_index()) # array([['aaa', '05', '1', 'a'], # ['bbb', '10', '1', 'v'], # ['ccc', '30', '2', 'v'], # ['ddd', '50', '2', 'v']], dtype=object) For both cases, if sample_ids = ["aaa", "bbb", "ccc", "ddd", "eee"], the output will be the same. If sample_ids = ["ddd", "aaa", "bbb"], the output will be: array([['ddd', '50', '2', 'v'], ['aaa', '05', '1', 'a'], ['bbb', '10', '1', 'v']])
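As a quick self-check of the numpy variant, the approach can be run on the question's data with an extra absent key:

```python
import numpy as np

# Data from the question, plus a key ("eee") with no matching row
aa = np.array([["aaa", "05", "1", "a"],
               ["ccc", "30", "2", "v"],
               ["ddd", "50", "2", "v"],
               ["bbb", "10", "1", "v"]])
sample_ids = ["aaa", "bbb", "ccc", "ddd", "eee"]

subs = list(aa.T[0])                                     # first column as a list
idxs = [subs.index(i) for i in sample_ids if i in subs]  # absent keys are skipped
res = aa[idxs]
print(res[:, 0])  # ['aaa' 'bbb' 'ccc' 'ddd']
```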
3
2
75,550,639
2023-2-23
https://stackoverflow.com/questions/75550639/how-to-separate-strings-in-a-list-multiplied-by-a-number
I need to take a list, multiply every item by 4 and separate them by coma. My code is: conc = ['0.05 ml : 25 ml', '0.05 ml : 37.5 ml', '0.05 ml : 50 ml', '0.05 ml : 62.5 ml', '0.05 ml : 75 ml'] new_conc = [", ".join(i*4) for i in conc] print(new_conc) But when I run it, I get every SYMBOL separated by come. What I need is multiplied number of shown EXPRESSIONS separated by coma. So the output should be: ['0.05 ml : 25 ml', '0.05 ml : 25 ml', '0.05 ml : 25 ml', '0.05 ml : 25 ml', '0.05 ml : 37.5 ml', '0.05 ml : 37.5 ml', '0.05 ml : 37.5 ml', '0.05 ml : 37.5 ml', '0.05 ml : 50 ml', '0.05 ml : 50 ml', '0.05 ml : 50 ml', '0.05 ml : 50 ml', '0.05 ml : 62.5 ml', '0.05 ml : 62.5 ml', '0.05 ml : 62.5 ml', '0.05 ml : 62.5 ml', '0.05 ml : 75 ml', '0.05 ml : 75 ml', '0.05 ml : 75 ml', '0.05 ml : 75 ml'] I found this answered question, but as I already mentioned, I get separate symbols, separated by coma.
You can use a simple for loop. new_conc = [] for item in conc: new_conc.extend([item] * 4)
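The loop can also be written as a nested comprehension; both give the same result:

```python
conc = ['0.05 ml : 25 ml', '0.05 ml : 37.5 ml']

# Loop version from the answer
new_conc = []
for item in conc:
    new_conc.extend([item] * 4)

# Equivalent one-liner with a nested comprehension
new_conc2 = [item for item in conc for _ in range(4)]
print(new_conc == new_conc2)  # True
```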
3
3
75,548,903
2023-2-23
https://stackoverflow.com/questions/75548903/how-to-make-snakemake-wildcard-work-for-empty-string
I expected Snakemake to allow wildcards to be empty strings, alas, this isn't the case. How can I make a wildcard accept an empty string?
Wildcards by default only match the regex .+ meaning everything but empty strings. This is unfortunately not documented beyond a Google group conversation. To make a wildcard accept empty strings, simply add a custom wildcard constraint wildcard_constraints: foo=".*", either within the scope of a rule or globally: # Option 1 wildcard_constraints: foo=".*" rule a: input: in{foo}.txt output: out{foo}.txt wildcard_constraints: foo=".*" # Option 2 shell: "cp {input} {output}"
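The difference between the default constraint and the relaxed one comes down to the regexes `.+` and `.*`, which can be checked directly in Python:

```python
import re

# Default wildcard constraint: ".+" rejects the empty string
assert re.fullmatch(r".+", "") is None
assert re.fullmatch(r".+", "x") is not None

# Relaxed constraint: ".*" also matches the empty string
assert re.fullmatch(r".*", "") is not None
assert re.fullmatch(r".*", "x") is not None
print("ok")
```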
4
6
75,547,631
2023-2-23
https://stackoverflow.com/questions/75547631/overwrite-single-file-in-a-google-cloud-storage-bucket-via-python-code
I have a logs.txt file at certain location, in a Compute Engine VM Instance. I want to periodically backup (i.e. overwrite) logs.txt in a Google Cloud Storage bucket. Since logs.txt is the result of some preprocessing made inside a Python script, I want to also use that script to upload / copy that file, into the Google Cloud Storage bucket (therefore, the use of cp cannot be considered an option). Both the Compute Engine VM instance, and the Cloud Storage bucket, stay at the same GCP project, so "they see each other". What I am attempting right now, based on this sample code, looks like: from google.cloud import storage bucket_name = "my-bucket" destination_blob_name = "logs.txt" source_file_name = "logs.txt" # accessible from this script storage_client = storage.Client() bucket = storage_client.bucket(bucket_name) blob = bucket.blob(destination_blob_name) generation_match_precondition = 0 blob.upload_from_filename(source_file_name, if_generation_match=generation_match_precondition) print(f"File {source_file_name} uploaded to {destination_blob_name}.") If gs://my-bucket/logs.txt does not exist, the script works correctly, but if I try to overwrite, I get the following error: Traceback (most recent call last): File "/usr/local/lib/python3.8/dist-packages/google/cloud/storage/blob.py", line 2571, in upload_from_file created_json = self._do_upload( File "/usr/local/lib/python3.8/dist-packages/google/cloud/storage/blob.py", line 2372, in _do_upload response = self._do_multipart_upload( File "/usr/local/lib/python3.8/dist-packages/google/cloud/storage/blob.py", line 1907, in _do_multipart_upload response = upload.transmit( File "/usr/local/lib/python3.8/dist-packages/google/resumable_media/requests/upload.py", line 153, in transmit return _request_helpers.wait_and_retry( File "/usr/local/lib/python3.8/dist-packages/google/resumable_media/requests/_request_helpers.py", line 147, in wait_and_retry response = func() File 
"/usr/local/lib/python3.8/dist-packages/google/resumable_media/requests/upload.py", line 149, in retriable_request self._process_response(result) File "/usr/local/lib/python3.8/dist-packages/google/resumable_media/_upload.py", line 114, in _process_response _helpers.require_status_code(response, (http.client.OK,), self._get_status_code) File "/usr/local/lib/python3.8/dist-packages/google/resumable_media/_helpers.py", line 105, in require_status_code raise common.InvalidResponse( google.resumable_media.common.InvalidResponse: ('Request failed with status code', 412, 'Expected one of', <HTTPStatus.OK: 200>) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/my_folder/upload_to_gcs.py", line 76, in <module> blob.upload_from_filename(source_file_name, if_generation_match=generation_match_precondition) File "/usr/local/lib/python3.8/dist-packages/google/cloud/storage/blob.py", line 2712, in upload_from_filename self.upload_from_file( File "/usr/local/lib/python3.8/dist-packages/google/cloud/storage/blob.py", line 2588, in upload_from_file _raise_from_invalid_response(exc) File "/usr/local/lib/python3.8/dist-packages/google/cloud/storage/blob.py", line 4455, in _raise_from_invalid_response raise exceptions.from_http_status(response.status_code, message, response=response) google.api_core.exceptions.PreconditionFailed: 412 POST https://storage.googleapis.com/upload/storage/v1/b/production-onementor-dt-data/o?uploadType=multipart&ifGenerationMatch=0: { "error": { "code": 412, "message": "At least one of the pre-conditions you specified did not hold.", "errors": [ { "message": "At least one of the pre-conditions you specified did not hold.", "domain": "global", "reason": "conditionNotMet", "locationType": "header", "location": "If-Match" } ] } } : ('Request failed with status code', 412, 'Expected one of', <HTTPStatus.OK: 200>) I have checked the documentation for upload_from_filename, but it seems there is no 
flag to "enable overwritting". How to properly overwrite a file existing in a Google Cloud Storage Bucket, using Python language?
It's because of if_generation_match As a special case, passing 0 as the value for if_generation_match makes the operation succeed only if there are no live versions of the blob. This is what is meant by the return message "At least one of the pre-conditions you specified did not hold." You should pass None or leave out that argument altogether.
7
8
75,545,370
2023-2-23
https://stackoverflow.com/questions/75545370/how-to-configure-the-entrypoint-cmd-for-docker-based-python3-lambda-functions
I switched from a zip-based deployment to a docker-based deployment of two lambda functions (which are used in an API Gateway). Both functions where in the same zip file and I want to have both functions in the same docker-based container (meaning I can't use the cmd setting in my Dockerfile (or to be precise need to overwrite it anyway). Previously, I used the handler attribute in the cloudformation template for specifying which handler function to call in which module, e.g. ... ConfigLambda: Type: 'AWS::Serverless::Function' Properties: Handler: config.handler ... ... LogLambda: Type: 'AWS::Serverless::Function' Properties: Handler: logs.handler ... but with a docker-based build one has to define an ImageConfig, i.e. ... LogLambda: Type: 'AWS::Serverless::Function' Properties: PackageType: Image ImageUri: !Ref EcrImageUri FunctionName: !Sub "${AWS::StackName}-Logs" ImageConfig: WorkingDirectory: /var/task Command: ['logs.py'] EntryPoint: ['/var/lang/bin/python3'] ... ConfigLambda: Type: 'AWS::Serverless::Function' Properties: PackageType: Image ImageUri: !Ref EcrImageUri FunctionName: !Sub "${AWS::StackName}-Config" ImageConfig: WorkingDirectory: /var/task Command: ['config.py'] EntryPoint: ['/var/lang/bin/python3'] I'm a bit stuck because this does not work, no matter what combination I pass to the command array. If I fire a test event in the AWS console, I get the following error RequestId: <uuid> Error: Runtime exited without providing a reason Runtime.ExitError Judging from the full output, the file is loaded and executed, but the handler function is not invoked (there is some output from a logging setup function which is called right after the module imports). The section in the AWS documentation on python3 based lambdas state that naming for handlers should be file_name.function (e.g. function_lambda.lambda_handler), but this doesn't give any clues on how do to this for command array in a ImageConfig. 
How do I set the Command section correctly for my lambda function in my cloudformation template?
First, the container you deploy to AWS Lambda has to implement the Lambda Runtime Interface. AWS Lambda isn't a generic docker container runtime, it only supports running containers that implement a specific interface. The easiest way to ensure your container implements this interface is to base it on one of the AWS provided base images. Note how the ENTRYPOINT for the base Python images isn't python it is a custom script. That script expects the COMMAND to be in the same app.handler format it is in for non-container based Lambda functions. See an example Dockerfile here.
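A minimal sketch of what the container side could look like (file names and Python version here are assumptions; the key point is basing on the AWS image and leaving its ENTRYPOINT alone):

```dockerfile
# Hypothetical Dockerfile: base on the AWS-provided image so the Lambda
# Runtime Interface Client is already wired up as the ENTRYPOINT.
FROM public.ecr.aws/lambda/python:3.9

# Both handler modules live in the same image
COPY config.py logs.py ${LAMBDA_TASK_ROOT}/
# No CMD here on purpose: each function supplies its own via ImageConfig
```

In the template, each function then only needs the handler string in module.function form, e.g. `Command: ['logs.handler']`, and can drop the custom `EntryPoint` and `WorkingDirectory` overrides.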
6
6
75,529,064
2023-2-22
https://stackoverflow.com/questions/75529064/how-to-load-multiple-partition-parquet-files-from-gcs-into-pandas-dataframe
I am trying to read multiple parquet files stored as partitions from google cloud storage and read them as 1 single pandas data frame. As an example, here is the folder structure at gs://path/to/storage/folder/ And inside each of the event_date=*, there are multiple parquet files So the directory structure is something like this - --gs://path/to/storage/folder/ ---event_date=2023-01-01/ ---abc.parquet ---def.parquet ---event_date=2023-01-02/ ---ghi.parquet ---jkl.parquet I want to load this to pandas data frame and I used below code import pandas as pd import gcsfs from pyarrow import parquet url = "gs://path/to/storage/folder/event_date=*/*" fs = gcsfs.GCSFileSystem() files = ["gs://" + path for path in fs.glob(url)] print(files) data = parquet.ParquetDataset(files, filesystem=fs) multiple_dates_df = data.read().to_pandas() print(multiple_dates_df.shape) But I get below error - OSError: Passed non-file path: gs://path/to/storage/folder/event_date=2023-01-01/abc.parquet How do I fix this?
It seems it is not possible for pandas to read multiple parquet files stored under a GCS path this way. There is a bug raised for this on GitHub, which is still open; further progress can be tracked there.
3
3
75,535,679
2023-2-22
https://stackoverflow.com/questions/75535679/implementation-of-adamw-is-deprecated-and-will-be-removed-in-a-future-version-u
How to fix this deprecated AdamW model? I tried to use the BERT model to perform a sentiment analysis on the hotel reviews, when I run this piece of code, it prompts the following warning. I am still studying the transformers and I don't want the code to be deprecated very soon. I searched on the web and I can't find the solution yet. I found this piece of information, but I don't know how to apply it to my code. To switch optimizer, put optim="adamw_torch" in your TrainingArguments (the default is "adamw_hf") could anyone kindly help with this? from transformers import BertTokenizer, BertForSequenceClassification import torch_optimizer as optim from torch.utils.data import DataLoader from transformers import AdamW import pandas as pd import torch import random import numpy as np import torch.nn as nn from torch.nn import CrossEntropyLoss from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, confusion_matrix, roc_auc_score, classification_report from sklearn.model_selection import train_test_split from sklearn import preprocessing from tqdm.notebook import tqdm import json from collections import OrderedDict import logging from torch.utils.tensorboard import SummaryWriter skip some code... 
param_optimizer = list(model.named_parameters()) no_decay = ['bias', 'gamma', 'beta'] optimizer_grouped_parameters = [{ 'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay_rate': 0.01 }, { 'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay_rate': 0.0 }] # optimizer = AdamW(optimizer_grouped_parameters, lr=1e-5) ##deprecated #optimizer = optim.AdamW(optimizer_grouped_parameters, lr=1e-5) ##torch.optim.AdamW (not working) step = 0 best_acc = 0 epoch = 10 writer = SummaryWriter(log_dir='model_best') for epoch in tqdm(range(epoch)): for idx, batch in tqdm(enumerate(train_loader), total=len(train_texts) // batch_size, leave=False): optimizer.zero_grad() input_ids = batch['input_ids'].to(device) attention_mask = batch['attention_mask'].to(device) labels = batch['labels'].to(device) outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels) loss = outputs[0] # Calculate Loss logging.info( f'Epoch-{epoch}, Step-{step}, Loss: {loss.cpu().detach().numpy()}') step += 1 loss.backward() optimizer.step() writer.add_scalar('train_loss', loss.item(), step) logging.info(f'Epoch {epoch}, present best acc: {best_acc}, start evaluating.') accuracy, precision, recall, f1 = eval_model(model, eval_loader) # Evaluate Model writer.add_scalar('dev_accuracy', accuracy, step) writer.add_scalar('dev_precision', precision, step) writer.add_scalar('dev_recall', recall, step) writer.add_scalar('dev_f1', f1, step) if accuracy > best_acc: model.save_pretrained('model_best') # Save Model tokenizer.save_pretrained('model_best') best_acc = accuracy
If you comment out both these lines: import torch_optimizer as optim from transformers import AdamW and then use: optimizer = torch.optim.AdamW(optimizer_grouped_parameters, lr=1e-5) does it work? If not, what is the error? To switch optimizer, put optim="adamw_torch" in your TrainingArguments (the default is "adamw_hf") This is referring to Huggingface Trainer, which is configured with a TrainingArguments instance. But as you are using your own training loop, this is not applicable to you.
5
5
75,537,816
2023-2-22
https://stackoverflow.com/questions/75537816/transform-a-dataframe-for-network-analysis-using-pandas
I have a data frame of online game matches including two specific columns: IDs of matches and IDs of players participated in a particular match. For instance: match_id player_id 0 1 0 2 0 3 0 4 0 5 1 6 1 1 1 7 1 8 1 2 Hence, player_id is a unique identificator of a player. Meanwhile, match_id is an ID of a match played, and it is duplicated fixed number of times (say, 5), since 5 is a maximum number of players that are able to participate in a certain match. So in each row, match_id corresponds player_id meaning that a certain player participated in a particular game. As it can be seen from the table above, two or more players can play together more than one time (or they can have not any co-plays at all). And it's why I'm interested in transforming this initial data frame into a adjacency matrix, in which the intersection of row and a column would give the number of co-played matches. Another option would be to create a data frame like following: player_1 player_2 coplays_number 1 2 2 1 3 1 1 4 1 1 10 0 1 5 1 ... ... ... Hereby, my task is to prepare the data for a further analysis of a co-plays network using igraph or networkx. I also want to get a weighted network, that is a weight of an edge would mean a number of co-played matches between two nodes (players). Edge in this case means that two users have played together, i.e. they have participated in the same match once or they have played together as a team in two or more matches (like players' IDs 1 and 2 in the initial data example above). My question is: how can I transform my initial data frame into network data, that igraph or networkx functions would take as an argument, using pandas and numpy? Or maybe I do not need any data manipulations and igraph or networkx functions are able to work with the initial data frame? Thanks in advance for your answers and recommendations!
I think you don't need networkx if you use permutations from itertools and pd.crosstab: from itertools import permutations pairs = (df.groupby('match_id')['player_id'] .apply(lambda x: list(permutations(x, r=2))) .explode()) adj = pd.crosstab(pairs.str[0], pairs.str[1], rownames=['Player 1'], colnames=['Player 2']) Output: >>> adj Player 2 1 2 3 4 5 6 7 8 Player 1 1 0 2 1 1 1 1 1 1 2 2 0 1 1 1 1 1 1 3 1 1 0 1 1 0 0 0 4 1 1 1 0 1 0 0 0 5 1 1 1 1 0 0 0 0 6 1 1 0 0 0 0 1 1 7 1 1 0 0 0 1 0 1 8 1 1 0 0 0 1 1 0 If you want a flat list (not an adjacency matrix), use combinations: from itertools import combinations pairs = (df.groupby('match_id')['player_id'] .apply(lambda x: frozenset(combinations(x, r=2))) .explode().value_counts()) coplays = pd.DataFrame({'Player 1': pairs.index.str[0], 'Player 2': pairs.index.str[1], 'coplays_number': pairs.tolist()}) Output: >>> coplays Player 1 Player 2 coplays_number 0 1 2 2 1 2 4 1 2 6 2 1 3 8 2 1 4 7 2 1 5 1 7 1 6 6 7 1 7 1 8 1 8 6 8 1 9 6 1 1 10 3 5 1 11 1 3 1 12 2 5 1 13 4 5 1 14 2 3 1 15 1 4 1 16 1 5 1 17 3 4 1 18 7 8 1
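If only the flat weighted edge list is needed, it can also be built with a Counter over per-match pair combinations (same data as the question); the resulting tuples plug straight into networkx via G.add_weighted_edges_from(edges):

```python
from collections import Counter
from itertools import combinations

import pandas as pd

df = pd.DataFrame({
    "match_id": [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
    "player_id": [1, 2, 3, 4, 5, 6, 1, 7, 8, 2],
})

# Count unordered co-play pairs per match; frozenset makes (1, 2) == (2, 1)
coplays = Counter()
for _, grp in df.groupby("match_id"):
    coplays.update(frozenset(p) for p in combinations(grp["player_id"], 2))

# (player_1, player_2, weight) tuples, ready for a weighted graph
edges = [(min(pair), max(pair), w) for pair, w in coplays.items()]
print(coplays[frozenset((1, 2))])  # 2: players 1 and 2 shared both matches
```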
3
2
75,537,221
2023-2-22
https://stackoverflow.com/questions/75537221/learning-python-regex-why-can-t-i-use-and-operator-in-if-statement
I’m trying to create a very basic mock password verification program to get more comfortable with meta characters. The program is supposed to take an input, use regex to verify it has at least one capital letter and at least one number, then return either “Password created” if it does, or “Wrong format” if it doesn’t. I’m trying to use an AND statement inside of my conditional statement and I know it’s not optimal, I just don’t understand why it doesn’t work at all. Here’s the code: import re password = input() #check input for at least one cap letter and at least one number if re.match(r"[A-Z]*", password) and re.match(r"[0-9]*", password): print("Password created") else: print("Wrong format") Edit: To everyone helping and asking for clarification, I’d like to apologize. The original code did not have the asterisks because I’m new to StackOverflow and did not use the correct formatting. I’m also new to asking coding questions so I’ll give some more context as requested. I’ve since changed the code to this: import re password = input() #check input for at least one cap letter and at least one number if re.search(r"[A-Z]*", password) and re.search(r"[0-9]*", password): print("Password created") else: print("Wrong format") Here are some example inputs and their expected vs actual outputs: In: “Greatness” Expect: “Wrong format” Actual: “Password created” In: “12335” Expect: “Wrong format” Actual: “Password created” In: “Gm16gs” Expect: “Password created” Actual: “Password created” If I’m missing any more context please let me know as I am still new to this. Update: I’m a moron. It wasn’t the and, it was the asterisks. Thank you so much everyone. I’ve marked the first answer as correct because the comments show me that I should’ve been using “+” and not “*”
Use re.search to find a match anywhere in the string. re.match will only return a match if the match starts from the beginning of the string. if re.search("[A-Z]", password) and re.search("[0-9]", password):
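Wrapping that condition in a helper and running the question's example inputs through it confirms the expected outputs:

```python
import re

def valid_password(pw: str) -> bool:
    # At least one capital letter AND at least one digit, anywhere in the string
    return bool(re.search(r"[A-Z]", pw) and re.search(r"[0-9]", pw))

print(valid_password("Gm16gs"))     # True
print(valid_password("Greatness"))  # False: no digit
print(valid_password("12335"))      # False: no capital letter
```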
4
5
75,534,231
2023-2-22
https://stackoverflow.com/questions/75534231/how-can-i-connect-to-remote-database-using-psycopg3
I'm using Psycopg3 (not 2!) and I can't figure out how can I connect to a remote Postgres server psycopg.connect(connection_string) https://www.psycopg.org/psycopg3/docs/ Thanks!
Psycopg3 uses the postgresql connection string which can either be a string of keyword=value elements (separated by spaces) with psycopg.connect("host=your_server_hostname port=5432 dbname=your_db_name") as conn: or a URI with psycopg.connect("postgresql://user:user_password@db_server_hostname:5432") as conn: So, put all together with their example (mixed with one example from above connection strings) from their docs: # Note: the module name is psycopg, not psycopg3 import psycopg # Connect to an existing database with psycopg.connect("postgresql://user:user_password@db_server_hostname:5432") as conn: # Open a cursor to perform database operations with conn.cursor() as cur: # Execute a command: this creates a new table cur.execute(""" CREATE TABLE test ( id serial PRIMARY KEY, num integer, data text) """) # Pass data to fill a query placeholders and let Psycopg perform # the correct conversion (no SQL injections!) cur.execute( "INSERT INTO test (num, data) VALUES (%s, %s)", (100, "abc'def")) # Query the database and obtain data as Python objects. cur.execute("SELECT * FROM test") cur.fetchone() # will return (1, 100, "abc'def") # You can use `cur.fetchmany()`, `cur.fetchall()` to return a list # of several records, or even iterate on the cursor for record in cur: print(record) # Make the changes to the database persistent conn.commit()
4
10
75,528,960
2023-2-22
https://stackoverflow.com/questions/75528960/extracting-replies-from-yahoo-finance-forum
I am trying to scrape comments from the Yahoo Finance conversation page (e.g. TSLA) using Python Selenium. I would like to extract all comments together with their replies. As Yahoo Finance does not automatically show all the replies under each comment and have no unique identifier for individual comment, there are also problems of deleted comments, what would be the most efficient way to do it?
If you inspect the network tab, you'll notice the API that the client communicates with to fetch the comments and related data. It requires some data like spotId and uuid, presumably to identify the article. With this information, you can simply use BeautifulSoup and requests to make the process much more efficient and faster than using Selenium. Some example code: url = 'https://finance.yahoo.com/quote/TSLA/community?p=TSLA' response = requests.get(url, headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/110.0'}) soup = BeautifulSoup(response.text) data = json.loads(soup.select_one('#spotim-config').get_text(strip=True))['config'] url = "https://api-2-0.spot.im/v1.0.0/conversation/read" payload = json.dumps({ "conversation_id": data['spotId'] + data['uuid'].replace('_', '$'), "count": 250, "offset": 0 }) headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/110.0', 'Content-Type': 'application/json', 'x-spot-id': data['spotId'], 'x-post-id': data['uuid'].replace('_', '$'), } response = requests.post(url, headers=headers, data=payload) data = response.json() Since the API is paginated, play around with the values of count and offset to get subsequent pages.
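The id-building step can be exercised offline with only the standard library; the HTML below is a hypothetical, stripped-down stand-in for the real page (which is parsed with BeautifulSoup above), but the spotId/uuid transformation is the same:

```python
import json
import re

html = ('<script id="spotim-config" type="application/json">'
        '{"config": {"spotId": "sp_Rba9aFpG", "uuid": "finmb_27444752"}}'
        '</script>')

m = re.search(r'<script id="spotim-config"[^>]*>(.*?)</script>', html, re.S)
config = json.loads(m.group(1))["config"]

# The conversation id joins spotId and uuid, with "_" in the uuid turned into "$"
conversation_id = config["spotId"] + config["uuid"].replace("_", "$")
print(conversation_id)  # sp_Rba9aFpGfinmb$27444752
```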
3
2
75,515,475
2023-2-21
https://stackoverflow.com/questions/75515475/unable-to-display-two-tables-side-by-side-inside-a-panel
Using Python (version: 3.10.6) and the Rich (version: 13.3.1) package, I'm attempting to display two tables side by side inside a panel, like this: from rich.panel import Panel from rich.table import Table from rich.console import Console console = Console() table1 = Table() table1.add_column("a") table1.add_column("b") table1.add_row("a", "b") table1.add_row("a", "b") table2 = Table() table2.add_column("c") table2.add_column("d") table2.add_row("c", "d") table2.add_row("c", "d") panel = Panel.fit( [table1, table2], title="My Panel", border_style="red", title_align="left", padding=(1, 2), ) console.print(panel) This code is producing the following Traceback: File "/home/linux/ToolkitExtraction (copy 1)/table_in_panel.py", line 33, in <module> console.print(panel) File "/home/linux/.local/lib/python3.10/site-packages/rich/console.py", line 1694, in print extend(render(renderable, render_options)) File "/home/linux/.local/lib/python3.10/site-packages/rich/console.py", line 1326, in render for render_output in iter_render: File "/home/linux/.local/lib/python3.10/site-packages/rich/panel.py", line 204, in __rich_console__ else console.measure( File "/home/linux/.local/lib/python3.10/site-packages/rich/console.py", line 1278, in measure measurement = Measurement.get(self, options or self.options, renderable) File "/home/linux/.local/lib/python3.10/site-packages/rich/measure.py", line 109, in get get_console_width(console, options) File "/home/linux/.local/lib/python3.10/site-packages/rich/padding.py", line 132, in __rich_measure__ measure_min, measure_max = Measurement.get(console, options, self.renderable) File "/home/linux/.local/lib/python3.10/site-packages/rich/measure.py", line 119, in get raise errors.NotRenderableError( rich.errors.NotRenderableError: Unable to get render width for [<rich.table.Table object at 0x7f562c1c2470>, <rich.table.Table object at 0x7f562c0411e0>]; a str, Segment, or object with __rich_console__ method is required I tried inserting 
console.width = 150 prior to panel creation though this did not make any difference.
I think if you want side-by-side tables inside your panel, you'll need to wrap them in a Columns: panel = Panel.fit( Columns([table1, table2]), title="My Panel", border_style="red", title_align="left", padding=(1, 2), ) console.print(panel) That results in: ╭─ My Panel ───────────────────────────────────────────────────────────────────╮ │ │ │ ┏━━━┳━━━┓ ┏━━━┳━━━┓ │ │ ┃ a ┃ b ┃ ┃ c ┃ d ┃ │ │ ┡━━━╇━━━┩ ┡━━━╇━━━┩ │ │ │ a │ b │ │ c │ d │ │ │ │ a │ b │ │ c │ d │ │ │ └───┴───┘ └───┴───┘ │ │ │ ╰──────────────────────────────────────────────────────────────────────────────╯ You'll note that the panel is taking up the full terminal width despite the fact that we're calling Panel.fit. I haven't found a solution to that issue, but it is possible to provide Panel with an explicit width: panel = Panel.fit( Columns([table1, table2]), width=40, title="My Panel", border_style="red", title_align="left", padding=(1, 2), ) Which produces: ╭─ My Panel ───────────────────────────╮ │ │ │ ┏━━━┳━━━┓ ┏━━━┳━━━┓ │ │ ┃ a ┃ b ┃ ┃ c ┃ d ┃ │ │ ┡━━━╇━━━┩ ┡━━━╇━━━┩ │ │ │ a │ b │ │ c │ d │ │ │ │ a │ b │ │ c │ d │ │ │ └───┴───┘ └───┴───┘ │ │ │ ╰──────────────────────────────────────╯
3
5
75,471,704
2023-2-16
https://stackoverflow.com/questions/75471704/masking-a-polars-dataframe-for-complex-operations
If I have a polars Dataframe and want to perform masked operations, I currently see two options: # create data df = pl.DataFrame([[1, 2, 3, 4], [5, 6, 7, 8]], schema = ['a', 'b']).lazy() # create a second dataframe for added fun df2 = pl.DataFrame([[8, 6, 7, 5], [15, 16, 17, 18]], schema=["b", "d"]).lazy() # define mask mask = pl.col('a').is_between(2, 3) Option 1: create filtered dataframe, perform operations and join back to the original dataframe masked_df = df.filter(mask) masked_df = masked_df.with_columns( # calculate some columns [ pl.col("a").sin().alias("new_1"), pl.col("a").cos().alias("new_2"), (pl.col("a") / pl.col("b")).alias("new_3"), ] ).join( # throw a join into the mix df2, on="b", how="left" ) res = df.join(masked_df, how="left", on=["a", "b"]) print(res.collect()) Option 2: mask each operation individually res = df.with_columns( # calculate some columns - we have to add `pl.when(mask).then()` to each column now [ pl.when(mask).then(pl.col("a").sin()).alias("new_1"), pl.when(mask).then(pl.col("a").cos()).alias("new_2"), pl.when(mask).then(pl.col("a") / pl.col("b")).alias("new_3"), ] ).join( # we have to construct a convoluted back-and-forth join to apply the mask to the join df2.join(df.filter(mask), on="b", how="semi"), on="b", how="left" ) print(res.collect()) Output: shape: (4, 6) ┌─────┬─────┬──────────┬───────────┬──────────┬──────┐ │ a ┆ b ┆ new_1 ┆ new_2 ┆ new_3 ┆ d │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ f64 ┆ f64 ┆ f64 ┆ i64 │ ╞═════╪═════╪══════════╪═══════════╪══════════╪══════╡ │ 1 ┆ 5 ┆ null ┆ null ┆ null ┆ null │ │ 2 ┆ 6 ┆ 0.909297 ┆ -0.416147 ┆ 0.333333 ┆ 16 │ │ 3 ┆ 7 ┆ 0.14112 ┆ -0.989992 ┆ 0.428571 ┆ 17 │ │ 4 ┆ 8 ┆ null ┆ null ┆ null ┆ null │ └─────┴─────┴──────────┴───────────┴──────────┴──────┘ Most of the time, option 2 will be faster, but it gets pretty verbose and is generally harder to read than option 1 when any sort of complexity is involved. 
Is there a way to apply a mask more generically to cover multiple subsequent operations?
You can use a struct with an unnest Your dfs weren't consistent between being lazy and eager so I'm going to make them both lazy ( df .join(df2, on='b') .with_columns(pl.when(mask).then( pl.struct( pl.col("a").sin().alias("new_1"), pl.col("a").cos().alias("new_2"), (pl.col("a") / pl.col("b").cast(pl.Float64())) .alias("new_3") ) .alias('allcols') )) .unnest('allcols') .with_columns( [pl.when(mask).then(x) for x in df2.columns if x not in df] ) .collect() ) I think that's the heart of your question is how to write when then with multiple column outputs which is covered by the first with_columns and then the second with_columns covers the quasi-semi join value replacement behavior. Another way you can write it is to first create a list of the columns in df2 that you want to be subject to the mask and then put those in the struct. The unsightly thing is that you have to then exclude those columns before you do the unnest df2_mask_cols=[x for x in df2.columns if x not in df.columns] ( df .join(df2, on='b') .with_columns(pl.when(mask).then( pl.struct([ pl.col("a").sin().alias("new_1"), pl.col("a").cos().alias("new_2"), (pl.col("a") / pl.col("b").cast(pl.Float64())) .alias("new_3") ] + df2_mask_cols ) .alias('allcols')) ) .select(pl.exclude(df2_mask_cols)) .unnest('allcols') .collect() ) Surprisingly, this approach was fastest: ( df .join(df2, on='b') .with_columns( pl.col("a").sin().alias("new_1"), pl.col("a").cos().alias("new_2"), (pl.col("a") /pl.col("b").cast(pl.Float64())).alias("new_3") ) .with_columns(pl.when(mask).then(pl.exclude(df.columns))) .collect() )
3
2
75,483,708
2023-2-17
https://stackoverflow.com/questions/75483708/replicate-pandas-ngroup-behaviour-in-polars
I am currently trying to replicate ngroup behaviour in polars to get consecutive group indexes (the dataframe will be grouped over two columns). For the R crowd, this would be achieved in the dplyr world with dplyr::group_indices or the newer dplyr::cur_group_id. As shown in the repro, I've tried couple avenues without much succcess, both approaches miss group sequentiality and merely return row counts by group. Quick repro: import polars as pl import pandas as pd df = pd.DataFrame( { "id": ["a", "a", "a", "a", "b", "b", "b", "b"], "cat": [1, 1, 2, 2, 1, 1, 2, 2], } ) df_pl = pl.from_pandas(df) print(df.groupby(["id", "cat"]).ngroup()) # This is the desired behaviour # 0 0 # 1 0 # 2 1 # 3 1 # 4 2 # 5 2 # 6 3 # 7 3 print(df_pl.select(pl.len().over("id", "cat"))) # This is only counting observation by group # ┌─────┐ # │ len │ # │ --- │ # │ u32 │ # ╞═════╡ # │ 2 │ # │ 2 │ # │ 2 │ # │ 2 │ # │ 2 │ # │ 2 │ # │ 2 │ # │ 2 │ # └─────┘ print(df_pl.group_by("id", "cat").agg(pl.len().alias("test"))) # shape: (4, 3) # ┌─────┬─────┬──────┐ # │ id ┆ cat ┆ test │ # │ --- ┆ --- ┆ --- │ # │ str ┆ i64 ┆ u32 │ # ╞═════╪═════╪══════╡ # │ a ┆ 1 ┆ 2 │ # │ a ┆ 2 ┆ 2 │ # │ b ┆ 1 ┆ 2 │ # │ b ┆ 2 ┆ 2 │ # └─────┴─────┴──────┘
We can use rank for this: (df.with_row_index() .with_columns( pl.first("index").over("id", "cat").rank("dense") - 1 ) ) shape: (8, 3) ┌───────┬─────┬─────┐ │ index ┆ id ┆ cat │ │ --- ┆ --- ┆ --- │ │ u32 ┆ str ┆ i64 │ ╞═══════╪═════╪═════╡ │ 0 ┆ a ┆ 1 │ │ 0 ┆ a ┆ 1 │ │ 1 ┆ a ┆ 2 │ │ 1 ┆ a ┆ 2 │ │ 2 ┆ b ┆ 1 │ │ 2 ┆ b ┆ 1 │ │ 3 ┆ b ┆ 2 │ │ 3 ┆ b ┆ 2 │ └───────┴─────┴─────┘
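For intuition, the dense group index that `rank("dense")` produces here can be sketched in plain Python. Groups are numbered in order of first appearance, which for this data coincides with pandas' sorted-group numbering:

```python
ids = ["a", "a", "a", "a", "b", "b", "b", "b"]
cats = [1, 1, 2, 2, 1, 1, 2, 2]

first_seen = {}  # group key -> dense group index
ngroup = []
for key in zip(ids, cats):
    if key not in first_seen:
        first_seen[key] = len(first_seen)  # number groups by first appearance
    ngroup.append(first_seen[key])

print(ngroup)  # [0, 0, 1, 1, 2, 2, 3, 3]
```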
3
6
75,495,451
2023-2-18
https://stackoverflow.com/questions/75495451/how-to-combine-two-mypy-disable-error-code-err-comments
I put these at the top of a module:
# mypy: disable-error-code=misc
# mypy: disable-error-code=attr-defined
but only the last line is honoured; the first one is ignored. The same happens with the order reversed, or with three lines: in each case all lines except the last one are ignored. I also tried to combine them into one line, but failed. What is the correct syntax to suppress multiple mypy errors in the whole module?
Use both quotes and commas: # mypy: disable-error-code="misc,attr-defined" (Credit goes to STerliakov, who discovered this but did not post an answer.)
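For completeness, the same error codes can also be silenced per line instead of module-wide. A small runnable sketch (the assignment code below is just illustrative, to have something for mypy to ignore; substitute misc or attr-defined as needed):

```python
# Module-wide form (quotes and commas, as in the accepted answer):
# mypy: disable-error-code="misc,attr-defined"

# Per-line alternative: silence specific error codes on one statement only.
value: int = "not an int"  # type: ignore[assignment]

print(value)
```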
4
1
75,476,135
2023-2-16
https://stackoverflow.com/questions/75476135/how-can-i-fix-the-pathlib-package-is-an-obsolete-backport-of-a-standard-libra
I am using Python 3.9.16. When I try to build an application like so: (base) G:\>pyinstaller --onefile grp.py I get an error that says: The 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller. Please remove this package (located in C:\Users\alpha\anaconda3\lib\site-packages) using conda remove then try again. I tried uninstalling and reinstalling pyinstaller, as well as upgrading conda, but none of this helped. How can I fix the problem?
If conda remove pathlib can't find the package, go to the lib folder and delete a folder called path-list-.....
4
4
75,523,498
2023-2-21
https://stackoverflow.com/questions/75523498/python-polars-how-to-get-the-row-count-of-a-lazyframe
The CSV file I have is 70 GB in size. I want to load the DF and count the number of rows, in lazy mode. What's the best way to do so? As far as I can tell, there is no function like shape in lazy mode according to the documentation. I found this answer, which provides a solution not based on Polars, but I wonder if it is possible to do this in Polars as well.
For polars 0.20.5+ To get the row count using polars. First load it into a lazyframe... lzdf=pl.scan_csv("mybigfile.csv") Then count the rows and return the result lzdf.select(pl.len()).collect() If you just want a python scalar rather than a table as a result then just subset it lzdf.select(pl.len()).collect().item() For older versions To get the row count using polars. First load it into a lazyframe... lzdf=pl.scan_csv("mybigfile.csv") Then count the rows and return the result lzdf.select(pl.count()).collect() If you just want a python scalar rather than a table as a result then just subset it lzdf.select(pl.count()).collect().item()
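For comparison, a plain-Python streaming count also avoids loading the file into memory; using csv.reader (rather than a raw line count) correctly handles newlines embedded in quoted fields. This is a sketch for a headered CSV, not a Polars API:

```python
import csv
import os
import tempfile

def count_csv_rows(path: str) -> int:
    """Stream a CSV and count data rows, excluding the header."""
    with open(path, newline="") as f:
        return sum(1 for _ in csv.reader(f)) - 1

# Tiny demonstration with a temporary file (illustrative data):
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False, newline="") as tmp:
    tmp.write("a,b\n1,2\n3,4\n5,6\n")
    path = tmp.name

n = count_csv_rows(path)
print(n)  # 3
os.remove(path)
```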
17
26
75,476,288
2023-2-16
https://stackoverflow.com/questions/75476288/difference-between-2-polars-dataframes
What is the best way to find the differences between 2 Polars dataframes? The equals method tells me if there is a difference, I want to find where is the difference. Example: import polars as pl df1 = pl.DataFrame([ {'id': 1,'col1': ['a',None],'col2': ['x']}, {'id': 2,'col1': ['b'],'col2': ['y', None]}, {'id': 3,'col1': [None],'col2': ['z']}] ) ┌─────┬─────────────┬─────────────┐ │ id ┆ col1 ┆ col2 │ │ --- ┆ --- ┆ --- │ │ i64 ┆ list[str] ┆ list[str] │ ╞═════╪═════════════╪═════════════╡ │ 1 ┆ ["a", null] ┆ ["x"] │ │ 2 ┆ ["b"] ┆ ["y", null] │ │ 3 ┆ [null] ┆ ["z"] │ └─────┴─────────────┴─────────────┘ df2 = pl.DataFrame([ {'id': 1,'col1': ['a'],'col2': ['x']}, {'id': 2,'col1': ['b', None],'col2': ['y', None]}, {'id': 3,'col1': [None],'col2': ['z']}] ) ┌─────┬─────────────┬─────────────┐ │ id ┆ col1 ┆ col2 │ │ --- ┆ --- ┆ --- │ │ i64 ┆ list[str] ┆ list[str] │ ╞═════╪═════════════╪═════════════╡ │ 1 ┆ ["a"] ┆ ["x"] │ │ 2 ┆ ["b", null] ┆ ["y", null] │ │ 3 ┆ [null] ┆ ["z"] │ └─────┴─────────────┴─────────────┘ The difference in the example is for id = 1 and id = 2. I can join the dataframes: df1.join(df2, on='id', suffix='_df2') ┌─────┬─────────────┬─────────────┬─────────────┬─────────────┐ │ id ┆ col1 ┆ col2 ┆ col1_df2 ┆ col2_df2 │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ list[str] ┆ list[str] ┆ list[str] ┆ list[str] │ ╞═════╪═════════════╪═════════════╪═════════════╪═════════════╡ │ 1 ┆ ["a", null] ┆ ["x"] ┆ ["a"] ┆ ["x"] │ │ 2 ┆ ["b"] ┆ ["y", null] ┆ ["b", null] ┆ ["y", null] │ │ 3 ┆ [null] ┆ ["z"] ┆ [null] ┆ ["z"] │ └─────┴─────────────┴─────────────┴─────────────┴─────────────┘ Expected result I would like to either: add a boolean columns that shows True in the rows with a difference filter and only display rows with a difference. The example has only 2 columns, but there are more columns in the dataframe.
Here's the filter approach ( df1.join(df2, on='id', suffix='_df2') .filter(pl.any_horizontal( pl.col(x).ne_missing(pl.col(f"{x}_df2")) for x in df1.columns if x!='id' )) ) If you wanted the bool column then you just change the filter to with_columns and add an alias. ( df1.join(df2, on='id', suffix='_df2') .with_columns( has_diff = pl.any_horizontal( pl.col(x).ne_missing(pl.col(f"{x}_df2")) for x in df1.columns if x!='id' ) ) ) This assumes that each df has all the same columns other than 'id'.
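The comparison that ne_missing performs (nulls compare as ordinary values rather than propagating) can be illustrated in plain Python, where == already treats None that way. Illustrative data mirroring the example:

```python
# Columns stored as plain lists, one entry per row.
df1 = {"id": [1, 2, 3], "col1": [["a", None], ["b"], [None]], "col2": [["x"], ["y", None], ["z"]]}
df2 = {"id": [1, 2, 3], "col1": [["a"], ["b", None], [None]], "col2": [["x"], ["y", None], ["z"]]}

value_cols = [c for c in df1 if c != "id"]
diff_ids = [
    df1["id"][i]
    for i in range(len(df1["id"]))
    if any(df1[c][i] != df2[c][i] for c in value_cols)
]
print(diff_ids)  # [1, 2]
```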
3
4
75,458,300
2023-2-15
https://stackoverflow.com/questions/75458300/efficient-speaker-diarization
I am running a VM instance on google cloud. My goal is to apply speaker diarization to several .wav files stored on cloud buckets. I have tried the following alternatives with the subsequent problems: Speaker diarization on Google's API. This seems to go fast but the results make no sense at all. I've already seen similar issues and I opened a thread myself but I get no answer... The output of this only returns maximum of two speakers with random labels. Here is the code I tried in python: from google.cloud import speech_v1p1beta1 as speech from google.cloud import storage import os import json import sys storage_client = storage.Client() client = speech.SpeechClient() if "--channel" in sys.argv: index = sys.argv.index("--channel") + 1 if index < len(sys.argv): channel = sys.argv[index] print("Channel:", channel) else: print("--channel option requires a value") audio_folder=f'audio_{channel}' # channel='tve' transcript_folder=f'transcript_output' bucket = storage_client.bucket(audio_folder) bucket2 = storage_client.bucket(transcript_folder) wav_files=[i.name for i in bucket.list_blobs()] json_files=[i.name.split(f'{channel}/')[-1] for i in bucket2.list_blobs(prefix=channel)] for file in wav_files: if not file.endswith('.wav'): continue transcript_name=file.replace('.wav','.json') if transcript_name in json_files: continue gcs_uri = f"gs://{audio_folder}/{file}" # gcs_uri = f"gs://{audio_folder}/out2.wav" audio = speech.RecognitionAudio(uri=gcs_uri) diarization_config = speech.SpeakerDiarizationConfig( enable_speaker_diarization=True, min_speaker_count=2, #max_speaker_count=10, ) config = speech.RecognitionConfig( encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16, #sample_rate_hertz=8000, language_code="es-ES", diarization_config=diarization_config, #audio_channel_count = 2, ) print("Waiting for operation to complete...") operation = client.long_running_recognize(config=config, audio=audio) response=operation.result() result = response.results[-1] # 
print(result) # print(type(result)) with open(transcript_name,'w') as f: json.dump(str(result),f) # transcript_name=file.replace('.wav','.txt') # result = response.results[-1] # with open(transcript_name,'w') as f: # f.write(result) os.system(f'gsutil cp {transcript_name} gs://transcript_output/{channel}') os.remove(transcript_name) print(f'File {file} processed. ') No matter how the max_speaker or min are changed, results are the same. pyannote: As the above did not work, I decided to try with pyannote. The performance of it is very nice but there is one problem, it is extremely slow. For a wav file of 30 mins it takes more than 3 hours to finish the diarization. Here is my code: #import packages import os from datetime import datetime import pandas as pd from pyannote.audio import Pipeline from pyannote.audio import Model from pyannote.core.json import dump from pyannote.core.json import load from pyannote.core.json import loads from pyannote.core.json import load_from import subprocess from pyannote.database.util import load_rttm from google.cloud import speech_v1p1beta1 as speech from google.cloud import storage import sys # channel='a3' storage_client = storage.Client() if "--channel" in sys.argv: index = sys.argv.index("--channel") + 1 if index < len(sys.argv): channel = sys.argv[index] print("Channel:", channel) else: print("--channel option requires a value") audio_folder=f'audio_{channel}' transcript_folder=f'transcript_{channel}' bucket = storage_client.bucket(audio_folder) bucket2 = storage_client.bucket(transcript_folder) wav_files=[i.name for i in bucket.list_blobs()] rttm_files=[i.name for i in bucket2.list_blobs()] token="XXX" pipeline = Pipeline.from_pretrained("pyannote/[email protected]", use_auth_token=token) # this load the model model = Model.from_pretrained("pyannote/segmentation", use_auth_token=token) for file in wav_files: if not file.endswith('.wav'): continue rttm_name=file.replace('.wav','.rttm') if rttm_name in rttm_files: continue if 
'2023' not in file: continue print(f'Doing file {file}') gcs_uri = f"gs://{audio_folder}/{file}" os.system(f'gsutil cp {gcs_uri} {file}') diarization = pipeline(file) with open(rttm_name, "w") as rttm: diarization.write_rttm(rttm) os.system(f'gsutil cp {rttm_name} gs://transcript_{channel}/{rttm_name}') os.remove(file) os.remove(rttm_name) I am running this with python3.9 on a VM instance with GPU NVIDIA-T4. Is this normal? I've seen that pyannote.audio is kinda slow on the factor of 1x or so, this time is much more than that given that, in theory, it should be running on a dedicated GPU for it... Are there any faster alternatives? Any way to improve the code or design a VM that might increase speed?
In order to make this work quickly on a GPU (Google colab is used as an example): You need to first install pyannote: !pip install -qq https://github.com/pyannote/pyannote-audio/archive/refs/heads/develop.zip And then: from pyannote.audio import Pipeline import torch pipeline = Pipeline.from_pretrained( "pyannote/[email protected]", use_auth_token='your hugging face token here') pipeline.to(torch.device('cuda')) # switch to gpu diarization = pipeline(audio_file_path)
5
7
75,455,529
2023-2-15
https://stackoverflow.com/questions/75455529/specify-a-class-to-detect-using-yolov8-on-pre-trained-model
I'm new to YOLOv8; I just want the model to detect only some classes, not all 80 classes the model was trained on. How can I tell a YOLOv8 model to detect only one class, for example only person?
from ultralytics import YOLO

model = YOLO('YOLOv8m.pt')
I remember we can do this with YOLOv5, but I couldn't do the same with YOLOv8:
model = torch.hub.load("ultralytics/yolov5", 'custom', path='yolov5s.pt')
model.classes = [0]  # Only person
model.conf = 0.6
Just specify classes in predict with the class IDs you want to predict:
from ultralytics.yolo.engine.model import YOLO

model = YOLO("yolov8n.pt")
results = model.predict(source="0", show=True, stream=True, classes=0)  # classes=[0, 3, 5] for multiple classes

for i, result in enumerate(results):
    print('Do something with class 0')
4
6
75,464,271
2023-2-15
https://stackoverflow.com/questions/75464271/attributeerror-str-object-has-no-attribute-execute-on-connection
I have a problem with following code: from pandasql import sqldf import pandas as pd df = pd.DataFrame({'column1': [1, 2, 3], 'column2': [4, 5, 6]}) query = "SELECT * FROM df WHERE column1 > 1" new_dataframe = sqldf(query) print(new_dataframe) When I submit, I have this error: Traceback (most recent call last): File ~\AppData\Local\Programs\Spyder\Python\lib\site-packages\sqlalchemy\engine\base.py:1410 in execute meth = statement._execute_on_connection AttributeError: 'str' object has no attribute '_execute_on_connection' The above exception was the direct cause of the following exception: Traceback (most recent call last): File ~\AppData\Local\Programs\Spyder\pkgs\spyder_kernels\py3compat.py:356 in compat_exec exec(code, globals, locals) File c:\users\yv663dz\downloads\untitled1.py:18 new_dataframe = sqldf(query) File ~\AppData\Local\Programs\Spyder\Python\lib\site-packages\pandasql\sqldf.py:156 in sqldf return PandaSQL(db_uri)(query, env) File ~\AppData\Local\Programs\Spyder\Python\lib\site-packages\pandasql\sqldf.py:61 in __call__ result = read_sql(query, conn) File ~\AppData\Local\Programs\Spyder\Python\lib\site-packages\pandas\io\sql.py:592 in read_sql return pandas_sql.read_query( File ~\AppData\Local\Programs\Spyder\Python\lib\site-packages\pandas\io\sql.py:1557 in read_query result = self.execute(*args) File ~\AppData\Local\Programs\Spyder\Python\lib\site-packages\pandas\io\sql.py:1402 in execute return self.connectable.execution_options().execute(*args, **kwargs) File ~\AppData\Local\Programs\Spyder\Python\lib\site-packages\sqlalchemy\engine\base.py:1412 in execute raise exc.ObjectNotExecutableError(statement) from err ObjectNotExecutableError: Not an executable object: 'SELECT * FROM df WHERE column1 > 1' I installed the latest versions of pandas, pandasql and sqlalchemy and I use Spyder as IDE. Could someone help me please?
SQLAlchemy 2.0 (released 2023-01-26) requires that raw SQL queries be wrapped by sqlalchemy.text. The general solution for this error message is to pass the query text to sqlalchemy.text() from sqlalchemy import text ... query = text("SELECT * FROM some_table WHERE column1 > 1") However in this case the OP is using pandasql, which expects a string. There does not seem to be a straightforward way to make pandasql compatible with SQLAlchemy >= 2.0, and the package seems to be unmaintained, so the only solutions are to find a fork that has fixed the problem (there are some), fork the project yourself and fix it, or downgrade your SQLAlchemy installation using your Python package manager. For example, if you use pip: python3 -m pip install --upgrade 'sqlalchemy<2.0'
20
54
75,500,135
2023-2-19
https://stackoverflow.com/questions/75500135/how-can-i-add-my-own-id-instead-of-the-already-given-id-in-mongodb-in-python
I have a class model using Pydantic. I try to supply my own ID, but it gives me two id fields in the MongoDB database: the one I gave it and the one it makes automatically. Here is the result of my post method:
Here is my class in models/articleModel.py:
class ArticleModel(BaseModel):
    _id: int
    title: str
    body: str
    tags: Optional[list] = None
    datetime: Optional[datetime] = None
    caption: Optional[str] = None
    link: Optional[str] = None

    class Config:
        orm_mode = True
        allow_population_by_field_name = True
        arbitrary_types_allowed = True
Here is my code for the post method in routers/article_router:
@router.post("/article/", status_code=status.HTTP_201_CREATED)
def add_article(article: articleModel.ArticleModel):
    article.datetime = datetime.utcnow()
    try:
        result = Articles.insert_one(article.dict())
        pipeline = [
            {'$match': {'_id': result.inserted_id}}
        ]
        new_article = articleListEntity(Articles.aggregate(pipeline))[0]
        return new_article
    except DuplicateKeyError:
        raise HTTPException(status_code=status.HTTP_409_CONFLICT,
                            detail=f"Article with title: '{article.id}' already exists")
I had to change the articleModel to a dictionary and add a new key called _id. @router.post("/article/", status_code=status.HTTP_201_CREATED) def add_article(article: articleModel.ArticleModel): article.datetime = datetime.utcnow() article_new_id = article.dict() article_new_id['_id'] = article_new_id['id'] del article_new_id['id'] try: result = Articles.insert_one(article_new_id) pipeline = [ {'$match': {'_id': result.inserted_id}} ] new_article = articleListEntity(Articles.aggregate(pipeline))[0] return new_article except DuplicateKeyError: raise HTTPException(status_code=status.HTTP_409_CONFLICT, detail=f"Article with title: '{article.id}' already exists")
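The rename step in the accepted answer can be factored into a small helper using dict.pop. This is plain Python with no pymongo or Pydantic required, and the helper name is illustrative:

```python
def with_mongo_id(doc: dict, id_field: str = "id") -> dict:
    """Return a copy of doc with id_field renamed to MongoDB's _id key."""
    out = dict(doc)              # shallow copy so the caller's dict is untouched
    out["_id"] = out.pop(id_field)
    return out

article = {"id": 7, "title": "hello", "body": "world"}
converted = with_mongo_id(article)
print(converted["_id"])   # 7
print("id" in converted)  # False
```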
3
4
75,459,172
2023-2-15
https://stackoverflow.com/questions/75459172/loading-a-huggingface-model-on-multiple-gpus-using-model-parallelism-for-inferen
I have access to six 24GB GPUs. When I try to load some HuggingFace models, for example the following from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("google/ul2") model = AutoModelForSeq2SeqLM.from_pretrained("google/ul2") I get an out of memory error, as the model only seems to be able to load on a single GPU. However, while the whole model cannot fit into a single 24GB GPU card, I have 6 of these and would like to know if there is a way to distribute the model loading across multiple cards, to perform inference. HuggingFace seems to have a webpage where they explain how to do this but it has no useful content as of today.
When you load the model using from_pretrained(), you need to specify which device you want to load the model to. Thus, add the following argument, and the transformers library will take care of the rest: model = AutoModelForSeq2SeqLM.from_pretrained("google/ul2", device_map = 'auto') Passing "auto" here will automatically split the model across your hardware in the following priority order: GPU(s) > CPU (RAM) > Disk. Of course, this answer assumes you have cuda installed and your environment can see the available GPUs. Running nvidia-smi from a command-line will confirm this. Please report back if you run into further issues.
21
31
75,457,741
2023-2-15
https://stackoverflow.com/questions/75457741/dynamically-generating-marshmallow-schemas-for-sqlalchemy-fails-on-column-attrib
I am automatically deriving marshmallow schemas for SQLAlchemy objects using the approach described in How to dynamically generate marshmallow schemas for SQLAlchemy models. I am then decorating my model classes: @derive_schema class Foo(db.Model): id = db.Column(UUID(as_uuid=True), primary_key=True, server_default=sqlalchemy.text("uuid_generate_v4()")) name = db.Column(String, nullable=False) def __repr__(self): return self.name @derive_schema class FooSettings(db.Model): foo_id = Column(UUID(as_uuid=True), ForeignKey('foo.id'), primary_key=True, nullable=False) my_settings = db.Column(JSONB, nullable=True) foo = db.relationship('Foo', backref=db.backref('foo_settings')) Where my derive_schema decorator is defined as follows: import marshmallow from marshmallow_sqlalchemy import SQLAlchemyAutoSchema def derive_schema(cls): class Schema(SQLAlchemyAutoSchema): class Meta: include_fk = True include_relationships = True load_instance = True model = cls marshmallow.class_registry.register(f'{cls.__name__}.Schema', Schema) cls.Schema = Schema return cls This used to work fine with SQLAlchemy 1.4. 
While attempting to upgrade to 2.0.3, I am running into the following exception when changing my schema to inherit SQLAlchemyAutoSchema instead of ModelSchema: Traceback (most recent call last): from foo.model import Foo, FooSettings File "foo/model.py", line 21, in <module> class SourceSettings(db.Model): File "schema_generation/__init__.py", line 18, in derive_schema class Schema(SQLAlchemyAutoSchema): File "python3.10/site-packages/marshmallow/schema.py", line 121, in __new__ klass._declared_fields = mcs.get_declared_fields( File "python3.10/site-packages/marshmallow_sqlalchemy/schema/sqlalchemy_schema.py", line 91, in get_declared_fields fields.update(mcs.get_declared_sqla_fields(fields, converter, opts, dict_cls)) File "python3.10/site-packages/marshmallow_sqlalchemy/schema/sqlalchemy_schema.py", line 130, in get_declared_sqla_fields converter.fields_for_model( File "python3.10/site-packages/marshmallow_sqlalchemy/convert.py", line 141, in fields_for_model field = base_fields.get(key) or self.property2field(prop) File "python3.10/site-packages/marshmallow_sqlalchemy/convert.py", line 180, in property2field field_class = field_class or self._get_field_class_for_property(prop) File "python3.10/site-packages/marshmallow_sqlalchemy/convert.py", line 262, in _get_field_class_for_property column = prop.columns[0] File "python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 1329, in __getattr__ return self._fallback_getattr(key) File "python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 1298, in _fallback_getattr raise AttributeError(key) AttributeError: columns Looking internally in the stacktrace, it seems like the problem is here: def _get_field_class_for_property(self, prop): if hasattr(prop, "direction"): field_cls = Related else: column = prop.columns[0] field_cls = self._get_field_class_for_column(column) return field_cls There is a check for an attribute named direction which is specified on the SQLAlchemy Relationship object. 
However, this attribute seems to be dynamically loaded, which causes the conditional check to fail and fall back to prop.columns[0]. But since this object is a Relationship and not a ColumnProperty it has no columns attribute which causes the program to crash. However, I have found a way to force load the direction property by adding the following code to the derive_schema method, before creating the generating Schema class: import marshmallow from marshmallow_sqlalchemy import SQLAlchemyAutoSchema from sqlalchemy import inspect def derive_schema(cls): mapper = inspect(cls) _ = [_ for _ in mapper.relationships] class Schema(SQLAlchemyAutoSchema): class Meta: include_fk = True include_relationships = True load_instance = True model = cls marshmallow.class_registry.register(f'{cls.__name__}.Schema', Schema) cls.Schema = Schema return cls Enumerating the relationships and force loading them fixes the materialization of the direction property and thus the program loads fine. Am I missing something in the model definition to make this work without force loading the relationships?
This was a compatibility issue with SQLAlchemy 2.x. marshmallow-sqlalchemy 0.28 doesn't support SQLAlchemy 2.x but the "sqlalchemy<2.0" lock was only introduced in marshmallow-sqlalchemy 0.28.2 so before I released 0.28.2 people could end up with incompatible versions. marshmallow-sqlalchemy 0.29 supports SQLAlchemy 2.x and drops 1.3 support. TL;DR Update marshmallow-sqlalchemy and the issue should disappear.
3
3
75,519,932
2023-2-21
https://stackoverflow.com/questions/75519932/azure-function-python-model-2-in-docker-container
I am failing to get a minimal working example running with the following setup: azure function in docker container python as language, specifically the "new Python programming model V2" I followed the instructions from here but added the V2 flag, specifically: # init directory func init --worker-runtime python --docker -m V2 # build docker image docker build -t foo . # run functions locally docker run -p 80:80 foo Whatever I tried, the runtime seems to not pick up the auto generated http trigger function # function_app.py (autogenerated by func init ...) import azure.functions as func app = func.FunctionApp() @app.function_name(name="HttpTrigger1") @app.route(route="hello") # HTTP Trigger def test_function(req: func.HttpRequest) -> func.HttpResponse: return func.HttpResponse("HttpTrigger1 function processed a request!!!") I think the relevant part of the logs is: info: Host.Startup[327] 1 functions found info: Host.Startup[315] 0 functions loaded info: Host.Startup[0] Generating 0 job function(s) warn: Host.Startup[0] No job functions found. Try making your job classes and methods public. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.). info: Microsoft.Azure.WebJobs.Script.WebHost.WebScriptHostHttpRoutesManager[0] Initializing function HTTP routes No HTTP routes mapped because when I use the "programming model V1", then the Microsoft.Azure.WebJobs.Script.WebHost.WebScriptHostHttpRoutesManager actually prints some info about the mapped routes. How can I fix this? Is this not supported at the moment?
It seems there are ongoing changes on this. I was able to get it working by changing the environment variables in the auto generated dockerfile: # To enable ssh & remote debugging on app service change the base image to the one below # FROM mcr.microsoft.com/azure-functions/python:4-python3.10-appservice FROM mcr.microsoft.com/azure-functions/python:4-python3.10 ENV AzureWebJobsScriptRoot=/home/site/wwwroot \ AzureFunctionsJobHost__Logging__Console__IsEnabled=true \ AzureWebJobsFeatureFlags=EnableWorkerIndexing \ # added by me AzureWebJobsStorage=UseDevelopmentStorage=true # added by me COPY requirements.txt / RUN pip install -r /requirements.txt COPY . /home/site/wwwroot
6
8
75,511,558
2023-2-20
https://stackoverflow.com/questions/75511558/video-capture-from-webcam-only-works-when-debugging
SYSTEM AND INSTALL INFORMATION System Information OpenCV Version 4.7.0 Operating System: Windows 10.0.17763 (Pro - Version 21H2) CMake: 3.24.2 Python Version: 3.8.6 OpenCV version Installed from pip (but also built from source as part of diagnostics, reverted back to pip version as no change) Description of issue I am unable to capture any frames from a webcam, as I always get the following error: [ WARN:[email protected]] global cap_msmf.cpp:1759 CvCapture_MSMF::grabFrame videoio(MSMF): can't grab frame. Error: -2147483638 The camera opens OK, as the light next to the webcam turns on, and vid.isOpened() returns true. I have tried with a secondary external USB webcam (Logitech) and I have the same behaviour. If I run it with the cv2.CAP_DSHOW backend then the warning goes away; however, the image is still not displayed. The camera works in other apps such as MS Teams, Google Meet, and the built-in camera app that comes with Windows. I have tried removing and re-adding OpenCV and it did not help. I also tried compiling it from source and still no luck. I also tried the contrib build but this had no effect. I believe this may be related to my hardware in some way, as I can run the same script on a colleague's laptop without issue. However, both cameras I tried work with other apps, so I don't know what could be causing it to fail. HOWEVER If I run it through the Python debugger (either with python -m pdb webcam.py or through VS Code runner), then strangely it works. The first thing I checked was that there were not multiple versions installed and that the debugger wasn't calling one version rather than the other, but there appears to be no change. It's the same version of Python and the same version of OpenCV for both. 
I found another person with exactly the same issue, but it doesn't appear to have been resolved: https://answers.opencv.org/question/220309/videocapture-from-camera-works-only-in-debug-python/ I have also tried it in C++ (OpenCV), and using a different library in Rust (non-OpenCV), but neither was able to successfully capture a frame from the built-in webcam, or the external one. Steps to reproduce Code to replicate the issue, nothing fancy going on: # import the opencv library import cv2 import sys print(sys.version_info) print("OPEN CV: ", cv2.__version__) # define a video capture object vid = cv2.VideoCapture(0) # vid = cv2.VideoCapture(0, cv2.CAP_DSHOW) cv2.namedWindow('Test Window', cv2.WINDOW_NORMAL) print("Camera Opened: ", vid.isOpened()) while(True): # Capture the video frame ret, frame = vid.read() if ret: cv2.imshow('Test Window', frame) else: print("Error Drawing Frame") if cv2.waitKey(1) & 0xFF == ord('q'): break # After the loop release the cap object vid.release() # Destroy all the windows cv2.destroyAllWindows() Side Note/ Possible red herring... If I enable the OPENCV_VIDEOIO_DEBUG environment variable ( Set-Item -Path Env:OPENCV_VIDEOIO_DEBUG -Value ($Env:OPENCV_VIDEOIO_DEBUG + ";1" ) Then it gives a different error, however this may be a red herring. I have posted it here for completeness: Traceback (most recent call last): File "C:\Users\user1\AppData\Local\Programs\Python\Python38\lib\site-packages\cv2\__init__.py", line 181, in <module> bootstrap() File "C:\Users\user1\AppData\Local\Programs\Python\Python38\lib\site-packages\cv2\__init__.py", line 153, in bootstrap native_module = importlib.import_module("cv2") File "C:\Users\user1\AppData\Local\Programs\Python\Python38\lib\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) ImportError: DLL load failed while importing cv2: A dynamic link library (DLL) initialization routine failed. 
I don't know which DLL is causing this issue though, and since I can get it to work if I run it in the debugger, this seems unrelated. Next Steps What other debugging can I do to help resolve this issue? Are there any core Windows libraries that OpenCV (for example) uses that I could perhaps try re-installing, like MSMF? If so, how? Has anyone else even seen this issue?
This error has usually occurred for me when some application, such as an antivirus, interferes with OpenCV and blocks it from accessing the camera. It might be the case that some antivirus software has a tendency to block OpenCV when run in non-debug mode. My current hypothesis is that in debug mode the OpenCV libraries may be more transparent, which may not trigger the same behavior in your antivirus software. You can temporarily try disabling the antivirus, and if that works you can add an exception for OpenCV in your antivirus software to prevent it from being blocked in the future.
4
1
75,525,029
2023-2-21
https://stackoverflow.com/questions/75525029/msno-matrix-shows-an-error-when-i-use-any-venv-using-pyenv
I tried many times installing several virtual environments using pyenv, but the system shows a error in missingno library. This is : msno.matrix(df) `ValueError Traceback (most recent call last) Cell In[17], line 1 ----> 1 msno.matrix(df) File c:\Users\sarud\.pyenv\venvs\ETLs\lib\site-packages\missingno\missingno.py:72, in matrix(df, filter, n, p, sort, figsize, width_ratios, color, fontsize, labels, sparkline, inline, freq, ax) 70 # Remove extraneous default visual elements. 71 ax0.set_aspect('auto') ---> 72 ax0.grid(b=False) 73 ax0.xaxis.tick_top() 74 ax0.xaxis.set_ticks_position('none') File c:\Users\sarud\.pyenv\venvs\ETLs\lib\site-packages\matplotlib\axes\_base.py:3196, in _AxesBase.grid(self, visible, which, axis, **kwargs) 3194 _api.check_in_list(['x', 'y', 'both'], axis=axis) 3195 if axis in ['x', 'both']: -> 3196 self.xaxis.grid(visible, which=which, **kwargs) 3197 if axis in ['y', 'both']: 3198 self.yaxis.grid(visible, which=which, **kwargs) File c:\Users\sarud\.pyenv\venvs\ETLs\lib\site-packages\matplotlib\axis.py:1655, in Axis.grid(self, visible, which, **kwargs) 1652 if which in ['major', 'both']: 1653 gridkw['gridOn'] = (not self._major_tick_kw['gridOn'] 1654 if visible is None else visible) -> 1655 self.set_tick_params(which='major', **gridkw) 1656 self.stale = True ... 
1073 % (key, allowed_keys)) 1074 kwtrans.update(kw_) 1075 return kwtrans ValueError: keyword grid_b is not recognized; valid keywords are ['size', 'width', 'color', 'tickdir', 'pad', 'labelsize', 'labelcolor', 'zorder', 'gridOn', 'tick1On', 'tick2On', 'label1On', 'label2On', 'length', 'direction', 'left', 'bottom', 'right', 'top', 'labelleft', 'labelbottom', 'labelright', 'labeltop', 'labelrotation', 'grid_agg_filter', 'grid_alpha', 'grid_animated', 'grid_antialiased', 'grid_clip_box', 'grid_clip_on', 'grid_clip_path', 'grid_color', 'grid_dash_capstyle', 'grid_dash_joinstyle', 'grid_dashes', 'grid_data', 'grid_drawstyle', 'grid_figure', 'grid_fillstyle', 'grid_gapcolor', 'grid_gid', 'grid_in_layout', 'grid_label', 'grid_linestyle', 'grid_linewidth', 'grid_marker', 'grid_markeredgecolor', 'grid_markeredgewidth', 'grid_markerfacecolor', 'grid_markerfacecoloralt', 'grid_markersize', 'grid_markevery', 'grid_mouseover', 'grid_path_effects', 'grid_picker', 'grid_pickradius', 'grid_rasterized', 'grid_sketch_params', 'grid_snap', 'grid_solid_capstyle', 'grid_solid_joinstyle', 'grid_transform', 'grid_url', 'grid_visible', 'grid_xdata', 'grid_ydata', 'grid_zorder', 'grid_aa', 'grid_c', 'grid_ds', 'grid_ls', 'grid_lw', 'grid_mec', 'grid_mew', 'grid_mfc', 'grid_mfcalt', 'grid_ms']` I don't know the cause of the error, but a similar virtualenv installed using conda doesn't show it. I installed different Python versions using pyenv (3.11.2, 3.7.6, 3.9.13, 3.9.5), but each one shows the same error when I install missingno. I attach an image of the error; I use VS Code as my IDE. At the beginning, I thought it was the VS Code version, but when installing the libraries using conda the error doesn't appear.
I believe argument b has been renamed to visible. An earlier version: matplotlib/axes/_base.py A recent version: matplotlib/axes/_base.py
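The rename can be worked around until missingno catches up; here is a hedged sketch (the `grid_off` helper and the `DummyAxis` stub are illustrative inventions, not part of either library — the stub merely stands in for a real Matplotlib Axes that only accepts the new keyword):

```python
# Sketch of a version-tolerant workaround (my suggestion, not from the answer):
# matplotlib >= 3.5 renamed grid(b=...) to grid(visible=...), so try the new
# keyword first and fall back to the old one on older versions.
def grid_off(ax):
    try:
        ax.grid(visible=False)
    except TypeError:  # older matplotlib only knows `b`
        ax.grid(b=False)

# Stand-in object for a new-style Axes, used here only to exercise the helper.
class DummyAxis:
    def __init__(self):
        self.calls = []

    def grid(self, visible=None):
        self.calls.append(visible)

ax = DummyAxis()
grid_off(ax)
print(ax.calls)  # [False]
```

With a real Axes, the same `try`/`except TypeError` pattern keeps one code path working across both keyword spellings.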
4
5
75,523,057
2023-2-21
https://stackoverflow.com/questions/75523057/how-is-the-yolov8-best-loss-model-selected-by-the-trainer-class
From the YOLOv8 documentation, it is not clear to me which loss metric the YOLOv8 trainer class uses in determining the best loss model that is saved in a training run. Is it based on the validation or training loss? Specifically, when I look at the outputs from a YOLOv8 training run, I do not see any metadata indicating which epoch resulted in the best loss model saved at runs/train/weights/best.pt
In the save_model function you can see that it uses the maximum fitness to save the best model. The fitness is defined as a weighted combination of 4 metrics [P, R, mAP@0.5, mAP@0.5:0.95]. P and R are disregarded for some reason. mAP@0.5 and mAP@0.5:0.95 are weighted 0.1 and 0.9 respectively. If fitness cannot be found, loss is used instead.
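As a back-of-the-envelope sketch of that weighting (the metric values below are made up purely for illustration):

```python
# Hypothetical validation metrics in the order [P, R, mAP@0.5, mAP@0.5:0.95]
metrics = [0.80, 0.70, 0.65, 0.45]
weights = [0.0, 0.0, 0.1, 0.9]  # P and R carry zero weight

# Fitness is the weighted sum: 0.1 * mAP@0.5 + 0.9 * mAP@0.5:0.95
fitness = sum(w * m for w, m in zip(weights, metrics))
print(fitness)  # ~0.47
```

The checkpoint whose epoch maximizes this number is the one written to best.pt.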
4
8
75,480,002
2023-2-17
https://stackoverflow.com/questions/75480002/in-pytorch-how-can-i-avoid-an-expensive-broadcast-when-adding-two-tensors-then
I have two 2-d tensors, which align via broadcasting, so if I add/subtract them, I incur a huge 3-d tensor. I don't really need that though, since I'll be performing a mean on one dimension. In this demo, I unsqueeze the tensors to show how they align, but they are 2-d otherwise. x = torch.tensor(...) # (batch , 1, B) y = torch.tensor(...) # (1, , A, B) out = torch.cos(x - y).mean(dim=2) # (batch, B) Possible Solutions: An algebraic simplification, but for the life of me I haven't solved this yet. Some PyTorch primitive that'll help? This is cosine similarity, but, a bit different than torch.cosine_similarity. I'm applying it to complex numbers' .angle()s. Custom C/CPython code that loops efficiently. Other?
To save memory I recommend using torch.einsum: We can make use of the trigonometric identity cos(x-y) = cos(x)*cos(y) + sin(x)*sin(y) In this case we can apply einsum, where the usual summation becomes the averaging, and the + between the two products is applied as a separate step afterwards; in short xs, ys = torch.sin(x), torch.sin(y) xc, yc = torch.cos(x), torch.cos(y) # use einsum for sin/cos products and averaging sum, use + for sum of products: out = (torch.einsum('i k, j k -> i k', xs, ys) + torch.einsum('i k, j k -> i k', xc, yc)) / y.shape[1] Since measuring the memory consumption is a little bit tedious, I resorted to just measuring time as a proxy. Here you can see your original method and my proposal for various sizes of inputs. (The script for generating these plots is attached below.) import matplotlib.pyplot as plt import torch import time def main(): ns = torch.logspace(1, 3.2, 20).to(torch.long) tns = []; tes = [] for n in ns: tn, te = compare(n) tns.append(tn); tes.append(te) plt.loglog(ns, tns, ':.'); plt.loglog(ns, tes, '.-'); plt.loglog(ns, 1e-6*ns**1, ':'); plt.loglog(ns, 1e-6*ns**2, ':'); plt.legend(['naive', 'einsum', 'x^1', 'x^2']); plt.show() def compare(n): batch = a = b = n x = torch.zeros((batch, b)) # (batch , 1, B) y = torch.zeros((a, b)) # (1, , A, B) t = time.perf_counter(); ra = af(x.unsqueeze(1), y.unsqueeze(0)); print('naive method', tn := time.perf_counter() - t) t = time.perf_counter(); rb = bf(x, y); print('einsum method', te := time.perf_counter() - t) print((ra-rb).abs().max()) # verify we have same results return tn, te def af(x, y): return torch.cos(x - y).mean(dim=2) def bf(x, y): xs, ys = torch.sin(x), torch.sin(y) xc, yc = torch.cos(x), torch.cos(y) return (torch.einsum('i k, j k -> i k', xs, ys) + torch.einsum('i k, j k -> i k', xc, yc)) / y.shape[1] main()
3
3
75,527,054
2023-2-21
https://stackoverflow.com/questions/75527054/python3-how-to-spawn-jobs-in-parallel
I am pretty new to multithreading and would like to explore. I have a json file that provides some config. Based on this, I need to kick off some processing. Here is the config { "job1":{ "param1":"val1", "param2":"val2" }, "job2":{ "param3":"val3", "param4":"val4" } } and here is the python snippet config_file = open('config.json') config_data = json.load(config_file) for job_name,job_atts in config_data.items(): perform_job(job_name,job_atts) So in this way, I can finish the jobs one by one. Is there a way to run/kick off these jobs in parallel? Note that these jobs are completely independent of each other and do not need to be performed in a sequence. How can I achieve parallel runs via Python? Update: Here is what I tried >>> from multiprocessing import Pool >>> >>> config_data = json.loads(''' { ... "job1":{ ... "param1":"val1", ... "param2":"val2" ... }, ... "job2":{ ... "param3":"val3", ... "param4":"val4" ... } ... }''') >>> def perform_job(job_name,job_atts): ... print(job_name) ... print(job_atts) ... >>> args = [(name, attrs) ... for name, attrs in config_data.items()] >>> >>> with Pool() as pool: ... pool.starmap(perform_job, args) ... 
Process SpawnPoolWorker-27: Process SpawnPoolWorker-24: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/process.py", line 315, in _bootstrap self.run() File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/pool.py", line 114, in worker task = get() File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/queues.py", line 368, in get return _ForkingPickler.loads(res) AttributeError: Can't get attribute 'perform_job' on <module '__main__' (built-in)> Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/process.py", line 315, in _bootstrap self.run() File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/pool.py", line 114, in worker task = get() File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/queues.py", line 368, in get return _ForkingPickler.loads(res) AttributeError: Can't get attribute 'perform_job' on <module '__main__' (built-in)> But i am still getting the error
You are looking for the multiprocessing module. Use a process pool to iterate over many jobs. Here is an example source file that runs correctly when executed as $ python spawn.py. Putting the main code within a def main(): function is nice but not critical. Protecting it with an if __name__ == "__main__": clause is quite important, since child interpreters will be re-parsing the source file, see "safe importing of main". (Notice that the "single core" test won't run within the children.) The most relevant lines are the last two. #! /usr/bin/env python from multiprocessing import Pool import json config_data = json.loads( """ { "job1":{ "param1":"val1", "param2":"val2" }, "job2":{ "param3":"val3", "param4":"val4" } } """ ) args = list(config_data.items()) def perform_job(job_name, job_atts): print(job_name, job_atts) if __name__ == "__main__": # Single core: perform_job(*args[0]) perform_job(*args[1]) print() # Multi core: with Pool() as pool: pool.starmap(perform_job, args)
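A neighboring option (my sketch, not part of the answer above): if the jobs are I/O-bound rather than CPU-bound, concurrent.futures.ThreadPoolExecutor gives the same fan-out without the pickling and `__main__`-guard constraints of process pools. The `perform_job` body here is a placeholder for the real work:

```python
import json
from concurrent.futures import ThreadPoolExecutor

config_data = json.loads(
    '{"job1": {"param1": "val1", "param2": "val2"},'
    ' "job2": {"param3": "val3", "param4": "val4"}}'
)

def perform_job(job_name, job_atts):
    # Placeholder for the real processing; returns something checkable.
    return f"{job_name} done with {sorted(job_atts)}"

# map() preserves the submission order of the jobs.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda item: perform_job(*item), config_data.items()))

print(results)
```

For CPU-bound work the process-pool version in the answer is the right tool, since threads share one GIL.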
3
1
75,525,312
2023-2-21
https://stackoverflow.com/questions/75525312/how-can-i-convert-a-polars-dataframe-to-a-python-list
I understand Polars Series can be exported to a Python list. However, is there any way I can convert a Polars DataFrame to a Python list? In addition, if there is only a single column in the Polars DataFrame, how can I convert that into a Polars Series? I tried to use the pandas commands but they didn't work. I also checked the official Polars website for any related built-in functions, but I didn't see any.
How about the rows() method? df = pl.DataFrame( { "a": [1, 3, 5], "b": [2, 4, 6], } ) df.rows() [(1, 2), (3, 4), (5, 6)] df.rows(named=True) [{'a': 1, 'b': 2}, {'a': 3, 'b': 4}, {'a': 5, 'b': 6}] Alternatively, you could get all the DataFrame's columns using the get_columns() method, which would give you a list of Series, which you can then convert into a list of lists using the to_list() method on each Series in the list. If that's not what you're looking for be sure to hit me up with a reply, maybe then I'll be able to be of help to you.
8
18
75,495,800
2023-2-18
https://stackoverflow.com/questions/75495800/error-unable-to-extract-uploader-id-youtube-discord-py
I have a very powerful bot in discord (discord.py, Python) and it can play music in voice channels. It gets the music from youtube (youtube_dl). It worked perfectly before, but now it doesn't work with any video. I tried updating youtube_dl but it still doesn't work. I searched everywhere but I still can't find an answer that might help me. This is the error: Error: Unable to extract uploader id After and before the error log there is no more information. Can anyone help? I will leave some of the code that I use for my bot... The youtube setup settings: youtube_dl.utils.bug_reports_message = lambda: '' ytdl_format_options = { 'format': 'bestaudio/best', 'outtmpl': '%(extractor)s-%(id)s-%(title)s.%(ext)s', 'restrictfilenames': True, 'noplaylist': True, 'nocheckcertificate': True, 'ignoreerrors': False, 'logtostderr': False, 'quiet': True, 'no_warnings': True, 'default_search': 'auto', 'source_address': '0.0.0.0', # bind to ipv4 since ipv6 addresses cause issues sometimes } ffmpeg_options = { 'options': '-vn', } ytdl = youtube_dl.YoutubeDL(ytdl_format_options) class YTDLSource(discord.PCMVolumeTransformer): def __init__(self, source, *, data, volume=0.5): super().__init__(source, volume) self.data = data self.title = data.get('title') self.url = data.get('url') self.duration = data.get('duration') self.image = data.get("thumbnails")[0]["url"] @classmethod async def from_url(cls, url, *, loop=None, stream=False): loop = loop or asyncio.get_event_loop() data = await loop.run_in_executor(None, lambda: ytdl.extract_info(url, download=not stream)) #print(data) if 'entries' in data: # take first item from a playlist data = data['entries'][0] #print(data["thumbnails"][0]["url"]) #print(data["duration"]) filename = data['url'] if stream else ytdl.prepare_filename(data) return cls(discord.FFmpegPCMAudio(filename, **ffmpeg_options), data=data) Approximately the command to run the audio (from my bot): sessionChanel = message.author.voice.channel await 
sessionChannel.connect() url = matched.group(1) player = await YTDLSource.from_url(url, loop=client.loop, stream=True) sessionChannel.guild.voice_client.play(player, after=lambda e: print( f'Player error: {e}') if e else None)
This is a known issue, fixed in master. For a temporary fix, run python3 -m pip install --force-reinstall https://github.com/yt-dlp/yt-dlp/archive/master.tar.gz This installs the master version. Run it from the command line as yt-dlp URL, where URL is the URL of the video you want. See yt-dlp --help for all options. It should just work without errors. If you're using it as a module, import yt_dlp as youtube_dl might fix your problems (though there could be API changes that break your code; I don't know which version of yt_dlp you were using, etc.).
93
117
75,523,569
2023-2-21
https://stackoverflow.com/questions/75523569/runtimeerror-a-sqlalchemy-instance-has-already-been-registered-on-this-flask
I am writing a test for my Flask app that uses Flask-SQLAlchemy. In models.py, I used db = SQLAlchemy(), and wrote a function to configure it with the app. But when I run my test, I get the error "RuntimeError: A 'SQLAlchemy' instance has already been registered on this Flask app". I'm not sure where the test file is creating a new instance of SQLAlchemy. # flaskr.py from flask import Flask from models import setup_db def create_app(test_config=None): app = Flask(__name__) setup_db(app) return app # models.py from flask_sqlalchemy import SQLAlchemy database_path = "postgresql://student:student@localhost/bookshelf" db = SQLAlchemy() def setup_db(app, database_path=database_path): app.config["SQLALCHEMY_DATABASE_URI"] = database_path db.init_app(app) with app.app_context(): db.create_all() # test_flaskr.py import unittest from flaskr import create_app from models import setup_db class BookTestCase(unittest.TestCase): def setUp(self): self.app = create_app() self.client = self.app.test_client() setup_db(self.app, "postgresql://student:student@localhost/bookshelf_test") with self.app.app_context(): self.db = SQLAlchemy() self.db.init_app(self.app) self.db.create_all() def test_get_paginated_books(self): res = self.client.get("/books") data = res.json self.assertEqual(res.status_code, 200) self.assertTrue(data["success"]) self.assertTrue(data["total_books"]) self.assertTrue(len(data["books"])) When I run the test, I get the following error: $ python -m unittest -v test_flaskr.py test_get_paginated_books (test_flaskr.BookTestCase) ... 
ERROR ====================================================================== ERROR: test_get_paginated_books (test_flaskr.BookTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Users\swilk\OneDrive\DOCS-Programming\udacity-demo-bookshelf\backend\test_flaskr.py", line 21, in setUp setup_db(self.app, self.database_path) File "C:\Users\swilk\OneDrive\DOCS-Programming\udacity-demo-bookshelf\backend\models.py", line 24, in setup_db db.init_app(app) File "C:\Users\swilk\OneDrive\DOCS-Programming\udacity-demo-bookshelf\env\lib\site-packages\flask_sqlalchemy\extension.py", line 253, in init_app raise RuntimeError( RuntimeError: A 'SQLAlchemy' instance has already been registered on this Flask app. Import and use that instance instead. ---------------------------------------------------------------------- Ran 1 tests in 0.226s FAILED (errors=1)
Flask-SQLAlchemy 3 raises an error for a common incorrect setup of the extension. You must only call db.init_app(app) once for a given pair of db and app instances. You defined db = SQLAlchemy() in models, then called init_app on it in create_app. You must use that db, not create another instance. You must not call db.init_app again, it was already called in create_app. Your test's setUp is calling create_app, but then it's also calling setup_db again even though that's already called as part of create_app. Then further down, you're creating a new self.db = SQLAlchemy() and calling init_app on it with the same app you've already set up the other instance on. This is all incorrect. It looks like you're doing this because you're trying to set a database configuration for the test. As explained in the Flask tutorial, that's what the create_app function's test_config parameter is for. Get rid of the setup_db function, it's unnecessary indirection. Pass your test configuration to the factory to override default config. def create_app(test_config=None): app = Flask(__name__) app.config.from_mapping( SQLALCHEMY_DATABASE_URI="postgresql://student:student@localhost/bookshelf" ) if test_config is not None: app.config.from_mapping(test_config) db.init_app(app) with app.app_context(): db.create_all() return app from flaskr import create_app, db class BookTestCase(TestCase): def setUp(self): self.app = create_app({ "SQLALCHEMY_DATABASE_URI": "postgresql://student:student@localhost/bookshelf_test" }) Note that nothing was done directly with db in setUp, it was only imported to be used during tests.
5
3
75,512,527
2023-2-20
https://stackoverflow.com/questions/75512527/python-click-determine-whether-argument-comes-from-default-or-from-user
How to tell whether an argument in click is coming from the user or is the default value? For example: import click @click.command() @click.option('--value', default=1, help='a value.') def hello(value): print(value) if __name__ == "__main__": hello() Now if I run python script.py --value 1, the value is now coming from the user input as opposed to the default value (which is set to 1). Is there any way to discern where this value is coming from?
You can use Context.get_parameter_source to get what you want. This returns one of the ParameterSource enum values below (or None if the value does not exist); you can then use them to decide what you want to do. COMMANDLINE - The value was provided by the command line args. ENVIRONMENT - The value was provided with an environment variable. DEFAULT - Used the default specified by the parameter. DEFAULT_MAP - Used a default provided by Context.default_map. PROMPT - Used a prompt to confirm a default or provide a value. import click from click.core import ParameterSource @click.command() @click.option('--value', default=1, help='a value.') def hello(value): parameter_source = click.get_current_context().get_parameter_source('value') if parameter_source == ParameterSource.DEFAULT: print('default value') elif parameter_source == ParameterSource.COMMANDLINE: print('from command line') # other conditions go here print(value) if __name__ == "__main__": hello() In this case, the value is taken from the default, as can be seen in the output below. Output default value 1
6
7
75,521,662
2023-2-21
https://stackoverflow.com/questions/75521662/upsert-pandas-dataframe-into-snowflake-table
I'm upserting data into a Snowflake table by creating a temp table (from my dataframe) and then merging it into my table. But is there a more efficient way of achieving this, like merging the dataframe directly into the Snowflake table without a temp table? Because I will do it on several tables having a few thousand rows. My code: import pandas as pd from sqlalchemy import create_engine from snowflake.connector.pandas_tools import pd_writer engine = create_engine('snowflake://{user}:{password}@{account_identifier}/{database_name}/{schema_name}?warehouse={warehouse_name}&role={role_name}'.format( user='user', password=os.environ['SNOWFLAKE_PASSWORD'] , account_identifier='account_identifier', database_name='DB_NAME', schema_name='SHCEMA_NAME', warehouse_name='WH', role_name='ADMIN' ) ) conn=engine.connect() temp_table_name='source_table' df=pd.DataFrame({'id':[1,2,3],'description':['a','b','c']}) #create temp table in snowflake res_sql=df.to_sql(temp_table_name.lower(), engine, if_exists='replace',index=False, method=pd_writer, schema='SCHEMA_NAME') #MERGE TEMP TABLE TO EXISTING TABLE conn.cursor().execute( ''' MERGE INTO target_table USING source_table ON target_table.id = source_table.id WHEN MATCHED THEN UPDATE SET target_table.description = source_table.description WHEN NOT MATCHED THEN INSERT (ID, description) VALUES (source_table.id, source_table.description); ''' ) #Drop temp table conn.cursor().execute("DROP TABLE IF EXISTS DB_NAME.SCHEMA_NAME.source_table")
Using snowflake.snowpark.Table.merge: Merges this Table with DataFrame source on the specified join expression and a list of matched or not-matched clauses, and returns a MergeResult, representing the number of rows inserted, updated and deleted by this merge action. Standalone sample(table target exists at Snowflake): CREATE OR REPLACE TABLE target(key TEXT, value TEXT); INSERT INTO target VALUES (10, 'old'), (10, 'too_old'), (11, 'old'); SELECT * FROM target; Python code(here wrapped with WITH PROCEDURE ... CALL for simplicity): WITH proc AS PROCEDURE() RETURNS TEXT LANGUAGE PYTHON RUNTIME_VERSION = '3.8' PACKAGES = ('snowflake-snowpark-python') HANDLER = 'main' AS $$ from snowflake.snowpark.functions import when_matched, when_not_matched def main(session): source = session.create_dataframe([(10, "new"), (12, "new"), (13, "old")], schema=["key", "value"]) target = session.table("target") return target.merge(source, (target["key"] == source["key"]) & (target["value"] == "too_old"), [when_matched().update({"value": source["value"]}), when_not_matched().insert({"key": source["key"]})]) $$ CALL proc(); Output: MergeResult(rows_inserted=2, rows_updated=1, rows_deleted=0) Related: Snowpark API Snowpark Snowflake - invoking Python code without creating UDF/Stored Procedure - for this ad-hoc sample, normally called from client tool
3
3
75,508,198
2023-2-20
https://stackoverflow.com/questions/75508198/python-multithreading-is-faster-than-sequential-code-why
In many Stack Overflow Q&As about Python multi-threading, I read that Python has the GIL, so multi-threading is slower than sequential code. But in my code that doesn't seem to be the case. This is the multi-threading code (code updated 02-21-2023): import threading import time global_v = 0 thread_lock = threading.Lock() def thread_test(num): thread_lock.acquire() global global_v for _ in range(num): global_v += 1 thread_lock.release() # thread run thread_1 = threading.Thread(target=thread_test, args=(9_000_000,)) thread_2 = threading.Thread(target=thread_test, args=(9_000_000,)) thread_3 = threading.Thread(target=thread_test, args=(9_000_000,)) thread_4 = threading.Thread(target=thread_test, args=(9_000_000,)) thread_5 = threading.Thread(target=thread_test, args=(9_000_000,)) thread_start = time.perf_counter() # start thread thread_1.start() thread_2.start() thread_3.start() thread_4.start() thread_5.start() thread_end = time.perf_counter() thread_1.join() thread_2.join() thread_3.join() thread_4.join() thread_5.join() print(f"multithread run takes {thread_end-thread_start:.5f} sec") # nomal run (sequential code) def increment(): global nomal_result for _ in range(45_000_000): nomal_result += 1 nomal_result = 0 start_time = time.perf_counter() increment() end_time = time.perf_counter() print(f"nomal run takes {end_time-start_time:.5f} sec") The result is multithread run takes 0.21226 sec nomal run takes 2.09347 sec Consequently, my questions are these: Q1. Why is threading faster than sequential code in Python? Q2. What is the difference between multi-threading with a lock and sequential code? (I think that when using a lock, the code works like sequential code with blocking.) Please let me know! Thank you. Python version: 3.8.10 PS. I moved my 3rd question to Python multi-threading with lock is much faster why?
It looks like it is caused by the way CPython treats globals. This sequential version is faster than your concurrent one using CPython 3.11 on my machine: def increment(): nomal_result = 0 for _ in range(5_000_000): nomal_result += 1 nomal_result = 0 start_time = time.perf_counter() increment() end_time = time.perf_counter() print(f"nomal run takes {end_time-start_time:.5f} sec") Your multithreaded code is thus not faster than your sequential one. The performance gap is likely due to different CPython optimizations between the two versions and it's mostly irrelevant. The GIL does not prevent all code to be efficiently multithreaded. Here is a simple counter-example: import threading import time def thread_test(): time.sleep(1.0) thread_start = time.perf_counter() thread_1 = threading.Thread(target=thread_test) thread_2 = threading.Thread(target=thread_test) thread_1.start() thread_2.start() thread_1.join() thread_2.join() thread_end = time.perf_counter() print(f"Task duration: {thread_end - thread_start:.5f} sec") compared to: import time def thread_test(): time.sleep(1.0) thread_start = time.perf_counter() thread_test() thread_test() thread_end = time.perf_counter() print(f"Task duration: {thread_end - thread_start:.5f} sec") The multithreaded version only takes 1 second while the sequential one takes 2 seconds. As a general rule, code that heavily call C functions and release the GIL (like as NumPy) and code that is IO-bound (network calls) will benefit from multithreading in Python. On the contrary, CPU-bound tasks such as your code won't benefit from it.
3
2
75,516,448
2023-2-21
https://stackoverflow.com/questions/75516448/python-pandas-groupby-to-calculate-differences-in-months
I have the data frame below, and I want to calculate the intervals in months under each name. Lines so far: import pandas as pd from io import StringIO import numpy as np csvfile = StringIO( """Name Year - Month Score Mike 2022-11 31 Mike 2022-11 136 Lilly 2022-11 23 Lilly 2022-10 44 Kate 2023-01 1393 Kate 2022-10 2360 Kate 2022-08 1648 Kate 2022-06 543 Kate 2022-04 1935 Peter 2022-04 302 David 2023-01 1808 David 2022-12 194 David 2022-09 4077 David 2022-06 666 David 2022-03 3362""") df = pd.read_csv(csvfile, sep = '\t', engine='python') df['Year - Month'] = pd.to_datetime(df['Year - Month'], format='%Y-%m') df['Interval'] = (df.groupby(['Name'])['Year - Month'].transform(lambda x: x.diff())/ np.timedelta64(1, 'M')) df['Interval'] = df['Interval'].replace(np.nan, 1).astype(int) But the output is wrong (the intervals are not calculated correctly). Where has this gone wrong, and how can I correct it? Name Year - Month Score Interval 0 Mike 2022-11 31 1 <- shall be 0 1 Mike 2022-11 136 0 2 Lilly 2022-11 23 1 3 Lilly 2022-10 44 1 <- shall be 0 4 Kate 2023-01 1393 1 <- shall be 3 5 Kate 2022-10 2360 3 <- shall be 2 6 Kate 2022-08 1648 2 7 Kate 2022-06 543 2 8 Kate 2022-04 1935 2 <- shall be 0 9 Peter 2022-04 302 1 <- shall be 0 10 David 2023-01 1808 1 <- shall be 1 11 David 2022-12 194 1 <- shall be 3 12 David 2022-09 4077 2 <- shall be 3 13 David 2022-06 666 3 14 David 2022-03 3362 3 <- shall be 0
You need to take the difference with the next value instead of the previous value. You can do so by passing -1 to diff(). ... df['Interval'] = df.groupby(['Name'])['Year - Month'].transform(lambda x: x.diff(-1)) / np.timedelta64(1, 'M') df['Interval'] = df['Interval'].fillna(0).round().astype(int) Result: Name Year - Month Score Interval 0 Mike 2022-11-01 31 0 1 Mike 2022-11-01 136 0 2 Lilly 2022-11-01 23 1 3 Lilly 2022-10-01 44 0 4 Kate 2023-01-01 1393 3 5 Kate 2022-10-01 2360 2 6 Kate 2022-08-01 1648 2 7 Kate 2022-06-01 543 2 8 Kate 2022-04-01 1935 0 9 Peter 2022-04-01 302 0 10 David 2023-01-01 1808 1 11 David 2022-12-01 194 3 12 David 2022-09-01 4077 3 13 David 2022-06-01 666 3 14 David 2022-03-01 3362 0
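An alternative sketch (my suggestion, not part of the answer): converting to monthly periods makes the month arithmetic exact, avoiding the np.timedelta64(1, 'M') approximation and the round() entirely. A trimmed-down frame is used here for brevity:

```python
import pandas as pd

df = pd.DataFrame({
    "Name": ["Kate", "Kate", "Kate", "David", "David"],
    "Year - Month": pd.to_datetime(
        ["2023-01", "2022-10", "2022-08", "2023-01", "2022-12"]
    ),
})

# Integer month index: exact month counts, no timedelta rounding
months = df["Year - Month"].dt.to_period("M")
month_index = months.dt.year * 12 + months.dt.month

# diff(-1) within each name, last row of each group becomes 0
df["Interval"] = (
    month_index.groupby(df["Name"]).diff(-1).fillna(0).astype(int)
)
print(df["Interval"].tolist())  # [3, 2, 0, 1, 0]
```

Since month counts are plain integers here, the intervals come out whole by construction.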
3
4
75,512,363
2023-2-20
https://stackoverflow.com/questions/75512363/reasoning-behind-high-latency-when-using-python-ctypes-during-process-interrupts
While investigating a critical path in our python codebase, we found out that the behaviour of ctypes in terms of latencies is quite unpredictable. A bit more background of our application. We have bunch of processes where each of them communicate through shared memory. We leverage python library multiprocessing.RawValue and multiprocessing.RawArray which internally uses ctypes for data management. While running this in production, we saw that even a simple get() access on these shared data types takes around 30-50 us and sometimes 100us and that's quite slow. Even for python. I have created this bare bone example which creates a ctype structure and exposes get() method import ctypes import sys import time import numpy as np import random from decimal import Decimal def get_time_ns(): return Decimal(str(time.time_ns())) class Point(ctypes.Structure): _fields_ = [("x", ctypes.c_int), ("y", ctypes.c_int)] def __init__(self, x, y): return super().__init__(x, y) def get(self): return self.x #return str(self.x) + "," + str(self.y) def benchmark(delay_mode): p = Point(10, 20) iters = 10 while iters: start_ts = get_time_ns() _ = p.get() end_ts = get_time_ns() print("Time: {} ns".format(end_ts - start_ts)) iters -= 1 if delay_mode == 1: time.sleep(random.uniform(0, 0.1)) benchmark(int(sys.argv[1])) When I run this in no sleep mode, The latency numbers are as follows [root@centos-s-4vcpu-8gb-fra1-01 experiments]# python3.9 simple_ctype.py 0 Time: 9556 ns Time: 2246 ns Time: 1124 ns Time: 1174 ns Time: 1091 ns Time: 1126 ns Time: 1081 ns Time: 1066 ns Time: 1077 ns Time: 1138 ns And when I run this in sleep mode, the latency numbers are as follows [root@centos-s-4vcpu-8gb-fra1-01 experiments]# python3.9 simple_ctype.py 1 Time: 27233 ns Time: 27592 ns Time: 31687 ns Time: 32817 ns Time: 26234 ns Time: 32651 ns Time: 29468 ns Time: 36981 ns Time: 31313 ns Time: 34667 ns The reason for using sleep is to simulate our production environment where application is doing more than 
just running this loop. Can someone explain the reason for this 10-20x increase in latency when there are interrupts, compared to the hot loop above? My best guess is CPU cache misses, but that still does not explain such a latency increase. I am also quite confused about how ctypes actually manages memory. Is it just plain malloc, or mmap plus malloc? And last but not least, it would be really great if someone could help us optimise this. System information: CentOS 7.9, 4-core CPU, 16 GB RAM; taskset used to pin the script to a specific CPU core. FYI, we already know that C++/Rust is better for this kind of high-precision performance work than a high-level language like Python, but considering the time sensitivity and other business reasons, we would like to optimise our Python code for performance before we actually hit the language barrier.
There are multiple reason for a code to be slower sleeping. Here, the 4 main reasons are the frequency scaling, the TLB/cache misses and the branch misses. All of them are due to context switches mixed with a long period of CPU inactivity. The problem is independent of ctypes. Frequency scaling When a mainstream modern processor does not have an intensive task to compute, it automatically reduces its frequency (with the agreement of the operating system which can be configured). It is similar to a human sleep: when you have nothing to do, you can sleep, and when you wake up, it takes some time before you can operating quickly (ie. dizzy state). This is the same thing for the processor: it takes some time for the processor to switch from a low frequency (used during the sleep call) to a high frequency (using during the computing code). AFAIK, this is mainly because the processor needs to adapt its voltage. This is an expected behavior because it is not energy efficient to switch to the highest frequency directly since the target code might not run for a long time (see hysteresis) and the power consumption grow with ~ frequency**3 (due to the voltage increase required by higher frequencies). There is a way to check that easily on Linux. You can use a fixed frequency and disable any turbo-like mode. 
On my i5-9600KF processor I used the following lines: echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo You can check the state of your CPU using the following lines: cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq cat /proc/cpuinfo | grep MHz # Current frequency for each core Here are the results before and after the changes on my machine: # ---------- BEFORE ---------- $ python3 lat-test.py 0 Time: 12387 ns Time: 2023 ns Time: 1272 ns Time: 1096 ns Time: 1070 ns Time: 998 ns Time: 1022 ns Time: 956 ns Time: 1002 ns Time: 1378 ns $ python3 lat-test.py 1 Time: 6941 ns Time: 3772 ns Time: 3544 ns Time: 9502 ns Time: 25475 ns Time: 18734 ns Time: 23800 ns Time: 9503 ns Time: 19520 ns Time: 17306 ns # ---------- AFTER ---------- $ python3 lat-test.py 0 Time: 7512 ns Time: 2262 ns Time: 1488 ns Time: 1441 ns Time: 1413 ns Time: 1434 ns Time: 1426 ns Time: 1424 ns Time: 1443 ns Time: 1444 ns $ python3 lat-test.py 1 Time: 8659 ns Time: 5133 ns Time: 3720 ns Time: 4057 ns Time: 3888 ns Time: 4187 ns Time: 4136 ns Time: 3922 ns Time: 4456 ns Time: 3946 ns We can see that the gap is significantly smaller. Moreover, results are much more stable (and reproducible). Note that the performance is reduced when the latency is small because the turbo-boost has been disabled (so my processor does not run at its highest possible frequency). On my machine the factor between the minimum frequency (0.8 GHz) and the maximum one (4.6 GHz) is 5.75, which is pretty big and justifies a significant part of the performance gap when frequency scaling is enabled (the default). Biased benchmark A significant part of the latency is lost in the execution of the get_time_ns function. This is a critical point: CPython is a slow interpreter, so you cannot measure the time very precisely with it. An empty function call in CPython takes ~45 ns on my machine! 
The expression Decimal(str('1676949210508126547')) takes ~250 ns. It is critical to consider this point, since you are measuring a latency only 10 times bigger than this in a context where such code can be significantly slower due to many overheads (including the cache being cold -- see later). To improve the accuracy of the benchmark, I removed the use of the Decimal module and the expensive string conversions and just used integers. Note that even basic integers are far from being cheap in CPython because they have a variable length and are dynamically allocated, not to mention that CPython interprets the bytecode at runtime. A simple integer_1 - integer_2 takes ~35 ns on my machine, while it should take less than 1 ns in native compiled code. Fetching the function time_ns from the time module also takes about the same time, not to mention the timing function itself takes ~50 ns to execute (so ~85 ns in total for 1 fetch+execution). I also increased the number of iterations from 10 to 10_000 so the effect is more visible during the following profiling part. In the end, the latency was reduced to 200/1300 ns instead of 1400/4000. This is a huge difference. In fact, 200 ns is so small that at least half of the time is still lost in timing overheads and is not the call to p.get()! That being said, the gap is still here. Cache and TLB misses A part of the remaining overhead is due to cache misses and TLB misses. Indeed, when a context switch happens (due to the call to sleep), the CPU caches can be flushed (somehow). In fact, AFAIK, they are indirectly flushed on mainstream modern processors during a context switch: the TLB CPU unit, which is a cache responsible for the translation of virtual memory to physical memory, is flushed, causing cache lines to be reloaded when the thread is scheduled back. 
It has a significant impact on performance after the process has been scheduled back, because data typically needs to be reloaded from the slow RAM or at least a higher-latency cache (e.g. the LLC). Note that even if that were not the case, the thread can be scheduled back on a different core having its own private TLB unit, so it would result in many cache misses. Depending on how memory is shared between processes, you may also experience a "TLB shootdown", which is also pretty expensive. See this post and this one for more information about this effect. On Linux, we can use the great perf tool to track performance events of the CPU. Here are results for the two use-cases for TLBs: # Low latency use-case 84 429 dTLB-load-misses # 0,02% of all dTLB cache hits 467 241 669 dTLB-loads 412 744 dTLB-store-misses 263 837 789 dTLB-stores 47 541 iTLB-load-misses # 39,53% of all iTLB cache hits 120 254 iTLB-loads 70 332 mem_inst_retired.stlb_miss_loads 8 435 mem_inst_retired.stlb_miss_stores # High latency use-case 1 086 543 dTLB-load-misses # 0,19% of all dTLB cache hits 566 052 409 dTLB-loads 598 092 dTLB-store-misses 321 672 512 dTLB-stores 834 482 iTLB-load-misses # 443,76% of all iTLB cache hits 188 049 iTLB-loads 986 330 mem_inst_retired.stlb_miss_loads 93 237 mem_inst_retired.stlb_miss_stores The dTLB is a per-core TLB used for storing the mapping of data pages. The sTLB is shared between cores. The iTLB is a per-core TLB used for storing the mapping of code pages. We can see a huge increase in the number of dTLB load misses and iTLB load misses as well as sTLB loads/stores. This confirms the performance issue is likely caused by TLB misses. TLB misses should result in more cache misses, decreasing performance. This is what we can see in practice. 
Indeed, here are performance results of the caches: # Low latency use-case 463 214 319 mem_load_retired.l1_hit 4 184 769 mem_load_retired.l1_miss 2 527 800 mem_load_retired.l2_hit 1 659 963 mem_load_retired.l2_miss 1 568 506 mem_load_retired.l3_hit 96 549 mem_load_retired.l3_miss # High latency use-case 558 344 514 mem_load_retired.l1_hit 7 280 721 mem_load_retired.l1_miss 3 564 001 mem_load_retired.l2_hit 3 720 610 mem_load_retired.l2_miss 3 547 260 mem_load_retired.l3_hit 105 502 mem_load_retired.l3_miss Branch misses Another part of the overhead is due to conditional jumps being less well predicted after a long sleep. This is a complex topic, but one should know that branches are predicted by mainstream modern processors based on many parameters, including past results. For example, if a condition is always true, then the processor can speculatively execute the condition and revert it later if the prediction was actually wrong (expensive). Modern processors cannot predict a large set of conditional jumps simultaneously: they have a small cache for this, and it can quickly get flushed over time. The thing is that CPython does a lot of conditional jumps, like most interpreters. Thus, a context switch can likely cause a flush of the branch prediction cache, increasing the overhead of the conditional jumps used in this case, resulting in a higher latency. Here are experimental results on my machine: # Low latency use-case 350 582 368 branch-instructions 4 629 149 branch-misses # 1,32% of all branches # High latency use-case 421 207 541 branch-instructions 8 392 124 branch-misses # 1,99% of all branches Note that a branch miss should take ~14 cycles on my machine. This means a gap of 14 ms and so ~1400 ns per iteration. That being said, only a fraction of this time is measured between the two function calls to get_time_ns(). For more information about this topic, please read this post.
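To make the "biased benchmark" point above concrete, here is a generic sketch of a lower-overhead timing loop (an illustration, not the original lat-test.py, whose source is not shown): the timer is bound to a local name once, and only plain integers are used, so no Decimal or string conversions add to the measured latency.

```python
import time

def timer_overhead(n=10_000):
    """Estimate the cost of the timing calls themselves, in nanoseconds."""
    time_ns = time.perf_counter_ns  # bind once: skips a module attribute lookup per call
    best = None
    for _ in range(n):
        t0 = time_ns()
        t1 = time_ns()              # the code under test would go between these two calls
        dt = t1 - t0                # plain int subtraction, no Decimal/str round-trips
        if best is None or dt < best:
            best = dt
    return best

print("timer overhead ~", timer_overhead(), "ns")
```

Taking the minimum over many iterations filters out interrupts and scheduling noise; whatever remains is a floor on what any measurement of a call like p.get() can resolve.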
4
8
75,477,373
2023-2-16
https://stackoverflow.com/questions/75477373/sqlalchemy-is-slow-when-doing-query-the-first-time
I'm using SQLAlchemy (2.0.3) with Python 3.10, and after a fresh container boot it takes ~2.2 s to execute a specific query; all consecutive calls of the same query take ~70 ms to execute. I'm using PostgreSQL, and it takes 40-70 ms to execute the raw query in DataGrip. Here is the code: self._Session = async_sessionmaker(self._engine, expire_on_commit=False) ... @property def session(self): return self._Session ... async with PostgreSQL().session.begin() as session: total_functions = aliased(db_models.Function) finished_functions = aliased(db_models.Function) failed_functions = aliased(db_models.Function) stmt = ( select( db_models.Job, func.count(distinct(total_functions.id)).label("total"), func.count(distinct(finished_functions.id)).label("finished"), func.count(distinct(failed_functions.id)).label("failed") ) .where(db_models.Job.project_id == project_id) .outerjoin(db_models.Job.packages) .outerjoin(db_models.Package.modules) .outerjoin(db_models.Module.functions.of_type(total_functions)) .outerjoin(finished_functions, and_( finished_functions.module_id == db_models.Module.id, finished_functions.progress == db_models.FunctionProgress.FINISHED )) .outerjoin(failed_functions, and_( failed_functions.module_id == db_models.Module.id, or_( failed_functions.state == db_models.FunctionState.FAILED, failed_functions.state == db_models.FunctionState.TERMINATED, )) ) .group_by(db_models.Job.id) ) start = time.time() yappi.set_clock_type("WALL") with yappi.run(): job_infos = await session.execute(stmt) yappi.get_func_stats().print_all() end = time.time() Things I have tried and discovered: The problem is not related to connecting to or querying the database. On service boot I establish the connection and make some other queries. The problem is most likely not related to the cache. 
I have disabled the cache with query_cache_size=0, however I'm not 100% sure that it worked, since the documentation says: ORM functions related to unit-of-work persistence as well as some attribute loading strategies will make use of individual per-mapper caches outside of the main cache. The profiler didn't show anything that caught my attention: ..urrency_py3k.py:130 greenlet_spawn 2/1 0.000000 2.324807 1.162403 ..rm/session.py:2168 Session.execute 1 0.000028 2.324757 2.324757 ..0 _UnixSelectorEventLoop._run_once 11 0.000171 2.318555 0.210778 ..syncpg_cursor._prepare_and_execute 1 0.000054 2.318187 2.318187 ..cAdapt_asyncpg_connection._prepare 1 0.000020 2.316333 2.316333 ..nnection.py:533 Connection.prepare 1 0.000003 2.316154 2.316154 ..nection.py:573 Connection._prepare 1 0.000017 2.316151 2.316151 ..n.py:359 Connection._get_statement 2/1 0.001033 2.316122 1.158061 ..ectors.py:452 EpollSelector.select 11 0.000094 2.315352 0.210487 ..y:457 Connection._introspect_types 1 0.000025 2.314904 2.314904 ..ction.py:1669 Connection.__execute 1 0.000027 2.314879 2.314879 ..ion.py:1699 Connection._do_execute 1 2.314095 2.314849 2.314849 ...py:2011 Session._execute_internal 1 0.000034 0.006174 0.006174 I have also seen that one may disable the cache per connection: with engine.connect().execution_options(compiled_cache=None) as conn: conn.execute(table.select()) However, I'm working with the ORM layer and am not sure how to apply this in my case. Any ideas where this delay might come from?
After hours of googling I found this post. In short, the problem is related to a lack of dependencies (in some Alpine Docker images) that are required by the JIT used by Postgres. For details I really recommend reading the post and the real-life impact the author provides. The actual solution for SQLAlchemy is to switch off the JIT: engine = create_async_engine( "postgresql+asyncpg://user:password@localhost/tmp", connect_args={"server_settings": {"jit": "off"}}, ) Reference to the docs.
4
8
75,504,389
2023-2-20
https://stackoverflow.com/questions/75504389/how-do-i-find-the-smallest-surrounding-rectangle-of-a-set-of-2d-points-in-shapel
How do I find the smallest surrounding rectangle (which is possibly rotated) of a set of 2D points in Shapely?
To create the smallest surrounding rectangle in Shapely, first construct a MultiPoint from a sequence of points, then use the minimum_rotated_rectangle property (which is in the BaseGeometry class). from shapely.geometry import MultiPoint, Polygon points = [(0, 0), (2, 2), (10, 4), (5, 5), (8, 8)] # create a minimum rotated rectangle containing all the points polygon = MultiPoint(points).minimum_rotated_rectangle print(polygon) Output: POLYGON ((2.9999999999999996 -2.9999999999999996, 10.999999999999998 4.999999999999998, ... The example set of points is displayed below in red and the bounding box is in blue. If you want to create an envelope around the points that is the smallest rectangle (with sides parallel to the coordinate axes) containing all the points, then call the envelope property on the object. polygon = MultiPoint(points).envelope
4
4
75,514,573
2023-2-20
https://stackoverflow.com/questions/75514573/where-can-i-find-python-requests-library-functions-kwargs-parameters-documente
For example, from https://docs.python-requests.org/en/latest/api/#requests.cookies.RequestsCookieJar.set: set(name, value, **kwargs) Dict-like set() that also supports optional domain and path args in order to resolve naming collisions from using one cookie jar over multiple domains. Where can I find information about what other arguments the function takes as **kwargs? I mean arguments like domain, path, expires, max_age, secure, httponly. They are not documented there! All the other functions are like this too, so I got confused about what to pass as parameters. On php.net they describe all parameters properly. Where can I find all the parameters that are hidden behind **kwargs?
In my experience, reading the source code of open source libraries solves this problem. For the example you posted, the source code is the following: def set(self, name, value, **kwargs): """Dict-like set() that also supports optional domain and path args in order to resolve naming collisions from using one cookie jar over multiple domains. """ # support client code that unsets cookies by assignment of a None value: if value is None: remove_cookie_by_name( self, name, domain=kwargs.get("domain"), path=kwargs.get("path") ) return if isinstance(value, Morsel): c = morsel_to_cookie(value) else: c = create_cookie(name, value, **kwargs) self.set_cookie(c) return c In Python, ** collects the extra keyword arguments into a dictionary (here named kwargs). In this case the set function uses "domain" and "path" directly. However, there is another function that takes **kwargs; this is the main purpose of using kwargs instead of fixing the arguments. If we dive into the source code of create_cookie we can see which keyword arguments are valid. def create_cookie(name, value, **kwargs): """Make a cookie from underspecified parameters. By default, the pair of `name` and `value` will be set for the domain '' and sent on every request (this is sometimes called a "supercookie"). """ result = { "version": 0, "name": name, "value": value, "port": None, "domain": "", "path": "/", "secure": False, "expires": None, "discard": True, "comment": None, "comment_url": None, "rest": {"HttpOnly": None}, "rfc2109": False, } badargs = set(kwargs) - set(result) if badargs: raise TypeError( f"create_cookie() got unexpected keyword arguments: {list(badargs)}" ) result.update(kwargs) result["port_specified"] = bool(result["port"]) result["domain_specified"] = bool(result["domain"]) result["domain_initial_dot"] = result["domain"].startswith(".") result["path_specified"] = bool(result["path"]) return cookielib.Cookie(**result) In this case the only allowed keywords are the ones described in the result dictionary.
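Beyond reading the source by hand, Python's standard inspect module can surface this information programmatically. A small sketch follows; it uses a simplified stand-in for create_cookie so it is self-contained, but on an installed copy of requests the same calls work on the real function.

```python
import inspect

def create_cookie(name, value, **kwargs):
    """Simplified stand-in for requests.cookies.create_cookie (illustration only)."""
    result = {"name": name, "value": value, "domain": "", "path": "/", "secure": False}
    result.update(kwargs)
    return result

# The signature shows the fixed parameters and the **kwargs catch-all
sig = inspect.signature(create_cookie)
print(sig)  # (name, value, **kwargs)

# Identify which parameter is the **kwargs catch-all
var_kw = [p.name for p in sig.parameters.values()
          if p.kind is inspect.Parameter.VAR_KEYWORD]
print(var_kw)  # ['kwargs']
```

For functions defined in real source files, inspect.getsource(func) prints the full body, which is often the fastest way to find the defaults dictionary that lists the accepted keywords.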
3
4
75,504,654
2023-2-20
https://stackoverflow.com/questions/75504654/is-there-a-way-in-numpy-to-merge-two-arrays-using-the-part-that-appears-first-a
I have two same-length timeline-like series and I want to merge the parts that appear first while not overlapping. For example: long = [0,0,1,1,1,1,0,0,0,1,1,1,1,0,0,0,0,0,0,1,1,1,1,1,0] short = [0,0,0,1,1,1,1,0,1,1,1,1,1,1,0,0,0,1,1,1,0,0,1,1,1] wanted: result = [0,0,1,1,1,1,0,0,1,1,1,1,1,1,0,0,0,1,1,1,0,0,1,1,1] # ------- ----------- --------------- # from long from short from short I have a chunked iterative solution: def np_shift(arr, shift=1): arr = arr.astype(float) arr = np.roll(arr, shift) arr[:shift] = np.nan return arr def split_arr_to_block(arr, index): return np.split(arr.transpose(), index) def calc(zero, long, short): p_block = (zero!=(np_shift(zero))).cumsum() split_index = np.unique(p_block, return_index=True)[1][1:] blocks = split_arr_to_block(np.array([long, short]), split_index) def select(arr): dT = arr.transpose() return np.where( (dT[0,:][0]==1), dT[0,:], dT[1,:] ) result = np.array([]) for sec in blocks: result = np.append(result, select(sec)) return result long = np.array([0,0,1,1,1,1,0,0,0,1,1,1,1,0,0,0,0,0,0,1,1,1,1,1,0]) short = np.array([0,0,0,1,1,1,1,0,1,1,1,1,1,1,0,0,0,1,1,1,0,0,1,1,1]) zero = np.where( ((long==0) & (short==0)), 1, 0 ) result = calc(zero, long, short) This solution takes about 46 ms with a 160000+ element dataset; is there a faster solution?
I do not think any pure-Numpy code can efficiently compute this. Thus, this is the perfect use-case for Numba or Cython. You can solve this using a few simple nested loops that will be compiled to very fast native code: import numpy as np import numba as nb long = np.array([0,0,1,1,1,1,0,0,0,1,1,1,1,0,0,0,0,0,0,1,1,1,1,1,0]) short = np.array([0,0,0,1,1,1,1,0,1,1,1,1,1,1,0,0,0,1,1,1,0,0,1,1,1]) # Assume the input arrays are contiguous and contain 32-bit/64-bit integers @nb.njit(['(int32[::1], int32[::1])', '(int64[::1], int64[::1])']) def compute(a, b): assert a.size == b.size n = a.size r = np.empty_like(a) i = 0 while i < n: if a[i] + b[i] == 0: while i < n and a[i] + b[i] == 0: r[i] = 0 i += 1 elif a[i] != 0: while i < n and a[i] + b[i] != 0: r[i] = a[i] i += 1 else: while i < n and a[i] + b[i] != 0: r[i] = b[i] i += 1 return r result = compute(long, short) Note that a[i] + b[i] == 0 is a fast equivalent of a[i] == 0 and b[i] == 0. Performance results Here are results with two random arrays containing 160000 integers (32-bit) on my i5-9600KF processor: Initial pure-Numpy code: 4153.57 ms Provided Numba code: 0.51 ms The provided code is thus 8144 times faster!
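For readers without Numba installed, here is a plain-Python reference implementation of the same run-selection logic (useful for checking results; it is slow, and only the compiled version above is fast):

```python
def compute_ref(a, b):
    """Plain-Python reference for the compiled kernel above."""
    n = len(a)
    r = [0] * n
    i = 0
    while i < n:
        if a[i] == 0 and b[i] == 0:
            i += 1  # r[i] is already 0
        else:
            src = a if a[i] != 0 else b  # whichever series starts the run wins
            while i < n and (a[i] != 0 or b[i] != 0):
                r[i] = src[i]
                i += 1
    return r

long_ = [0,0,1,1,1,1,0,0,0,1,1,1,1,0,0,0,0,0,0,1,1,1,1,1,0]
short = [0,0,0,1,1,1,1,0,1,1,1,1,1,1,0,0,0,1,1,1,0,0,1,1,1]
print(compute_ref(long_, short))
# [0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1]
```

This reproduces the "wanted" result from the question, which makes it a convenient cross-check when porting the kernel.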
4
1
75,512,205
2023-2-20
https://stackoverflow.com/questions/75512205/unexpected-behaviour-of-pandas-multiindex-set-levels
I have noticed an unexpected result when resetting the level values in a pandas.MultiIndex. The minimal working example I have found to reproduce the problem is as follows: import numpy as np import pandas as pd numbers = np.arange(11).astype(str) columns = pd.MultiIndex.from_product([['A'],numbers]) df = pd.DataFrame(index=[0], columns=columns, dtype=float) print(df.columns) returns MultiIndex([('A', '0'), ('A', '1'), ('A', '2'), ('A', '3'), ('A', '4'), ('A', '5'), ('A', '6'), ('A', '7'), ('A', '8'), ('A', '9'), ('A', '10')], ) Notice how all values on the second level are strings of integers. I have tried to replace them with the respective integers by using the set_levels method: numbers = df.columns.get_level_values(1).astype(int) df.columns = df.columns.set_levels(numbers, level=1) print(df.columns) To my surprise, the result looks as follows: MultiIndex([('A', 0), ('A', 1), ('A', 3), ('A', 4), ('A', 5), ('A', 6), ('A', 7), ('A', 8), ('A', 9), ('A', 10), ('A', 2)], ) The values on the second level now are in a different order. What am I missing here? How can I actually replace the integer strings with the respective integers?
It's a bit confusing, but it's not a surprise: this is the expected behavior. I slightly modified your example: numbers = np.arange(11).astype(str) columns = pd.MultiIndex.from_product([['A'],numbers]) df = pd.DataFrame(columns.codes, index=['Lvl0', 'Lvl1'], columns=columns) print(df) # Output: A 0 1 2 3 4 5 6 7 8 9 10 Lvl0 0 0 0 0 0 0 0 0 0 0 0 # internal codes for level 0 Lvl1 0 1 3 4 5 6 7 8 9 10 2 # internal codes for level 1 When you create your MultiIndex (here, with numbers as strings), Pandas creates internal codes (for indexing, sorting, etc) associated with the labels but, in your case, in lexicographical order: # labels: '0' < '1' < '10' < '2' < '3' < '4' < '5' < '6' < '7' < '8' < '9' # codes: 0 < 1 < 2 < 3 < 4 < 5 < 6 < 7 < 8 < 9 < 10 >>> df.columns.codes FrozenList([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 1, 3, 4, 5, 6, 7, 8, 9, 10, 2]]) >>> df.columns.is_monotonic_increasing False # because it's not lexicographically ordered >>> df.sort_index(axis=1) # lexicographical sort A 0 1 10 2 3 4 5 6 7 8 9 Lvl0 0 0 0 0 0 0 0 0 0 0 0 Lvl1 0 1 2 3 4 5 6 7 8 9 10 As a human, you use labels to access your columns, but Pandas uses codes behind the scenes to optimize indexing and maintain order. In the second part of your code, you convert the labels from string to int and set the level. However, you don't sort the index, so the previous order (given by codes) is still maintained: # labels: 0 < 1 < 2 < 3 < 4 < 5 < 6 < 7 < 8 < 9 < 10 # codes: 0 < 1 < 2 < 3 < 4 < 5 < 6 < 7 < 8 < 9 < 10 >>> df.columns.codes FrozenList([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 1, 3, 4, 5, 6, 7, 8, 9, 10, 2]]) >>> df.columns.is_monotonic_increasing False >>> df.sort_index(axis=1) # numbering sort A 0 1 2 3 4 5 6 7 8 9 10 Lvl0 0 0 0 0 0 0 0 0 0 0 0 Lvl1 0 1 2 3 4 5 6 7 8 9 10 As you can see, you lost the correct values. 
You have to sort your columns first: >>> df # with '0', '1', '2', etc A 0 1 2 3 4 5 6 7 8 9 10 Lvl0 0 0 0 0 0 0 0 0 0 0 0 Lvl1 0 1 3 4 5 6 7 8 9 10 2 >>> df = df.sort_index(axis=1) A 0 1 10 2 3 4 5 6 7 8 9 Lvl0 0 0 0 0 0 0 0 0 0 0 0 Lvl1 0 1 2 3 4 5 6 7 8 9 10 # set_levels(...) >>> df # with 0, 1, 2, etc A 0 1 10 2 3 4 5 6 7 8 9 Lvl0 0 0 0 0 0 0 0 0 0 0 0 Lvl1 0 1 2 3 4 5 6 7 8 9 10 >>> df = df.sort_index(axis=1) A 0 1 2 3 4 5 6 7 8 9 10 Lvl0 0 0 0 0 0 0 0 0 0 0 0 Lvl1 0 1 3 4 5 6 7 8 9 10 2 # now values and columns are well aligned.
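Putting this together for the original example, a sketch of the workflow: sort lexicographically first so the labels line up with the internal codes, convert the level to integers, then sort again numerically.

```python
import numpy as np
import pandas as pd

numbers = np.arange(11).astype(str)
columns = pd.MultiIndex.from_product([['A'], numbers])
df = pd.DataFrame(index=[0], columns=columns, dtype=float)

df = df.sort_index(axis=1)  # lexicographic sort: labels now match the code order
df.columns = df.columns.set_levels(
    df.columns.get_level_values(1).astype(int), level=1
)
df = df.sort_index(axis=1)  # numeric sort on the new integer labels

print(list(df.columns.get_level_values(1)))  # [0, 1, 2, ..., 10]
```

Because the columns are sorted before set_levels, each column keeps the integer version of its own label, so no data ends up under the wrong header.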
3
3
75,511,806
2023-2-20
https://stackoverflow.com/questions/75511806/running-c-program-on-heroku-no-such-file
I am trying to run a C program from a Python script running on a Heroku dyno. The Python script works fine locally, but on the dyno it says that the executable cannot be found. The line that runs the program in Python is: proc = subprocess.Popen(["./backend/test-print"], stdout=subprocess.PIPE, stderr=subprocess.PIPE), where backend is the folder that contains the program test-print and the Python script. I run the Python script from the root folder, so it finds the script just fine. The Heroku logs say: FileNotFoundError: [Errno 2] No such file or directory: './backend/test-print'. If I run bash on the dyno and try to run the program manually, it gives the same error: Running bash on ⬢ ******** ... up, run.8123 (Eco) ~ $ cd backend ~/backend $ ls server.py test-print ~/backend $ ./test-print bash: ./test-print: No such file or directory Any ideas? Thanks in advance. I built the program test-print on my local machine (not the one I am running the program on). I tried putting the program in the root /app folder to see if it would be found then, but that did not work. EDIT: I should also add that when I cat test-print, it finds the file fine and prints its contents. EDIT: type test-print outputs ~/backend $ type test-print bash: type: test-print: not found EDIT: ~/backend $ ls -laQ total 28 drwx------ 2 u5587 dyno 4096 Feb 20 16:16 "." drwx------ 5 u5587 dyno 4096 Feb 20 17:08 ".." -rw------- 1 u5587 dyno 520 Feb 20 16:16 "server.py" -rwx------ 1 u5587 dyno 16176 Feb 20 16:16 "test-print" ~/backend $ ls -laq total 28 drwx------ 2 u5587 dyno 4096 Feb 20 16:16 . drwx------ 5 u5587 dyno 4096 Feb 20 17:08 .. -rw------- 1 u5587 dyno 520 Feb 20 16:16 server.py -rwx------ 1 u5587 dyno 16176 Feb 20 16:16 test-print ~/backend $ id uid=13747(u13747) gid=13747(dyno) groups=13747(dyno)
There are different user names u5587 vs u13747 in the output of ls and id. ~/backend $ ls -laq total 28 drwx------ 2 u5587 dyno 4096 Feb 20 16:16 . drwx------ 5 u5587 dyno 4096 Feb 20 17:08 .. -rw------- 1 u5587 dyno 520 Feb 20 16:16 server.py -rwx------ 1 u5587 dyno 16176 Feb 20 16:16 test-print ~/backend $ id uid=13747(u13747) gid=13747(dyno) groups=13747(dyno) User u13747 does not have the permission to list the contents of the current directory or to access anything in this directory because it is owned by a different user and has no permissions for the group dyno or others. (This does not explain why cat test-print would work.)
3
2
75,497,496
2023-2-19
https://stackoverflow.com/questions/75497496/why-is-0-1-faster-than-false-true-for-this-sieve-in-pypy
Similar to why use True is slower than use 1 in Python3 but I'm using pypy3 and not using the sum function. def sieve_num(n): nums = [0] * n for i in range(2, n): if i * i >= n: break if nums[i] == 0: for j in range(i*i, n, i): nums[j] = 1 return [i for i in range(2, n) if nums[i] == 0] def sieve_bool(n): nums = [False] * n for i in range(2, n): if i * i >= n: break if nums[i] == False: for j in range(i*i, n, i): nums[j] = True return [i for i in range(2, n) if nums[i] == False] sieve_num(10**8) takes 2.55 s, but sieve_bool(10**8) takes 4.45 s, which is a noticeable difference. My suspicion was that [0]*n is somehow smaller than [False]*n and fits into cache better, but sys.getsizeof and vmprof line profiling are unsupported for PyPy. The only info I could get is that <listcomp> for sieve_num took 116 ms (19% of total execution time) while <listcomp> for sieve_bool took 450 ms (40% of total execution time). Using PyPy 7.3.1 implementing Python 3.6.9 on Intel i7-7700HQ with 24 GB RAM on Ubuntu 20.04. With Python 3.8.10 sieve_bool is only slightly slower.
The reason is that PyPy uses a special implementation for "list of ints that fit in 64 bits". It has got a few other special cases, like "list of floats", "list of strings that contain only ascii", etc. The goal is primarily to save memory: a list of 64-bit integers is stored just like an array.array('l') and not a list of pointers to actual integer objects. You save memory not in the size of the list itself---which doesn't change---but in the fact that you don't need a very large number of small additional integer objects all existing at once. There is no special case for "list of booleans", because there are only ever two boolean objects in the first place. So there would be no memory-saving benefit in using a strategy like "list of 64-bit ints" in this case. Of course, we could do better and store that list with only one bit per entry, but it is not a really common pattern in Python; we just never got around to implementing that. So why is it slower, anyway? The reason is that in the "list of general objects" case, the JIT compiler needs to produce extra code to check the type of objects every time it reads an item from the list, and extra GC logic every time it puts an item into the list. This is not a lot of code, but in your case, I guess it doubles the length of the (extremely short) generated assembly for the inner loop doing nums[j] = 1. Right now, both in PyPy and CPython(*), the fastest is probably to use array.array('B') instead of a list, which both avoids that PyPy-specific issue and also uses substantially less memory (always a performance win if your data structures contain 10**8 elements). EDIT: (*) no, it turns out that CPython is probably too slow for the memory bandwidth to be a limit. On my machine, PyPy is maybe 30-35% faster when using bytes. See also the comments for a hack that speeds up CPython from 9x to 3x slower than PyPy, but which as usual slows down PyPy.
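Following the array.array('B') suggestion above, here is a sketch of the sieve backed by a flat byte buffer (one byte per flag, no per-item objects); the algorithm itself is unchanged from the question:

```python
from array import array

def sieve_array(n):
    """Sieve of Eratosthenes over a flat byte buffer (one byte per flag)."""
    nums = array('B', bytes(n))  # n zero bytes
    for i in range(2, n):
        if i * i >= n:
            break
        if nums[i] == 0:
            for j in range(i * i, n, i):
                nums[j] = 1
    return [i for i in range(2, n) if nums[i] == 0]

print(sieve_array(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

A bytearray works the same way and avoids the import; actual timings for 10**8 would of course depend on the interpreter and machine.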
6
7
75,504,084
2023-2-19
https://stackoverflow.com/questions/75504084/select-multiple-indices-in-an-axis-of-pytorch-tensor
My actual problem is in a higher dimension, but I am posting it in a smaller dimension to make it easy to visualize. I have a tensor of shape (2,3,4): x = torch.randn(2, 3, 4) tensor([[[-0.9118, 1.4676, -0.4684, -0.6343], [ 1.5649, 1.0218, -1.3703, 1.8961], [ 0.8652, 0.2491, -0.2556, 0.1311]], [[ 0.5289, -1.2723, 2.3865, 0.0222], [-1.5528, -0.4638, -0.6954, 0.1661], [-1.8151, -0.4634, 1.6490, 0.6957]]]) From this tensor, I need to select rows given by a list of indices along axis-1. For example, indices = torch.tensor([0, 2]) Expected Output: tensor([[[-0.9118, 1.4676, -0.4684, -0.6343]], [[-1.8151, -0.4634, 1.6490, 0.6957]]]) Output Shape: (2,1,4) Explanation: Select the 0th row from x[0], and select the 2nd row from x[1]. (These come from indices.) I tried using index_select like this: torch.index_select(x, 1, indices) But the problem is that it selects the 0th and 2nd rows for each item in x. It looks like it needs some modification, but I could not figure it out at the moment.
In your case, this is quite straightforward. An easy way to navigate through two dimensions in parallel is to use a range on the first axis and your indexing tensor on the second: >>> x[range(len(indices)), indices] tensor([[-0.9118, 1.4676, -0.4684, -0.6343], [-1.8151, -0.4634, 1.6490, 0.6957]]) In more general cases though, this would require the use of torch.gather: First expand indices such that it has enough dimensions: index = indices[:,None,None].expand(x.size(0), -1, x.size(-1)) Then you can apply the function on x and index and squeeze dim=1: >>> x.gather(dim=-2, index=index)[:,0] tensor([[-0.9118, 1.4676, -0.4684, -0.6343], [-1.8151, -0.4634, 1.6490, 0.6957]])
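For comparison, the same pick-one-row-per-batch-element pattern expressed with NumPy fancy indexing (an illustration of the indexing idea on a small deterministic array, not PyTorch code):

```python
import numpy as np

x = np.arange(24).reshape(2, 3, 4)  # batch of 2, each with 3 rows of 4 values
indices = np.array([0, 2])          # row 0 from x[0], row 2 from x[1]

# pair batch index k with indices[k]
out = x[np.arange(len(indices)), indices]
print(out.shape)  # (2, 4)
print(out)        # [[ 0  1  2  3]
                  #  [20 21 22 23]]
```

Add out[:, None] if the (2, 1, 4) shape from the question is needed.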
3
4
75,501,247
2023-2-19
https://stackoverflow.com/questions/75501247/plotting-a-3-dimensional-superball-shape
I'm trying to plot a 3D superball in python matplotlib, where a superball is defined as a general mathematical shape that can be used to describe rounded cubes using a shape parameter p, where for p = 1 the shape is equal to that of a sphere. This paper claims that the superball is defined by using modified spherical coordinates with: x = r*cos(u)**1/p * sin(v)**1/p y = r*sin(u)**1/p * sin(v)**1/p z = r*cos(v)**1/p with u = phi and v = theta. I managed to get the code running, at least for p = 1 which generates a sphere - exactly as it should do: import matplotlib.pyplot as plt import numpy as np fig = plt.figure() ax = fig.add_subplot(projection='3d') r, p = 1, 1 # Make data u = np.linspace(0, 2 * np.pi, 100) v = np.linspace(0, np.pi, 100) u, v = np.meshgrid(u, v) x = r * np.cos(u)**(1/p) * np.sin(v)**(1/p) y = r * np.sin(u)**(1/p) * np.sin(v)**(1/p) z = r * np.cos(v)**(1/p) # Plot the surface ax.plot_surface(x, y, z) plt.show() This is a 3D plot of the code above for p = 1. However, as I put in any other value for p, e.g. 2, it's giving me only a partial shape, while it should actually give me a full superball. This is a 3D plot of the code above for p = 2. I believe the fix is more of a mathematical nature, but how can this be fixed?
When plotting a regular sphere, we transform positive and negative coordinates differently: Positives: x**0.5 Negatives: -1 * abs(x)**0.5 For the superball variants, apply the same logic using np.sign and np.abs: power = lambda base, exp: np.sign(base) * np.abs(base)**exp x = r * power(np.cos(u), 1/p) * power(np.sin(v), 1/p) y = r * power(np.sin(u), 1/p) * power(np.sin(v), 1/p) z = r * power(np.cos(v), 1/p) Full example for p = 4: import matplotlib.pyplot as plt import numpy as np fig, ax = plt.subplots(subplot_kw={'projection': '3d'}) r, p = 1, 4 # Make the data u = np.linspace(0, 2 * np.pi) v = np.linspace(0, np.pi) u, v = np.meshgrid(u, v) # Transform the coordinates # Positives: base**exp # Negatives: -abs(base)**exp power = lambda base, exp: np.sign(base) * np.abs(base)**exp x = r * power(np.cos(u), 1/p) * power(np.sin(v), 1/p) y = r * power(np.sin(u), 1/p) * power(np.sin(v), 1/p) z = r * power(np.cos(v), 1/p) # Plot the surface ax.plot_surface(x, y, z) plt.show()
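The signed power helper is the essential trick here: a plain fractional power of a negative float produces nan in NumPy, which is exactly why part of the surface disappeared. A standalone check of the helper (illustration only):

```python
import numpy as np

# Sign-split fractional power: works for negative bases
power = lambda base, exp: np.sign(base) * np.abs(base)**exp

vals = np.array([-8.0, 0.0, 8.0])
print(power(vals, 1/3))  # close to [-2.  0.  2.]

with np.errstate(invalid='ignore'):
    plain = vals ** (1/3)  # nan for the negative entry
print(np.isnan(plain))   # [ True False False]
```

This is why the transformed coordinates remain defined over the whole u, v grid for any p, instead of dropping the quadrants where cos or sin is negative.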
3
4
75,503,936
2023-2-19
https://stackoverflow.com/questions/75503936/or-condition-in-css-selector-with-selenium-python
I hope you're fine. I'm scraping the logos of some websites. I'm using the following code to locate them. I don't use a tag, only the *, because the class or attribute that contains the substring 'logo' is not always in a <div> or <a> tag. driver.find_element(By.CSS_SELECTOR, "*[class*='logo']") I have obtained some of them, but in some cases the 'class' doesn't have the substring 'logo'. I've checked some websites and the logo has attributes like 'id', 'alt' or 'name' that contain the substring 'logo'. So I want to know if there is some condition like OR that can be applied, so that if there is no match on 'class' it then checks 'id', etc. I tried these options but both raise an error: driver.find_element(By.CSS_SELECTOR, "*[class*='logo'] | *[id*='logo']") driver.find_element(By.CSS_SELECTOR, "*[class*='logo'] || *[id*='logo']") In both cases the error is: selenium.common.exceptions.InvalidSelectorException: Message: invalid selector: An invalid or illegal selector was specified
You can use , to group multiple CSS selectors. driver.find_element(By.CSS_SELECTOR, "[class*='logo'], [id*='logo']")
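If you later want to fall back on more attributes (alt, name, …), the comma grouping extends naturally. A small sketch that only builds the selector string (the attribute list is illustrative):

```python
attributes = ("class", "id", "alt", "name")
selector = ", ".join(f"[{attr}*='logo']" for attr in attributes)
print(selector)
# The resulting string is then passed to driver.find_element(By.CSS_SELECTOR, selector)
```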
3
4
75,501,133
2023-2-19
https://stackoverflow.com/questions/75501133/unsupported-interpolation-type-using-env-variables-in-hydra
What I'm trying to do: use environment variables in a Hydra config. I worked from the following links: OmegaConf: Environment variable interpolation and Hydra: Job Configuration. This is my config.yaml: hydra: job: env_copy: - EXPNAME # I also tried hydra:EXPNAME and EXPNAME, # which return None test: ${env:EXPNAME} Then I set the environment variable (Ubuntu) with: export EXPNAME="123" The error I get is omegaconf.errors.UnsupportedInterpolationType: Unsupported interpolation type env full_key: test object_type=dict
Try this (env was removed a long time ago in favor of oc.env):
test: ${oc.env:EXPNAME}
I don't think the rest is needed if all you need is to access environment variables on your local machine.
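OmegaConf's oc.env resolver also accepts a default after a comma, which is used when the variable is unset. A config sketch (the default_value here is illustrative):

```yaml
test: ${oc.env:EXPNAME,default_value}
```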
5
5
75,495,212
2023-2-18
https://stackoverflow.com/questions/75495212/type-hinting-numpy-arrays-and-batches
I'm trying to create a few array types for a scientific python project. So far, I have created generic types for 1D, 2D and ND numpy arrays: from typing import Any, Generic, Protocol, Tuple, TypeVar import numpy as np from numpy.typing import _DType, _GenericAlias Vector = _GenericAlias(np.ndarray, (Tuple[int], _DType)) Matrix = _GenericAlias(np.ndarray, (Tuple[int, int], _DType)) Tensor = _GenericAlias(np.ndarray, (Tuple[int, ...], _DType)) The first issue is that mypy says that Vector, Matrix and Tensor are not valid types (e.g. when I try myvar: Vector[int] = np.array([1, 2, 3])) The second issue is that I'd like to create a generic type Batch that I'd like to use like so: Batch[Vector[complex]] should be like Matrix[complex], Batch[Matrix[float]] should be like Tensor[float] and Batch[Tensor[int] should be like Tensor[int]. I am not sure what I mean by "should be like" I guess I mean that mypy should not complain. How to I get about this?
You should not be using protected members (names starting with an underscore) from the outside. They are typically marked this way to indicated implementation details that may change in the future, which is exactly what happened here between versions of numpy. For example in 1.24 your import line from numpy.typing fails at runtime because the members you try to import are no longer there. There is no need to use internal alias constructors because numpy.ndarray is already generic in terms of the array shape and its dtype. You can construct your own type aliases fairly easily. You just need to ensure you parameterize the dtype correctly. Here is a working example: from typing import Tuple, TypeVar import numpy as np T = TypeVar("T", bound=np.generic, covariant=True) Vector = np.ndarray[Tuple[int], np.dtype[T]] Matrix = np.ndarray[Tuple[int, int], np.dtype[T]] Tensor = np.ndarray[Tuple[int, ...], np.dtype[T]] Usage: def f(v: Vector[np.complex64]) -> None: print(v[0]) def g(m: Matrix[np.float_]) -> None: print(m[0]) def h(t: Tensor[np.int32]) -> None: print(t.reshape((1, 4))) f(np.array([0j+1])) # prints (1+0j) g(np.array([[3.14, 0.], [1., -1.]])) # prints [3.14 0. ] h(np.array([[3.14, 0.], [1., -1.]])) # prints [[ 3.14 0. 1. -1. ]] The issue currently is that shapes have almost no typing support, but work is underway to implement that using the new TypeVarTuple capabilities provided by PEP 646. Until then, there is little practical use in discriminating the types by shape. The batch issue should be a separate question. Try and ask one question at a time.
3
7
75,495,278
2023-2-18
https://stackoverflow.com/questions/75495278/how-to-prevent-vscode-from-reordering-python-imports-across-statements
This is the correct way to import Gtk3 into python: import gi gi.require_version('Gtk', '3.0') from gi.repository import Gtk, Gdk, GObject When I save such code in VSCode with "editor.formatOnSave": true, it gets reordered to: from gi.repository import Gtk, Gdk import gi gi.require_version('Gtk', '3.0') which makes Gtk to be loaded before I have the chance to specify the version I am using, which at very least leads to the following warning being displayed: PyGIWarning: Gtk was imported without specifying a version first. Use gi.require_version('Gtk', '4.0') before import to ensure that the right version gets loaded. or worse, get me an exception like: ValueError: Namespace Gtk is already loaded with version 4.0 Now, I like VSCode code formatting, but I don't want it to reorder my imports, specially not across statements (being that imports in python have side effects). How to properly use VSCode's Python code formatter with Gtk?
To prevent VSCode from reordering Python imports across statements, you can configure the editor to use a specific Python code formatter that maintains the order of the imports as they are in the original code. Here's how you can do it:

1. Install the Python extension for VSCode if you haven't already done so.
2. Install a Python code formatter that maintains the order of imports, such as isort or yapf. You can do this by running the command pip install isort or pip install yapf in your terminal.
3. In VSCode, open the settings by clicking on the gear icon in the bottom left corner and selecting "Settings".
4. Search for "Python › Formatting: Provider" in the search bar.
5. Select "Edit in settings.json" to edit the settings.json file.
6. Add the following line to the file, depending on which code formatter you installed:
   - For isort: "python.formatting.provider": "isort"
   - For yapf: "python.formatting.provider": "yapf"
7. Save the file and close the settings.
4
6
75,491,056
2023-2-18
https://stackoverflow.com/questions/75491056/how-does-sympy-handle-exponents-to-the-0-5-power
((gamma-(gamma**2-omega**2)**0.5)*(gamma+(gamma**2-omega**2)**0.5)).simplify()
The output is:
gamma^2 - (gamma^2 - omega^2)^{1.0}
However, I expected the result to be omega^2. I know the sympy docs warn about being careful with floating point numbers, but I was under the impression that integers and also fractional powers of 2 (which can be represented exactly) were fine.
The following code correctly reproduces omega^2:
((gamma-(gamma**2-omega**2)**sym.Rational(1,2))*(gamma+(gamma**2-omega**2)**sym.Rational(1,2))).simplify()
Why does the first code not produce the expected result?
SymPy considers that there is a distinction between exact and inexact numbers. In this context floats like 0.5 and 1.0 are considered to be inexact and therefore it is not clear that x**1.0 is really equal to x or equal to something slightly different like say x**1.00000000000000000000001. That is because floats usually arise from floating point calculations which can have rounding errors. In your example the result is: In [5]: from sympy import * In [6]: gamma, omega = symbols('gamma, omega') In [7]: e = ((gamma-(gamma**2-omega**2)**0.5)*(gamma+(gamma**2-omega**2)**0.5)).simplify() In [8]: e Out[8]: 1.0 2 ⎛ 2 2⎞ γ - ⎝γ - ω ⎠ If you want to tell SymPy that the 1.0 should be treated as an exact 1 then you can use SymPy's nsimplify function: In [9]: nsimplify(e) Out[9]: 2 ω
3
6
75,486,790
2023-2-17
https://stackoverflow.com/questions/75486790/sending-a-word-document-without-saving-it-on-the-flask-server
Good day. Today I'm trying to send a document generated on the server to the user on the click of a button using Flask.
My task is this: Create a document (without saving it on the server). And send it to the user.
However, using a JavaScript handler, I track the button click on the form and use fetch to make a request to the server. The server retrieves the necessary data and creates a Word document based on it. How can I form a response to a request so that the file starts downloading?
Code since the creation of the document. (The text of the Word document has been replaced)
Python Flask:
document = Document()
document.add_heading("Some head-title")
document.add_paragraph('Some text')
f = BytesIO()
document.save(f)
f.seek(0)
return send_file(f, as_attachment=True, download_name='some.docx')
However, the file does not start downloading. How can I send a file from the server to the user?
Edits
This is my js request.
fetch('/getData', {
    method : 'POST',
    headers: {
        'Accept': 'application/json',
        'Content-Type': 'application/json'
    },
    body: JSON.stringify({
        someData: someData,
    })
})
.then(response => response.text() )
.then(response =>{
    console.log(response);
});
This is my html
<form action="" name="getData" method="post" enctype="multipart/form-data">
    <button type = "submit" name = "Download">Download</button>
</form>
You need to specify the mimetype. Flask tries to detect the mimetype from the filename, but since we are not saving the file there is no filename to inspect, so it has to be given explicitly:
return send_file(f, mimetype='application/msword', as_attachment=True, download_name='output.doc')
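If you're unsure which mimetype matches a given download_name, the standard-library mimetypes module can look it up. A quick sketch (the long .docx type is hard-coded as a fallback since not every platform's mime table knows it):

```python
import mimetypes

# .doc maps to the classic Word type used above
print(mimetypes.guess_type("output.doc")[0])

# .docx uses the newer OOXML type
docx_type = (mimetypes.guess_type("output.docx")[0]
             or "application/vnd.openxmlformats-officedocument.wordprocessingml.document")
print(docx_type)
```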
3
4
75,488,271
2023-2-17
https://stackoverflow.com/questions/75488271/modulenotfounderror-no-module-named-app-lang-app-is-not-a-package
I have this file structure in my python project |__src |__main.py |__gen.py |__app |__ __init__.py |__ app.py |__ lang.py Intention I want to use the Language class from sibling module lang. So I tried with this import statement in app.py: from app.lang import Language Issue But when I run app.py I get a ModuleNotFoundError error saying 'app' is not a package: Which doesn't make sense since app has __init__.py. How can I solve this?
Because both app.py and lang.py are in the same directory, use a relative import:
from .lang import Language
Alternatively, from app.lang import Language works from a file located outside the app folder.
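The reason running app.py directly fails is that a file executed as a script has no package context; running it as a module (python -m app.app) does. A self-contained sketch that rebuilds the question's layout in a temporary directory to demonstrate:

```python
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as root:
    # Recreate app/__init__.py, app/lang.py and app/app.py
    pkg = os.path.join(root, "app")
    os.makedirs(pkg)
    open(os.path.join(pkg, "__init__.py"), "w").close()
    with open(os.path.join(pkg, "lang.py"), "w") as f:
        f.write("class Language:\n    pass\n")
    with open(os.path.join(pkg, "app.py"), "w") as f:
        f.write("from .lang import Language\nprint(Language.__name__)\n")

    # Run as a module: the relative import resolves because 'app' is a package
    result = subprocess.run([sys.executable, "-m", "app.app"],
                            cwd=root, capture_output=True, text=True)
    print(result.stdout.strip())
```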
3
2
75,462,344
2023-2-15
https://stackoverflow.com/questions/75462344/make-pip-install-e-build-cython-extensions-with-pyproject-toml
With the move to the new pyproject.toml system, I was wondering whether there was a way to install packages in editable mode while compiling extensions (which pip install -e . does not do). So I want pip to: run the build_ext I configured for Cython and generate my .so files put them in the local folder do the rest of the normal editable install I found some mentions of build_wheel_for_editable on the pip documentation but I could not find any actual example of where this hook should be implemented and what it should look like. (to be honest, I'm not even completely sure this is what I'm looking for) So would anyone know how to do that? I'd also happy about any additional explanation as to why pip install . runs build_ext but the editable command does not. Details: I don't have a setup.py file anymore; the pyproject.toml uses setuptools and contains [build-system] requires = ["setuptools>=61.0", "numpy>=1.17", "cython>=0.18"] build-backend = "setuptools.build_meta" [tool.setuptools] package-dir = {"" = "."} [tool.setuptools.packages] find = {} [tool.setuptools.cmdclass] build_ext = "_custom_build.build_ext" The custom build_ext looks like from setuptools import Extension from setuptools.command.build_ext import build_ext as _build_ext from Cython.Build import cythonize class build_ext(_build_ext): def initialize_options(self): super().initialize_options() if self.distribution.ext_modules is None: self.distribution.ext_modules = [] extensions = Extension(...) self.distribution.ext_modules.extend(cythonize(extensions)) def build_extensions(self): ... super().build_extensions() It builds a .pyx into .cpp, then adds it with another cpp into a .so.
I created a module that looks like this:
$ tree .
.
├── pyproject.toml
├── setup.py
└── test
    └── helloworld.pyx

1 directory, 3 files
My pyproject.toml looks like:
[build-system]
requires = ["setuptools>=61.0", "numpy>=1.17", "cython>=0.18"]
build-backend = "setuptools.build_meta"

[tool.setuptools]
py-modules = ["test"]

[project]
name = "test"
version = "0.0.1"
My setup.py:
from setuptools import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize("test/helloworld.pyx"))
And helloworld.pyx just contains print("Hello world").
When I do pip install -e ., it builds the cython file as expected.
If you really don't want to have a setup.py at all, I think you'll need to override build_py instead of build_ext, but IMO just having the simple setup.py file isn't a big deal.
5
4
75,486,472
2023-2-17
https://stackoverflow.com/questions/75486472/flask-teardown-request-equivalent-in-fastapi
I am building a REST API with FastAPI. I implemented the data layer separately from the FastAPI application, meaning I do not have direct access to the database session in my FastAPI application. I have access to the storage object, which has a method like close_session that allows me to close the current session.
Is there an equivalent of Flask's teardown_request in FastAPI?
Flask implementation:
from models import storage
.....
.....
@app.teardown_request
def close_session(exception=None):
    storage.close_session()
I have looked at FastAPI's on_event('shutdown') and on_event('startup'). These two only run when the application is shutting down or starting up.
We can do this by using a dependency. Credit to williamjemir: click here to read the GitHub discussion.
from fastapi import FastAPI, Depends
from models import storage


async def close_session() -> None:
    """Close the current session after every request."""
    print('Closing current session')
    yield
    storage.close()
    print('db session closed.')


app = FastAPI(dependencies=[Depends(close_session)])


@app.get('/')
def home():
    return "Hello World"


if __name__ == '__main__':
    import uvicorn
    uvicorn.run(app)
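The yield-style dependency works like an ordinary Python generator-based context manager: everything before the yield runs before the request is handled, everything after it runs at teardown. A framework-free sketch of that ordering, using contextlib:

```python
from contextlib import contextmanager

events = []

@contextmanager
def close_session():
    events.append("setup")       # before the request handler
    yield
    events.append("teardown")    # after the handler returns

with close_session():
    events.append("handle request")

print(events)
```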
3
3
75,480,406
2023-2-17
https://stackoverflow.com/questions/75480406/remembering-the-previous-conversation-of-a-chatbot
I have created a basic ChatBot using OpenAI with the following code:
import openai

openai.api_key = "sk-xxx"

while True:
    prompt = input("User:")
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=50,
        temperature=0,
    )
    print(response.choices[0].text)
This is the input and output, in text form:
User:What is python
Python is a high-level, interpreted, general-purpose programming language. It is a powerful and versatile language that is used for a wide range of applications, from web development and software development to data science and machine learning.
User:What is the Latest Version of it?
The latest version of Microsoft Office is Microsoft Office 2019.
As you can see, I am asking questions related to Python, but asking for the version of it gives an answer related to Microsoft Office, whereas when asking the same question to ChatGPT it remembers the previous conversation and acts according to it.
Is there any solution for remembering the conversation?
One possibility would be to store the inputs and outputs somewhere, and then include them in subsequent inputs. This is very rudimentary but you could do something like the following:
inputs, outputs = [], []

while True:
    prompt = input("Enter input (or 'quit' to exit):")
    if prompt == 'quit':
        break
    if len(inputs) > 0:
        # Capture the *previous* exchange before storing the new prompt
        last_input, last_output = inputs[-1], outputs[-1]
        full_prompt = f"{prompt} (based on my previous question: {last_input}, and your previous answer: {last_output})"
    else:
        full_prompt = prompt
    inputs.append(prompt)

    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=full_prompt,
        max_tokens=200,
        temperature=0,
    )
    output = response.choices[0].text
    outputs.append(output)
    print(output)
This program would be able to recall the last input and output, and provide that information along with the current prompt. You could include more lines of input and output depending on how much "memory" you want your program to have, and there is also a text limit (max_tokens), so you may need to adjust the wording so that the entire prompt makes sense. And to avoid an infinite loop, there is a 'quit' condition to exit the while loop.
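To carry more than one turn of context without exceeding max_tokens, a rolling window over the history is a common pattern. A framework-free sketch (the API call is replaced by a stand-in string):

```python
from collections import deque

MAX_TURNS = 3
history = deque(maxlen=MAX_TURNS)    # keeps only the last N (question, answer) pairs

def build_prompt(new_question):
    context = "\n".join(f"Q: {q}\nA: {a}" for q, a in history)
    prefix = context + "\n" if context else ""
    return f"{prefix}Q: {new_question}\nA:"

# Simulated conversation: old turns fall off the window automatically
for i in range(5):
    prompt = build_prompt(f"question {i}")
    answer = f"answer {i}"           # stand-in for the OpenAI API call
    history.append((f"question {i}", answer))

print(len(history))
print(history[0][0])
```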
3
3
75,479,046
2023-2-16
https://stackoverflow.com/questions/75479046/how-can-i-combine-a-scatter-plot-with-a-density-heatmap
I have a series of scatterplots (one example below), but I want to modify it so that the colors of the points in the plot become more red (or "hot") when they are clustered more closely with other points, while points that are spread out further are colored more blue (or "cold"). Is it possible to do this? Currently, my code is pretty basic in its set up. import plotly.express as px fig = px.scatter(data, x='A', y='B', trendline='ols')
Using scipy.stats.gaussian_kde you can calculate the density and then use this to color the plot: import pandas as pd import plotly.express as px from scipy import stats df = pd.DataFrame({ 'x':[0,0,1,1,2,2,2.25,2.5,2.5,3,3,4,2,4,8,2,2.75,3.5,2.5], 'y':[0,2,3,2,1,2,2.75,2.5,3,3,4,1,5,4,8,4,2.75,1.5,3.25] }) kernel = stats.gaussian_kde([df.x, df.y]) df['z'] = kernel([df.x, df.y]) fig = px.scatter(df, x='x', y='y', color='z', trendline='ols', color_continuous_scale=px.colors.sequential.Bluered) output:
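If adding scipy as a dependency is undesirable, a crude density proxy is the number of neighbours each point has within a fixed radius. A pure-Python sketch (the function name and radius are illustrative); its result can be passed as the color column just like z above:

```python
def neighbour_density(xs, ys, radius=1.0):
    """For each point, count how many *other* points lie within `radius`."""
    r2 = radius * radius
    return [
        sum((x - x2) ** 2 + (y - y2) ** 2 <= r2 for x2, y2 in zip(xs, ys)) - 1
        for x, y in zip(xs, ys)
    ]

xs = [0.0, 0.1, 0.2, 5.0]
ys = [0.0, 0.1, 0.2, 5.0]
density = neighbour_density(xs, ys)
print(density)   # the three clustered points score higher than the outlier
```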
4
4
75,480,456
2023-2-17
https://stackoverflow.com/questions/75480456/detecting-handwritten-boxes-using-opencv
I have the following image: I want to extract the boxed diagrams as so: Here's what I've attempted: import cv2 import matplotlib.pyplot as plt # Load the image image = cv2.imread('diagram.jpg') # Convert to grayscale gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) # Apply thresholding to create a binary image _, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV) # Find contours contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) # Draw the contours cv2.drawContours(image, contours, -1, (0, 0, 255), 2) # Show the final image plt.imshow(image), plt.show() However, I've realized it'll be difficult to extract the diagrams because the contours aren't closed: I've tried using morphological closing to close the gaps in the box edges: # Define a rectangular kernel for morphological closing kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)) # Perform morphological closing to close the gaps in the box edges closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel) But this changes almost nothing. How should I approach this problem?
We may replace morphological closing with dilate then erode, but filling the contours between the dilate and erode.
For filling the gaps, the kernel size should be much larger than 5x5 (I used 51x51).

Assuming the handwritten boxes are colored, we may convert from BGR to HSV, and apply the threshold on the saturation channel of HSV:
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)  # Convert from BGR to HSV color space
gray = hsv[:, :, 1]  # Use saturation from HSV channel as "gray".
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU)  # Apply automatic thresholding (use THRESH_OTSU).
Apply dilate with large kernel, and use drawContours for filling the contours:
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (51, 51))  # Use relatively large kernel for closing the gaps
dilated = cv2.dilate(thresh, kernel)  # Dilate with large kernel
contours, hierarchy = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(dilated, contours, -1, 255, -1)
Apply erode after filling the contours
Erode after dilate is equivalent to closing, but here we are closing after filling.
closed = cv2.erode(dilated, kernel)
Code sample:
import cv2
import numpy as np

# Load the image
image = cv2.imread('diagram.png')

hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)  # Convert from BGR to HSV color space

# Convert to grayscale
#gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = hsv[:, :, 1]  # Use saturation from HSV channel as "gray".

# Apply thresholding to create a binary image
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU)  # Apply automatic thresholding (use THRESH_OTSU).

thresh = np.pad(thresh, ((100, 100), (100, 100)))  # Add zero padding (required due to large dilate kernels).

# Define a rectangular kernel for morphological operations.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (51, 51))  # Use relatively large kernel for closing the gaps

dilated = cv2.dilate(thresh, kernel)  # Dilate with large kernel

# Fill the contours, before applying erode.
contours, hierarchy = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cv2.drawContours(dilated, contours, -1, 255, -1) closed = cv2.erode(dilated, kernel) # Apply erode after filling the contours. closed = closed[100:-100, 100:-100] # Remove the padding. # Find contours contours, hierarchy = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) # Draw the contours cv2.drawContours(image, contours, -1, (255, 0, 0), 2) # Show images for testing # plt.imshow(image), plt.show() cv2.imshow('gray', gray) cv2.imshow('thresh', thresh) cv2.imshow('dilated', dilated) cv2.imshow('closed', closed) cv2.imshow('image', image) cv2.waitKey() cv2.destroyAllWindows() Result: gray (saturation channel): thresh: dilated (after filling): closed:
5
3
75,472,350
2023-2-16
https://stackoverflow.com/questions/75472350/how-to-resolve-error-could-not-build-wheels-for-matplotlib-which-is-required
I am encountering the following error when attempting to install matplotlib in an alpine Docker image: error: Failed to download any of the following: ['http://www.qhull.org/download/qhull-2020-src-8.0.2.tgz']. Please download one of these urls and extract it into 'build/' at the top-level of the source repository. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for matplotlib Failed to build matplotlib ERROR: Could not build wheels for matplotlib, which is required to install pyproject.toml-based projects My Python version is 3.9.12. How can I resolve this error?
This solved my problem: pip install matplotlib==3.2.1
6
4
75,479,969
2023-2-17
https://stackoverflow.com/questions/75479969/how-to-get-all-data-when-show-more-button-clicked-with-scrapy-playwright
Currently, I've had trouble getting all data on this page: https://www.espn.com/nba/stats/player/_/season/2023/seasontype/2
If I scrape it right now it only gets 50 rows of data, which is not what I want. What I want is to scrape all the data; to show the full table you have to click the "show more" button until there is no "show more" button left.
I'm using scrapy+playwright as a scraping tool
def start_requests(self):
    yield scrapy.Request(
        url='https://www.espn.com/nba/stats/player/_/season/2023/seasontype/2',
        meta=dict(
            playwright=True,
            playwright_include_page=True,
            playwright_page_coroutines=[
                PageMethod('click', '//a[@class="AnchorLink loadMore__link"]'),
                PageMethod('wait_for_selector', "//table[@class='Table Table--align-right Table--fixed Table--fixed-left']//tbody//tr"),
            ]
        ),
        callback=self.parse,
    )

async def parse(self, response):
    page = response.meta["playwright_page"]
    button = page.locator("xpath=//a[@class='AnchorLink loadMore__link']")
    resp = response.body
    sel = Selector(text=resp)
    player_list = sel.xpath(
        "//table[@class='Table Table--align-right Table--fixed Table--fixed-left']//tbody//tr")
    stats_list = sel.xpath(
        "//div[@class='Table__ScrollerWrapper relative overflow-hidden']/div[@class='Table__Scroller']/table/tbody/tr")

    for player, stat in zip(player_list, stats_list):
        player_name = player.xpath(".//a/text()").get()
        position = stat.xpath(".//td/div/text()").get()
        team_name = player.xpath(".//span/text()").get()
        game_played = stat.xpath(".//td[2]/text()").get()
        minutes_per_minute = stat.xpath(".//td[3]/text()").get()
        points_per_game = stat.xpath(".//td[4]/text()").get()
        fields_goal_made = stat.xpath(".//td[5]/text()").get()
        fields_goal_attempted = stat.xpath(".//td[6]/text()").get()
        field_goal_percentage = stat.xpath(".//td[7]/text()").get()
        three_point_goal_made = stat.xpath(".//td[8]/text()").get()

        yield {
            "player_name": player_name,
            "player_position": position,
            "team_name": team_name,
            "game_played": game_played,
            "minutes_per_minute": minutes_per_minute,
            "points_per_game": points_per_game,
            "fields_goal_made": fields_goal_made,
            "fields_goal_attempted": fields_goal_attempted,
            "field_goal_percentage": field_goal_percentage,
            "three_point_goal_made": three_point_goal_made,
        }
I already define the button = page.locator("xpath=//a[@class='AnchorLink loadMore__link']"), so what I want is this button clicked until no "show more" button is available.
playwright_page_coroutines=[
    PageMethod('click', '//a[@class="AnchorLink loadMore__link"]'),
    PageMethod('wait_for_selector', "//table[@class='Table Table--align-right Table--fixed Table--fixed-left']//tbody//tr"),
]
I believe I'm wrong here. I think the first PageMethod handles the clicks, so it should click until there is no "show more" button, then the second PageMethod should wait until all the data has loaded, but the result is the same: only 50 rows of data.
So how do I achieve all of that? I'm kinda stuck here.
Also, I don't know if this is an issue, but if the page is reloaded it goes back to 50 rows of data.
Since your goal is to continuously find the same element until it no longer exists, you could handle all of the logic in the parse method itself. There could be better ways to handle this, but this does provide the desired full table of results in the output.
def start_requests(self):
    yield scrapy.Request(
        url='https://www.espn.com/nba/stats/player/_/season/2023/seasontype/2',
        meta=dict(
            playwright=True,
            playwright_include_page=True),
        callback=self.parse,
    )

async def parse(self, response):
    page = response.meta["playwright_page"]
    page.set_default_timeout(1000)
    try:
        while button := page.locator("//div[contains(@class,'loadMore')]/a"):
            await button.scroll_into_view_if_needed()
            await button.click()
    except:
        pass
    content = await page.content()
    sel = Selector(text=content)
    player_list = sel.xpath(
        "//table[@class='Table Table--align-right Table--fixed Table--fixed-left']//tbody//tr")
    stats_list = sel.xpath(
        "//div[@class='Table__ScrollerWrapper relative overflow-hidden']/div[@class='Table__Scroller']/table/tbody/tr")

    for player, stat in zip(player_list, stats_list):
        player_name = player.xpath(".//a/text()").get()
        position = stat.xpath(".//td/div/text()").get()
        team_name = player.xpath(".//span/text()").get()
        game_played = stat.xpath(".//td[2]/text()").get()
        minutes_per_minute = stat.xpath(".//td[3]/text()").get()
        points_per_game = stat.xpath(".//td[4]/text()").get()
        fields_goal_made = stat.xpath(".//td[5]/text()").get()
        fields_goal_attempted = stat.xpath(".//td[6]/text()").get()
        field_goal_percentage = stat.xpath(".//td[7]/text()").get()
        three_point_goal_made = stat.xpath(".//td[8]/text()").get()

        yield {
            "player_name": player_name,
            "player_position": position,
            "team_name": team_name,
            "game_played": game_played,
            "minutes_per_minute": minutes_per_minute,
            "points_per_game": points_per_game,
            "fields_goal_made": fields_goal_made,
            "fields_goal_attempted": fields_goal_attempted,
            "field_goal_percentage": field_goal_percentage,
            "three_point_goal_made": three_point_goal_made,
        }
3
3
75,480,225
2023-2-17
https://stackoverflow.com/questions/75480225/using-if-else-in-with-statement-in-python
I want to open a file that may be gzipped or not. To open the file, I use either with open(myfile, 'r') as f: some_func(f) # arbitrary function or import gzip with gzip.open(myfile, 'r') as f: some_func(f) I want to check if myfile has a gz extension or not, and then from there decide which with statement to use. Here's what I have: # myfile_gzipped is a Boolean variable that tells me whether it's gzipped or not if myfile_gzipped: with gzip.open(myfile, 'rb') as f: some_func(f) else: with open(myfile, 'r') as f: some_func(f) How should I go about it, without having to repeat some_func(f)?
if myfile_gzipped: f = gzip.open(myfile, 'rb') else: f = open(myfile, 'r') with f: some_func(f) The result of open and gzip.open is a context manager. with invokes the entry and exit methods on context managers. There is nothing special in calling those functions inside the with statement itself.
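A related refinement: instead of trusting the file extension, you can sniff the two gzip magic bytes. A small sketch (smart_open is an illustrative name):

```python
import gzip

def smart_open(path):
    """Open `path` as text, transparently decompressing if it is gzipped."""
    with open(path, 'rb') as probe:
        magic = probe.read(2)
    if magic == b'\x1f\x8b':          # gzip files start with these two bytes
        return gzip.open(path, 'rt')
    return open(path, 'r')
```

With this helper, `with smart_open(myfile) as f: some_func(f)` works for both kinds of file without a myfile_gzipped flag.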
3
6
75,478,554
2023-2-16
https://stackoverflow.com/questions/75478554/fill-gaps-between-1s-in-pandas-dataframe-column-with-increment-values-that-rese
Apparently this is a more complicated problem than I thought. All I want to do is fill the zeros with ++1 increments until the next 1 My dataset is 1m+ rows, so I'm trying to vectorize this operation if possible. Here's a sample column: # Define the input dataframe df = pd.DataFrame({'col': [1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0]}) 0 1 1 0 2 1 3 0 4 1 5 1 6 0 7 0 8 0 9 0 10 1 11 0 12 1 13 1 14 0 Goal Result: 0 1 1 2 2 1 3 2 4 1 5 1 6 2 7 3 8 4 9 5 10 1 11 2 12 1 13 1 14 2 I've tried a number of different methods with ffill() and cumsum(), but the issue with cumsum() tends to be that it doesn't reset the increment.
Group by cumulative sums of column col and apply cumcount: df['col'] = df.groupby(df['col'].cumsum())['col'].cumcount() + 1 col 0 1 1 2 2 1 3 2 4 1 5 1 6 2 7 3 8 4 9 5 10 1 11 2 12 1 13 1 14 2
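For intuition, here is the same reset-on-1 counter written as a plain Python loop over the column values:

```python
data = [1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0]

out = []
count = 0
for v in data:
    count = 1 if v == 1 else count + 1   # a 1 restarts the counter
    out.append(count)

print(out)
```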
3
2
75,467,411
2023-2-16
https://stackoverflow.com/questions/75467411/conda-what-difference-does-it-make-if-we-set-pip-interop-enabled-true
There are many posts on this site which reference, typically in passing, the idea of setting pip_interop_enabled=True within some environment. This makes conda and pip3 somehow interact better, I am told. To be precise, people say conda will search PyPI for packages that don't exist in the main channels if this is true. They also say it's "experimental." Here is conda's documentation about this. It notes that much of conda's behavior in recent versions has also improved even with pip_interop_enabled=False, leading to questions about what this setting even does. Here is my question: in real terms, what does all of this mean? Is the only difference that conda will search PyPI if this is True and not if it's False? Are there other things that it does? For instance, if I need to install some package from pip, will conda know better not to clobber it if this setting is True? What, to be precise, goes wrong if I set this to True? Are there known edge cases that somehow break things if this "experimental" setting is set to True? Why would I ever not want to set this?
Not a PyPI Searching Feature First, let's clarify: Conda will not "search PyPI" - that is not what the pip_interop_enabled configuration option adds. Rather, it enables the solver to allow a package already installed with pip to satisfy a dependency requirement of a Conda package. Note that the option is about Pip interoperability (as distinct from PyPI) and it doesn't matter whether the package was sourced from PyPI, GitHub, local, etc.. Example: scipy -> numpy Let's consider a simple example to illustrate the behavior. Start with the following environment that has Python 3.10 and numpy installed from PyPI. pip_interop.yaml name: pip_interop channels: - conda-forge dependencies: - python=3.10 - pip ## PyPI packages - pip: - numpy which we can create with conda env create -n pip_interop -f pip_interop.yaml and verify that the numpy is from PyPI: $ conda list -n pip_interop numpy # packages in environment at /Users/user/mambaforge/envs/pip_interop: # # Name Version Build Channel numpy 1.24.2 pypi_0 Let's see what would happen installing scipy and in particular, how it satisfies its numpy dependency. 
Installing without Pip interoperability In default mode, we see the following behavior $ conda install -n pip_interop scipy Collecting package metadata (current_repodata.json): done Solving environment: done ## Package Plan ## environment location: /Users/user/mambaforge/envs/pip_interop added / updated specs: - scipy The following packages will be downloaded: package | build ---------------------------|----------------- cryptography-39.0.1 | py310hdd0c95c_0 1.1 MB numpy-1.24.2 | py310h788a5b3_0 6.1 MB scipy-1.10.0 | py310h240c617_2 20.2 MB ------------------------------------------------------------ Total: 27.4 MB The following NEW packages will be INSTALLED: appdirs conda-forge/noarch::appdirs-1.4.4-pyh9f0ad1d_0 brotlipy conda-forge/osx-64::brotlipy-0.7.0-py310h90acd4f_1005 certifi conda-forge/noarch::certifi-2022.12.7-pyhd8ed1ab_0 cffi conda-forge/osx-64::cffi-1.15.1-py310ha78151a_3 charset-normalizer conda-forge/noarch::charset-normalizer-2.1.1-pyhd8ed1ab_0 cryptography conda-forge/osx-64::cryptography-39.0.1-py310hdd0c95c_0 idna conda-forge/noarch::idna-3.4-pyhd8ed1ab_0 libblas conda-forge/osx-64::libblas-3.9.0-16_osx64_openblas libcblas conda-forge/osx-64::libcblas-3.9.0-16_osx64_openblas libcxx conda-forge/osx-64::libcxx-14.0.6-hccf4f1f_0 libgfortran conda-forge/osx-64::libgfortran-5.0.0-11_3_0_h97931a8_27 libgfortran5 conda-forge/osx-64::libgfortran5-11.3.0-h082f757_27 liblapack conda-forge/osx-64::liblapack-3.9.0-16_osx64_openblas libopenblas conda-forge/osx-64::libopenblas-0.3.21-openmp_h429af6e_3 llvm-openmp conda-forge/osx-64::llvm-openmp-15.0.7-h61d9ccf_0 numpy conda-forge/osx-64::numpy-1.24.2-py310h788a5b3_0 packaging conda-forge/noarch::packaging-23.0-pyhd8ed1ab_0 pooch conda-forge/noarch::pooch-1.6.0-pyhd8ed1ab_0 pycparser conda-forge/noarch::pycparser-2.21-pyhd8ed1ab_0 pyopenssl conda-forge/noarch::pyopenssl-23.0.0-pyhd8ed1ab_0 pysocks conda-forge/noarch::pysocks-1.7.1-pyha2e5f31_6 python_abi conda-forge/osx-64::python_abi-3.10-3_cp310 requests 
conda-forge/noarch::requests-2.28.2-pyhd8ed1ab_0 scipy conda-forge/osx-64::scipy-1.10.0-py310h240c617_2 urllib3 conda-forge/noarch::urllib3-1.26.14-pyhd8ed1ab_0 Proceed ([y]/n)? Observe that despite numpy already being installed in the environment, Conda is proposing to replace it with a Conda version. That is, Conda only considers the information in conda-meta/ to determine whether a package is installed and won't check the environment's lib/python3.10/site-packages/. Installing with Pip interoperability Now we try it with pip_interop_enabled turned on: $ CONDA_PIP_INTEROP_ENABLED=1 conda install -n pip_interop scipy Collecting package metadata (current_repodata.json): done Solving environment: done ## Package Plan ## environment location: /Users/user/mambaforge/envs/pip_interop added / updated specs: - scipy The following packages will be downloaded: package | build ---------------------------|----------------- cryptography-39.0.1 | py310hdd0c95c_0 1.1 MB scipy-1.10.0 | py310h240c617_2 20.2 MB ------------------------------------------------------------ Total: 21.3 MB The following NEW packages will be INSTALLED: appdirs conda-forge/noarch::appdirs-1.4.4-pyh9f0ad1d_0 brotlipy conda-forge/osx-64::brotlipy-0.7.0-py310h90acd4f_1005 certifi conda-forge/noarch::certifi-2022.12.7-pyhd8ed1ab_0 cffi conda-forge/osx-64::cffi-1.15.1-py310ha78151a_3 charset-normalizer conda-forge/noarch::charset-normalizer-2.1.1-pyhd8ed1ab_0 cryptography conda-forge/osx-64::cryptography-39.0.1-py310hdd0c95c_0 idna conda-forge/noarch::idna-3.4-pyhd8ed1ab_0 libblas conda-forge/osx-64::libblas-3.9.0-16_osx64_openblas libcblas conda-forge/osx-64::libcblas-3.9.0-16_osx64_openblas libcxx conda-forge/osx-64::libcxx-14.0.6-hccf4f1f_0 libgfortran conda-forge/osx-64::libgfortran-5.0.0-11_3_0_h97931a8_27 libgfortran5 conda-forge/osx-64::libgfortran5-11.3.0-h082f757_27 liblapack conda-forge/osx-64::liblapack-3.9.0-16_osx64_openblas libopenblas conda-forge/osx-64::libopenblas-0.3.21-openmp_h429af6e_3 llvm-openmp
conda-forge/osx-64::llvm-openmp-15.0.7-h61d9ccf_0 packaging conda-forge/noarch::packaging-23.0-pyhd8ed1ab_0 pooch conda-forge/noarch::pooch-1.6.0-pyhd8ed1ab_0 pycparser conda-forge/noarch::pycparser-2.21-pyhd8ed1ab_0 pyopenssl conda-forge/noarch::pyopenssl-23.0.0-pyhd8ed1ab_0 pysocks conda-forge/noarch::pysocks-1.7.1-pyha2e5f31_6 python_abi conda-forge/osx-64::python_abi-3.10-3_cp310 requests conda-forge/noarch::requests-2.28.2-pyhd8ed1ab_0 scipy conda-forge/osx-64::scipy-1.10.0-py310h240c617_2 urllib3 conda-forge/noarch::urllib3-1.26.14-pyhd8ed1ab_0 Proceed ([y]/n)? Note that now numpy is not proposed to be replaced, and this is because the existing pip-installed version is considered able to satisfy the dependency. Why is this experimental? There may be multiple reasons why this remains experimental after several years. One important reason is that Conda only tests its package builds against Conda builds of the dependencies. So, it cannot guarantee that the packages are functionally exchangeable. Furthermore, Conda packages often bring in non-Python dependencies. There has been a rise in wheel deployments, which is the PyPI approach to this, but isn't ubiquitous. There are still many "wrapper" packages out there where the PyPI version assumes some binary is on PATH, whereas the installation of the Conda package guarantees the binary is also installed. Another important issue is that the PyPI-Conda name mapping is not well-defined. That is, the name of a package in PyPI may not correspond to its Conda package name. This can directly lead to cryptic issues when the names diverge. Specifically, Conda will not correctly recognize that a pip-installed package satisfies the requirement when the names don't match. Hence, there is some unexpected heterogeneity in how the interoperability applies. Example: torch vs pytorch In the Python ecosystem, the torch module is provided by the PyPI package torch. However, the package torch in PyPI goes by pytorch on Conda channels.
Here's how this can lead to inconsistent behavior. Let's begin with torch installed from PyPI: pip_interop.yaml name: pip_interop channels: - conda-forge dependencies: - python=3.10 - pip ## PyPI packages - pip: - torch Creating with: conda env create -n pip_interop -f pip_interop.yaml Now if we install torchvision from Conda, even with the pip_interop_enabled on, we get: $ CONDA_PIP_INTEROP_ENABLED=1 conda install -n pip_interop torchvision Collecting package metadata (current_repodata.json): done Solving environment: done ## Package Plan ## environment location: /Users/user/mambaforge/envs/pip_interop added / updated specs: - torchvision The following packages will be downloaded: package | build ---------------------------|----------------- cryptography-39.0.1 | py310hdd0c95c_0 1.1 MB jpeg-9e | hb7f2c08_3 226 KB libprotobuf-3.21.12 | hbc0c0cd_0 1.8 MB mkl-2022.2.1 | h44ed08c_16952 113.1 MB numpy-1.24.2 | py310h788a5b3_0 6.1 MB pillow-9.4.0 | py310h306a057_1 44.1 MB pytorch-1.13.1 |cpu_py310h2bbf33f_1 56.9 MB sleef-3.5.1 | h6db0672_2 1.0 MB torchvision-0.14.1 |cpu_py310hd5ee960_0 5.9 MB ------------------------------------------------------------ Total: 230.1 MB The following NEW packages will be INSTALLED: brotlipy conda-forge/osx-64::brotlipy-0.7.0-py310h90acd4f_1005 certifi conda-forge/noarch::certifi-2022.12.7-pyhd8ed1ab_0 cffi conda-forge/osx-64::cffi-1.15.1-py310ha78151a_3 charset-normalizer conda-forge/noarch::charset-normalizer-2.1.1-pyhd8ed1ab_0 cryptography conda-forge/osx-64::cryptography-39.0.1-py310hdd0c95c_0 freetype conda-forge/osx-64::freetype-2.12.1-h3f81eb7_1 idna conda-forge/noarch::idna-3.4-pyhd8ed1ab_0 jpeg conda-forge/osx-64::jpeg-9e-hb7f2c08_3 lcms2 conda-forge/osx-64::lcms2-2.14-h29502cd_1 lerc conda-forge/osx-64::lerc-4.0.0-hb486fe8_0 libblas conda-forge/osx-64::libblas-3.9.0-16_osx64_openblas libcblas conda-forge/osx-64::libcblas-3.9.0-16_osx64_openblas libcxx conda-forge/osx-64::libcxx-14.0.6-hccf4f1f_0 libdeflate 
conda-forge/osx-64::libdeflate-1.17-hac1461d_0 libgfortran conda-forge/osx-64::libgfortran-5.0.0-11_3_0_h97931a8_27 libgfortran5 conda-forge/osx-64::libgfortran5-11.3.0-h082f757_27 liblapack conda-forge/osx-64::liblapack-3.9.0-16_osx64_openblas libopenblas conda-forge/osx-64::libopenblas-0.3.21-openmp_h429af6e_3 libpng conda-forge/osx-64::libpng-1.6.39-ha978bb4_0 libprotobuf conda-forge/osx-64::libprotobuf-3.21.12-hbc0c0cd_0 libtiff conda-forge/osx-64::libtiff-4.5.0-hee9004a_2 libwebp-base conda-forge/osx-64::libwebp-base-1.2.4-h775f41a_0 libxcb conda-forge/osx-64::libxcb-1.13-h0d85af4_1004 llvm-openmp conda-forge/osx-64::llvm-openmp-15.0.7-h61d9ccf_0 mkl conda-forge/osx-64::mkl-2022.2.1-h44ed08c_16952 numpy conda-forge/osx-64::numpy-1.24.2-py310h788a5b3_0 openjpeg conda-forge/osx-64::openjpeg-2.5.0-h13ac156_2 pillow conda-forge/osx-64::pillow-9.4.0-py310h306a057_1 pthread-stubs conda-forge/osx-64::pthread-stubs-0.4-hc929b4f_1001 pycparser conda-forge/noarch::pycparser-2.21-pyhd8ed1ab_0 pyopenssl conda-forge/noarch::pyopenssl-23.0.0-pyhd8ed1ab_0 pysocks conda-forge/noarch::pysocks-1.7.1-pyha2e5f31_6 python_abi conda-forge/osx-64::python_abi-3.10-3_cp310 pytorch conda-forge/osx-64::pytorch-1.13.1-cpu_py310h2bbf33f_1 requests conda-forge/noarch::requests-2.28.2-pyhd8ed1ab_0 sleef conda-forge/osx-64::sleef-3.5.1-h6db0672_2 tbb conda-forge/osx-64::tbb-2021.7.0-hb8565cd_1 torchvision conda-forge/osx-64::torchvision-0.14.1-cpu_py310hd5ee960_0 typing_extensions conda-forge/noarch::typing_extensions-4.4.0-pyha770c72_0 urllib3 conda-forge/noarch::urllib3-1.26.14-pyhd8ed1ab_0 xorg-libxau conda-forge/osx-64::xorg-libxau-1.0.9-h35c211d_0 xorg-libxdmcp conda-forge/osx-64::xorg-libxdmcp-1.1.3-h35c211d_0 zstd conda-forge/osx-64::zstd-1.5.2-hbc0c0cd_6 Proceed ([y]/n)? That is, Conda still tries to install pytorch and this means that it will lead to clobbering of the existing torch package installed from PyPI. 
This has the potential to leave residual files from the clobbered version of the package intermixed with the clobbering version. Basically, this is undefined behavior and the Conda software may not give you any warning about potential problems.
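To avoid prefixing every command with the environment variable, the option can also be enabled persistently (equivalently via conda config --set pip_interop_enabled true):

```yaml
# ~/.condarc -- persistently enable the experimental Pip interoperability
pip_interop_enabled: true
```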
4
7
75,478,267
2023-2-16
https://stackoverflow.com/questions/75478267/how-to-use-pandas-groupby-in-a-for-loop-futurewarning
I have the following pandas dataframe: d2 = {'col1': [0, 0, 1, 1, 2], 'col2': [10, 11, 12, 13, 14]} df2 = pd.DataFrame(data=d2) df2 Output: col1 col2 0 0 10 1 0 11 2 1 12 3 1 13 4 2 14 And I need to run the following: for i, g in df2.groupby(['col1']): col1_val = g["col1"].iloc[0] print(col1_val) The original code is more complex, but I am writing it this way for the purpose of illustration. And the part for i, g in df2.groupby(['col1']): gives the following warning: FutureWarning: In a future version of pandas, a length 1 tuple will be returned when iterating over a groupby with a grouper equal to a list of length 1. Don't supply a list with a single grouper to avoid this warning. How am I supposed to run the for loop to get rid of this warning?
This means that you should pass a plain string instead of a single-element list: for i, g in df2.groupby('col1'): col1_val = g["col1"].iloc[0] print(col1_val) If you keep the original code, in the future the loop variable i will have the values (0,)/(1,)/(2,) instead of 0/1/2
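Note the warning only concerns a one-element list of groupers. With two or more groupers the key has always been a tuple and unpacks cleanly in the loop head; a quick sketch (col3 is a made-up extra column for illustration):

```python
import pandas as pd

df2 = pd.DataFrame({
    "col1": [0, 0, 1],
    "col2": [10, 11, 12],
    "col3": ["a", "b", "a"],
})

# With several groupers the key is a tuple in every pandas version,
# so it can be unpacked directly:
for (c1, c3), g in df2.groupby(["col1", "col3"]):
    print(c1, c3, len(g))
```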
3
5
75,477,485
2023-2-16
https://stackoverflow.com/questions/75477485/how-do-i-type-hint-for-enums-in-python
I have a python function for which I want to use type hinting. There are two arguments. The first is any Enum class, the second optional arg is an element of that Enum. For example, say I have: class Foo(Enum): ALPHA = 1 BETA = 2 GAMMA = 3 The first arg would be, e.g. Foo, the second would be e.g. Foo.ALPHA What would be the correct way of type hinting this? What I have so far is: def switch(options: Enum, selected: Optional[Enum] = None) -> Enum: # Rest of fn... but that doesn't seem right.
Define a TypeVar with Enum as a bound, and then specify that your function takes the Type of that typevar and returns an instance of it: from enum import Enum from typing import Optional, Type, TypeVar _E = TypeVar('_E', bound=Enum) def switch( options: Type[_E], selected: Optional[_E] = None ) -> _E: ... Testing it in mypy with an actual Enum subclass: class Foo(Enum): ALPHA = 1 BETA = 2 GAMMA = 3 reveal_type(switch(Foo, Foo.ALPHA)) # Revealed type is "Foo"
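A quick runtime check of the annotated signature (the body here is a made-up placeholder that falls back to the first member; it is not the asker's real switch logic):

```python
from enum import Enum
from typing import Optional, Type, TypeVar

_E = TypeVar('_E', bound=Enum)

def switch(options: Type[_E], selected: Optional[_E] = None) -> _E:
    # Placeholder logic: default to the first member of the enum
    if selected is None:
        return next(iter(options))
    return selected

class Foo(Enum):
    ALPHA = 1
    BETA = 2
    GAMMA = 3

assert switch(Foo) is Foo.ALPHA          # defaulted to the first member
assert switch(Foo, Foo.BETA) is Foo.BETA  # selected member passed through
```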
3
3
75,471,388
2023-2-16
https://stackoverflow.com/questions/75471388/sentry-rate-limit-errors-sent-to-prevent-depletion-of-error-quota
When an infrastructure incident happens, the application will start to generate thousands of occurrences of the same error. Is it possible to configure some kind of rate limiting or anything like that on the sentry client (or server) to avoid depleting the error quota? I'm using Python, Django and Celery mostly.
Yes, there are multiple solutions to this common problem. Sentry is an amazing tool, but when there's that annoying bug it can cause quite the issue. Sentry has great documentation that covers this problem. There are two solutions I see as a good use case for you: Rate limiting - This enables you to control the number of incidents Sentry receives per project over a given period of time, thus some redundant events won't be measured and counted against quota. Spike protection - Maybe the best way to protect your quota is by enabling spike protection. Sentry measures your average usage and if there's an abnormal number of events coming in a short period of time they will get ignored. Rate limiting is a good protection system, but kinda hardcoded. Spike protection is dynamic and adjusts more to your personal needs. A combination of both could provide a great way to preserve your quota until the end of the month.
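Both of the above are server-side controls. The Python SDK can additionally drop events client-side through its before_send hook; below is a hedged sketch of a per-error-type limiter (the event-dict shape is simplified and make_before_send is a made-up helper name, not part of sentry-sdk):

```python
import time
from collections import defaultdict, deque

def make_before_send(max_events=5, window_seconds=60.0):
    """Return a before_send hook that drops events once an error type
    has exceeded max_events within the sliding time window."""
    seen = defaultdict(deque)

    def before_send(event, hint):
        # Group by exception type, falling back to the log message.
        values = event.get("exception", {}).get("values") or [{}]
        key = values[0].get("type") or event.get("message", "unknown")
        now = time.monotonic()
        timestamps = seen[key]
        # Drop timestamps that have fallen out of the window.
        while timestamps and now - timestamps[0] > window_seconds:
            timestamps.popleft()
        if len(timestamps) >= max_events:
            return None  # returning None tells the SDK to drop the event
        timestamps.append(now)
        return event

    return before_send
```

It would then be wired up as sentry_sdk.init(dsn=..., before_send=make_before_send()).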
3
3
75,471,318
2023-2-16
https://stackoverflow.com/questions/75471318/readonlyworksheet-object-has-no-attribute-defined-names
Whenever I try to read Excel using part=pd.read_excel(path,sheet_name = mto_sheet) I get this exception: <class 'Exception'> 'ReadOnlyWorksheet' object has no attribute 'defined_names' This is if I use Visual Studio Code and Python 3.11. However, I don't have this problem when using Anaconda. Any reason for that?
The error seems to be caused by the latest version of openpyxl. You can fix it by downgrading to a lower version: pip install --force-reinstall -v "openpyxl==3.1.0"
25
49
75,468,967
2023-2-16
https://stackoverflow.com/questions/75468967/extracting-and-replacing-a-particular-string-from-a-sentence-in-python
Say I have a string, s1="Hey Siri open call up duty" and another string s2="call up duty". Now I know that "call up duty" should be replaced by "call of duty". Say s3="call of duty". So what I want to do is that from s1 delete s2 and place s3 in its location. I am not sure how this can be done. Can anyone please guide me as I am new to python. The answer should be "Hey siri open call of duty" Note--> s2 can be anywhere within the string s1 and need not be at the last everytime
In Python, strings have a replace() method which you can easily use to replace the substring s2 with s3. s1 = "Hey Siri open call up duty" s2 = "call up duty" s3 = "call of duty" s1 = s1.replace(s2, s3) print(s1) This should do it for you. For more complex substitutions the re module can be of help.
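As a sketch of the re approach, useful for example when the capitalization of the phrase may vary:

```python
import re

s1 = "Hey Siri open Call Up Duty"
# \b anchors the match to word boundaries, and re.IGNORECASE lets it
# match the phrase regardless of capitalization.
result = re.sub(r"\bcall up duty\b", "call of duty", s1, flags=re.IGNORECASE)
print(result)  # Hey Siri open call of duty
```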
3
3
75,454,731
2023-2-15
https://stackoverflow.com/questions/75454731/python-opc-ua-client-write-variable-using-browsename
I can't find the correct syntax for assigning a value to a variable using its BrowseName. I am testing with the 'flag1' boolean variable because it is easier to debug. But my goal is be able to write in all variables, including the arrays. If I try to use the index number it works fine. import pyOPCClient as opc client = opc.opcConnect('192.168.5.10') opc.write_value_bool(client, 'ns=4;s="opcData"."flag1"', True) client.disconnect() Here is my function to write boolean ##### Function to WRITE a Boolean into Bool Object Variable - Requires Object Name ##### def write_value_bool(client, node_id, value): client_node = client.get_node(node_id) # get node client_node_value = value client_node_dv = ua.DataValue(ua.Variant(client_node_value, ua.VariantType.Boolean)) client_node.set_value(client_node_dv) print("Value of : " + str(client_node) + ' : ' + str(client_node_value)) I am getting this error: PS C:\Users\ALEMAC\Documents\Python Scripts> & C:/ProgramData/Anaconda3/python.exe "c:/Users/ALEMAC/Documents/Python Scripts/opctest.py" Requested session timeout to be 3600000ms, got 30000ms instead Traceback (most recent call last): File "c:\Users\ALEMAC\Documents\Python Scripts\opctest.py", line 5, in <module> opc.write_value_bool(client, 'ns=4;s="opcData"."flag1"', True) File "c:\Users\ALEMAC\Documents\Python Scripts\pyOPCClient.py", line 49, in write_value_bool client_node.set_value(client_node_dv) File "c:\Users\ALEMAC\Documents\Python Scripts\opcua\common\node.py", line 217, in set_value self.set_attribute(ua.AttributeIds.Value, datavalue) File "c:\Users\ALEMAC\Documents\Python Scripts\opcua\common\node.py", line 263, in set_attribute result[0].check() File "c:\Users\ALEMAC\Documents\Python Scripts\opcua\ua\uatypes.py", line 218, in check raise UaStatusCodeError(self.value) opcua.ua.uaerrors._auto.BadNodeIdUnknown: "The node id refers to a node that does not exist in the server address space."(BadNodeIdUnknown)
I see you use the pyOPCClient package. I'm not sure if this is maintained anymore (last update: 2014-01-09, see here). You can switch to opcua-asyncio, which can address nodes with the browse services like this: myvar = await client.nodes.root.get_child(["0:Objects",..., "4:flag1"]) And here is the complete example
3
3
75,454,425
2023-2-15
https://stackoverflow.com/questions/75454425/access-blocked-project-has-not-completed-the-google-verification-process
I am building a simple script which polls some data and then updates a spreadsheet that I am giving to my client. (It is a small project and I don't need anything fancy.) So I created a Google Cloud project, enabled the Sheets API, and got a credential for a Desktop app. When I try to run the quickstart sample, I get an error: Access blocked: <my project name> has not completed the Google verification process I have tried googling and all the solutions seem to be oriented toward what a user should do if they see this, but I am the developer. I only need to grant my own self access to this spreadsheet, since my script is the only thing that will be changing it (I will also share it with the client). What do I do?
You need to add the account as a test user under the OAuth consent screen: 1.) From the dashboard go to APIs & Services and click OAuth consent screen 2.) Under Test users, click +Add Users. A menu will appear on the right panel. 3.) Input the user's email 4.) Reload the URL provided. Reference: https://www.youtube.com/watch?v=bkZns_VOB6I Note: I am not affiliated with the video or the owner of the YouTube channel
42
90
75,463,473
2023-2-15
https://stackoverflow.com/questions/75463473/why-are-the-balls-so-unstable
This is a physics simulation constraining balls in a circular area. I made the original code in Scratch and converted it to Python in Pygame. When I run the simulation, all the balls were shaking, compared to the original code. I constrained the velocity to be maximum 20, but it didn't help. I created substeps for each frame, but that wasn't helping either. import pygame import math import random screen = pygame.display.set_mode((16*120,9*120)) pygame.display.set_caption('') clock = pygame.time.Clock() mx = 16*60 my = 9*60 global x global y x = [] y = [] xv = [] yv = [] prevx = [] prevy = [] for i in range(20): x.append(random.randint(-200,200)) y.append(random.randint(-200,200)) xv.append(0) yv.append(0) prevx.append(0) prevy.append(0) r = 25 size = 300 sub = 20 maxvel = 20 global dist global dx global dy global d #points at something def pointat(px,py): global d dx = px-x[i] dy = py-y[i] if dy == 0: if dx < 0: d = -90 else: d = 90 else: if dy < 0: d = 180+math.atan(dx/dy) else: d = math.atan(dx/dy) while True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() exit() screen.fill((10,10,10)) pygame.draw.circle(screen,(100,100,100),(mx,my),size) for i in range(len(x)): pygame.draw.circle(screen,(255,255,255),(mx+x[i],my+y[i]*-1),r) for i in range(len(x)): prevx[i] = x[i] prevy[i] = y[i] for j in range(sub): x[i] += xv[i]/sub y[i] += yv[i]/sub y[i] -= 1/sub a = 0 for k in range(len(x)-1): if a == i: a += 1 dist = math.sqrt(((x[i]-x[a])*(x[i]-x[a]))+((y[i]-y[a])*(y[i]-y[a]))) if dist < r*2: pointat(x[a],y[a]) x[i] += (math.sin(d)*(dist-(r*2))) y[i] += (math.cos(d)*(dist-(r*2))) x[a] -= (math.sin(d)*(dist-(r*2))) y[a] -= (math.cos(d)*(dist-(r*2))) dist = math.sqrt((x[i]*x[i])+(y[i]*y[i])) if dist > size-r: pointat(0,0) x[i] += (math.sin(d)*(dist-(size-r))) y[i] += (math.cos(d)*(dist-(size-r))) a += 1 xv[i] = x[i]-prevx[i] yv[i] = y[i]-prevy[i] if xv[i] > maxvel: xv[i] = maxvel if xv[i] < -maxvel: xv[i] = -maxvel if yv[i] > maxvel: yv[i] = 
maxvel if yv[i] < -maxvel: yv[i] = -maxvel pygame.display.update() clock.tick(60)
The angles of the trigonometric functions in the math module are measured in radians, not degrees. d should not be a global variable, you can just return the angle from pointat: def pointat(px,py): dx = px-x[i] dy = py-y[i] if dy == 0: if dx < 0: d = -math.pi/2 else: d = math.pi/2 else: if dy < 0: d = math.pi+math.atan(dx/dy) else: d = math.atan(dx/dy) return d while True: # [...] for i in range(len(x)): # [...] for j in range(sub): # [...] for k in range(len(x)-1): if a == i: a += 1 dist = math.sqrt(((x[i]-x[a])*(x[i]-x[a]))+((y[i]-y[a])*(y[i]-y[a]))) if dist < r*2: d = pointat(x[a],y[a]) x[i] += (math.sin(d)*(dist-(r*2))) y[i] += (math.cos(d)*(dist-(r*2))) x[a] -= (math.sin(d)*(dist-(r*2))) y[a] -= (math.cos(d)*(dist-(r*2))) dist = math.sqrt((x[i]*x[i])+(y[i]*y[i])) if dist > size-r: d = pointat(0,0) x[i] += (math.sin(d)*(dist-(size-r))) y[i] += (math.cos(d)*(dist-(size-r))) a += 1 Overall, the code in the loops can be simplified a lot. I suggest refactoring your code as follows: while True: # [...] for i in range(len(x)): prevx[i] = x[i] prevy[i] = y[i] for j in range(sub): x[i] += xv[i]/sub y[i] += yv[i]/sub y[i] -= 1/sub for a in range(len(x)): if a == i: continue dx = x[i]-x[a] dy = y[i]-y[a] dist = math.sqrt(dx*dx + dy*dy) if dist < r*2 and dist != 0: x[i] -= dx/dist * (dist-r*2) y[i] -= dy/dist * (dist-r*2) x[a] += dx/dist * (dist-r*2) y[a] += dy/dist * (dist-r*2) dist = math.sqrt(x[i]*x[i] + y[i]*y[i]) if dist > size-r: x[i] -= x[i]/dist * (dist-(size-r)) y[i] -= y[i]/dist * (dist-(size-r)) xv[i] = max(-maxvel, min(maxvel, x[i]-prevx[i])) yv[i] = max(-maxvel, min(maxvel, y[i]-prevy[i]))
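The unit mix-up is easy to verify in isolation:

```python
import math

# math.sin expects radians; feeding it 90 "degrees" silently computes
# sin(90 radians), which is nowhere near 1.0.
print(math.sin(90))                # ~0.894 (sin of 90 radians)
print(math.sin(math.radians(90)))  # 1.0
```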
3
3
75,461,236
2023-2-15
https://stackoverflow.com/questions/75461236/dynamically-updating-type-hints-for-all-attributes-of-subclasses-in-pydantic
I am writing some library code where the purpose is to have a base data model that can be subclassed and used to implement data objects that correspond to the objects in a database. For this base model I am inheriting from pydantic.BaseModel. There is a bunch of stuff going on but for this example essentially what I have is a base model class that looks something like this: class Model(pydantic.BaseModel, metaclass=custom_complicated_metaclass): some_base_attribute: int some_other_base_attribute: str I'll get back to what this metaclass does in a moment. This would then be subclassed by some user of this library like this: class User(Model): age: int name: str birth_date: datetime.datetime Now, the metaclass that I am using hooks in to getattr and allows the following syntax: User.age > 18 Which then returns a custom Filter object that can be used to filter in a database, so essentially its using attribute access on the class directly (but not on its instances) as a way to create some syntactic sugar for user in filters and sorting. Now, the issue comes when I would like to allow a number of attributes to sort the results of a database query by. I can do something like the following: db = get_some_database(...) db.query(..., order_by=[User.age, User.birth_date] And this works fine, however I would like to be able to specify for each attribute in the order_by list if its ascending or descending order. The simplest syntax I could think of for this is to allow the use of - to invert the sort order like this: db = get_some_database(...) db.query(..., order_by=[User.age, -User.birth_date] This works, I just implement __neg__ on my custom filter class and its all good. Now finally, the issue I have is that because I defined User.birth_date to be a datetime object, it does not support the - operator which pycharm and mypy will complain about (and they will complain about it for any type that does not support -). 
They are kind of wrong since when accessing the attribute on the class like this instead of on an instance it actually will return an object that does support the - operator, but obviously they don't know this. If this would only be a problem inside my library code I wouldn't mind it so much, I could just ignore it or add a disable comment etc but since this false positive complaint will show up in end-user code I would really like to solve it. So my actual question essentially is, can I in any way (that the type checkers would also understand) force all the attributes that are implemented on subclasses of my baseclass to have whatever type they are assigned but also union with my custom type, so that these complaints dont show up? Or is there another way I can solve this?
Plugin required Not without a custom plugin I'm afraid (see e.g. Pydantic). You explicitly annotate the birth_date name within the class' scope to be of the type datetime, so the type checkers are correct to say that it does not support __neg__ (instance or not is irrelevant). Your metaclass magic will likely not be understandable by type checkers, at least not today. This simple example demonstrates it: class MyMeta(type): def __getattr__(self, _: object) -> int: return 1 class Model(metaclass=MyMeta): x: str print(Model.x) The output is 1 as expected, but adding reveal_type(Model.x) and running it through mypy reveals the type as builtins.str. This shows that mypy will choose the annotation in the class body regardless of what type MyMeta.__getattr__ returns. A union annotation is possible of course, but that will just cause problems on "the other end" because now mypy & Co. will never be sure, if the x attribute holds that special query-construct-type or the type of the actual value. Alternative A: Negation function If you are not willing to write your own type checker plugins (totally understandable), you could go with a less elegant workaround of a short-named negation function (in your query-sense). Something like this: from typing import Any class Special: val: int = 1 def __repr__(self) -> str: return f"Special<{self.val}>" class MyMeta(type): def __getattr__(self, _: object) -> Special: return Special() class Model(metaclass=MyMeta): x: str def neg(obj: Any) -> Any: obj.val *= -1 return obj print(Model.x, neg(Model.x)) Output: Special<1> Special<-1> Not as nice as just prepending a -, but much less headache with type checkers. Note that Any here is actually important because otherwise you would just move the problem to the neg call site. Alternative B: Custom subtypes Alternatively you could of course subclass things like datetime and tell users to use your custom class for annotations instead. 
Then within the class body you only add a type stub for type checkers to satisfy their __neg__ complaints. Something like this: from __future__ import annotations from datetime import datetime from typing import TYPE_CHECKING class Special: val: int = 1 def __repr__(self) -> str: return f"Special<{self.val}>" def __neg__(self) -> Special: obj = self.__class__() obj.val *= -1 return obj class DateTime(datetime): if TYPE_CHECKING: def __neg__(self) -> Special: pass class MyMeta(type): def __getattr__(self, _: object) -> Special: return Special() class Model(metaclass=MyMeta): x: DateTime print(Model.x, -Model.x) Same output as before and type checkers don't complain about the - even though at runtime the DateTime and datetime classes are the same (though not identical). Again, not ideal because you would have to ask users to use a non-standard type here, but depending on the situation that may be a better option for you. Another drawback of this option compared to the negation function is that you would have to define a subtype for each type you want to allow in annotations. I don't know, if there will be many of those that don't support __neg__, but you would have to do this for all of them and users would have to use your types.
4
4
75,458,034
2023-2-15
https://stackoverflow.com/questions/75458034/no-module-named-pil-after-installing-pillow-latest-version
I installed Pillow, following the documentation here: python3 -m pip install --upgrade pip python3 -m pip install --upgrade Pillow and imported Image like this: from PIL import Image Even though I upgraded Pillow to 9.4.0, I am getting the following error in VS Code: No module named 'PIL' I am using Python 3.9.7. I am not sure what I am doing wrong here: is it my Python version or is it VS Code? Can someone please enlighten me about this issue. I can see the packages installed in my venv folder, but cannot access them in the file I am working on (which is highlighted in yellow)
Add this code to your script. import sys print(sys.path) Ensure that your sys.path contains the path "$PROJECT/venv/lib/python3.9/site-packages" If it doesn't, your virtual environment is broken. Try this instead: Use this command to remove the current environment. rm -rf venv Create it again. python -m venv venv Install all your dependencies and run pip install --no-cache-dir Pillow Your environment should be functioning properly now.
5
1
75,387,339
2023-2-8
https://stackoverflow.com/questions/75387339/how-can-i-edit-modify-replace-text-in-an-existing-pdf-file
I am working on my final year project: a website where a user can come and read PDFs. I am adding some features such as converting currencies to the user's local currency. I am using Flask and PyMuPDF for my project, and I don't know how I can modify the text in a PDF. Can anyone help me with this problem? I heard here that using PyMuPDF or pypdf can work, but I didn't find any solution for replacing text.
Using the redaction facility of PyMuPDF is probably the adequate thing to do. The approach: Identify the location of the text to replace Erase the text and replace it using redactions Care must be taken to get hold of the original font, and whether or not the new text is longer / shorter than the original. import fitz # import PyMuPDF doc = fitz.open("myfile.pdf") page = doc[number] # page number 0-based # suppose you want to replace all occurrences of some text disliked = "delete this" better = "better text" hits = page.search_for(disliked) # list of rectangles where to replace for rect in hits: page.add_redact_annot(rect, better, fontname="helv", fontsize=11, align=fitz.TEXT_ALIGN_CENTER, ...) # more parameters page.apply_redactions(images=fitz.PDF_REDACT_IMAGE_NONE) # don't touch images doc.save("replaced.pdf", garbage=3, deflate=True) This works well with short text and medium quality expectations. With some more effort, the original font properties, color, font size, etc. can be identified to produce a close-to-perfect result. This code works well with PyMuPDF==1.24.12 (Python 3.12.5) when last tested.
4
13
75,438,152
2023-2-13
https://stackoverflow.com/questions/75438152/how-to-convert-time-durations-to-numeric-in-polars
Is there any built-in function in polars or a better way to convert time durations to numeric by defining the time resolution (e.g.: days, hours, minutes)? import polars as pl df = pl.DataFrame({ "from": ["2023-01-01", "2023-01-02", "2023-01-03"], "to": ["2023-01-04", "2023-01-05", "2023-01-06"], }) My current approach: # Convert to date and calculate the time difference df = ( df.with_columns( pl.col("to", "from").str.to_date().name.suffix("_date") ) .with_columns((pl.col("to_date") - pl.col("from_date")).alias("time_diff")) ) # Convert the time difference to int (in days) df = df.with_columns( ((pl.col("time_diff") / (24 * 60 * 60 * 1000)).cast(pl.Int8)).alias("time_diff_int") ) Output: shape: (3, 6) ┌────────────┬────────────┬────────────┬────────────┬──────────────┬───────────────┐ │ from ┆ to ┆ to_date ┆ from_date ┆ time_diff ┆ time_diff_int │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ date ┆ date ┆ duration[ms] ┆ i8 │ ╞════════════╪════════════╪════════════╪════════════╪══════════════╪═══════════════╡ │ 2023-01-01 ┆ 2023-01-04 ┆ 2023-01-04 ┆ 2023-01-01 ┆ 3d ┆ 3 │ │ 2023-01-02 ┆ 2023-01-05 ┆ 2023-01-05 ┆ 2023-01-02 ┆ 3d ┆ 3 │ │ 2023-01-03 ┆ 2023-01-06 ┆ 2023-01-06 ┆ 2023-01-03 ┆ 3d ┆ 3 │ └────────────┴────────────┴────────────┴────────────┴──────────────┴───────────────┘
The dt accessor lets you obtain individual components, is that what you're looking for? df.select( total_days = pl.col.time_diff.dt.total_days(), total_hours = pl.col.time_diff.dt.total_hours(), total_minutes = pl.col.time_diff.dt.total_minutes() ) shape: (3, 3) ┌────────────┬─────────────┬───────────────┐ │ total_days ┆ total_hours ┆ total_minutes │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞════════════╪═════════════╪═══════════════╡ │ 3 ┆ 72 ┆ 4320 │ │ 3 ┆ 72 ┆ 4320 │ │ 3 ┆ 72 ┆ 4320 │ └────────────┴─────────────┴───────────────┘ docs: Temporal API reference
14
7