Columns:
question_id: int64 (values 59.5M–79.4M)
creation_date: string (lengths 8–10)
link: string (lengths 60–163)
question: string (lengths 53–28.9k)
accepted_answer: string (lengths 26–29.3k)
question_vote: int64 (values 1–410)
answer_vote: int64 (values -9–482)
74,562,541
2022-11-24
https://stackoverflow.com/questions/74562541/append-new-column-to-a-snowpark-dataframe-with-simple-string
I've started using python Snowpark and no doubt missing obvious answers based on being unfamiliar to the syntax and documentation. I would like to do a very simple operation: append a new column to an existing Snowpark DataFrame and assign with a simple string. Any pointers to the documentation to what I presume is readily achievable would be appreciated.
You can do this by using the function with_column in combination with the lit function. The with_column function needs a Column expression and for a literal value this can be made with the lit function. see documentation here: https://docs.snowflake.com/en/developer-guide/snowpark/reference/python/api/snowflake.snowpark.functions.lit.html from snowflake.snowpark.functions import lit snowpark_df = snowpark_df.with_column('NEW_COL', lit('your_string'))
3
4
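For the Snowpark answer above, a minimal end-to-end sketch; the connection_parameters dict is a hypothetical placeholder for your account settings, while with_column and lit are the calls the answer describes:
from snowflake.snowpark import Session
from snowflake.snowpark.functions import lit

# connection_parameters is assumed to hold account/user/password etc.
session = Session.builder.configs(connection_parameters).create()

df = session.create_dataframe([[1], [2], [3]], schema=["ID"])
df = df.with_column("NEW_COL", lit("your_string"))  # append a constant string column
df.show()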
74,607,741
2022-11-28
https://stackoverflow.com/questions/74607741/how-to-create-an-array-of-na-or-null-values-in-python
This is easy to do in R and I am wondering if it is straight forward in Python and I am just missing something, but how do you create a vector of NaN values and Null values in Python? I am trying to do this using the np.full function. R Code: vec <- vector("character", 15) vec[1:15] <- NA vec Python Code unknowns = np.full(shape = 5, fill_value = ???, dtype = 'str') '''test if fill value worked or not''' random.seed(1177) categories = np.random.choice(['web', 'software', 'hardware', 'biotech'], size = 15, replace = True) categories = np.concatenate([categories, unknowns]) example = pd.DataFrame(data = {'categories': categories}) example['transformed'] = [ x if pd.isna(x) == False else 'unknown' for x in example['categories']] print(example['transformed'].value_counts()) This should lead to 5 counts of unknown in the value counts total. Ideally I would like to know how to write this fill_value for NaN and Null and know whether it differs for variable types. I have tried np.nan with and without the string data type. I have tried None and Null with and without quotes. I cannot think of anything else to try and I am starting to wonder if it is possible. Thank you in advance and I apologize if this question is already addressed and for my lack of knowledge in this area.
You can use either None or np.nan to create an array of missing values in Python, like so: np.full(shape=5, fill_value=None) np.full(shape=5, fill_value=np.nan) Back to your example, this works just fine: import numpy as np import pandas as pd unknowns = np.full(shape=5, fill_value=None) categories = np.random.choice(['web', 'software', 'hardware', 'biotech'], size = 15, replace = True) categories = np.concatenate([categories, unknowns]) example = pd.DataFrame(data = {'categories': categories}) example['transformed'] = [ x if pd.isna(x) == False else 'unknown' for x in example['categories']] print(example['transformed'].value_counts()) Lastly, that list comprehension is inefficient. You want to avoid loops and list comprehensions when using pandas on large data; a vectorised replacement runs much faster and also handles both None and NaN: example['transformed'] = example['categories'].fillna('unknown')
3
5
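Building on the answer above, a small self-contained sketch showing both fill values and the vectorised way to replace the missing entries (fillna handles both None and np.nan):
import numpy as np
import pandas as pd

nan_arr = np.full(shape=5, fill_value=np.nan)   # float64 array of NaN
none_arr = np.full(shape=5, fill_value=None)    # object array of None

categories = np.concatenate([np.array(['web', 'biotech', 'software']), none_arr])
example = pd.DataFrame({'categories': categories})

# Vectorised replacement of the missing values, no list comprehension needed
example['transformed'] = example['categories'].fillna('unknown')
print(example['transformed'].value_counts())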
74,591,552
2022-11-27
https://stackoverflow.com/questions/74591552/hypothesis-create-column-with-pd-datetime-dtype-in-given-test-dataframe
I want to test whether a certain method can handle different dates in a pandas dataframe, which it takes as an argument. The following example should clarify what kind of setup I want. In the example column('Date', dtype=pd.datetime) does not work for creating a date column in the test dataframe: from hypothesis import given from hypothesis.extra.pandas import column, data_frames import pandas as pd from unittest import TestCase class TestExampleClass(TestCase): @given(data_frames([column('A', dtype=str), column('B', dtype=int),column('Date', dtype=pd.datetime)])) def test_example_test_method(self, dataframe): self.assertTrue(True) Any ideas? I am aware of How to create a datetime indexed pandas DataFrame with hypothesis library?, but it did not help for my specific case.
Use dtype="datetime64[ns]" instead of dtype=pd.datetime. I've opened an issue to look into this in more detail and give helpful error messages when passed pd.datetime, datetime, or a unitless datetime dtype; this kind of confusion isn't the user experience we want to offer!
3
5
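A sketch of the asker's test with the dtype swapped out as the answer suggests; the assertion on the generated column's dtype is only illustrative:
from unittest import TestCase

import pandas as pd
from hypothesis import given
from hypothesis.extra.pandas import column, data_frames

class TestExampleClass(TestCase):
    # 'datetime64[ns]' is the unit-qualified dtype string pandas/numpy expect
    @given(data_frames([column('A', dtype=str),
                        column('B', dtype=int),
                        column('Date', dtype='datetime64[ns]')]))
    def test_example_test_method(self, dataframe):
        self.assertTrue(pd.api.types.is_datetime64_any_dtype(dataframe['Date']))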
74,589,665
2022-11-27
https://stackoverflow.com/questions/74589665/how-to-print-rgb-colour-to-the-terminal
Can the ANSI escape code SGR 38 - Set foreground color with argument 2;r;g;b be used with the print function? Example of use with code 33 is of course OKBLUE = '\033[94m' I would like to use 038 instead to be able to use any RGB color. Is that possible? I tried GREEN = '\038[2;0;153;0m' ENDC = '\033[0m' print(f"{GREEN} some text {ENDC}") Expected to change the color of "some text" to green
To use an RGB colour space within the terminal* the following escape sequences can be used: # Print Hello! in lime green text (the 38 indicates a foreground colour). print('\033[38;2;146;255;12mHello!\033[0m') # Print Hello! in white text on a fuchsia background. print('\033[48;2;246;45;112mHello!\033[0m') Breaking down \033[38;2;146;255;12mHello!\033[0m: \033 is the escape character; 38 is the code to instruct the setting of an 8- or 24-bit foreground (text) colour; 2 indicates the following sequence is RGB; 146;255;12 are the R;G;B values; Hello! is the text to print; and the trailing \033[0m resets the colour to default. The use of 38;2 indicates an RGB (foreground) sequence is to follow, whereas 38;5 indicates the following (foreground) value comes from the 256-colour table. To clarify what appears to be a misconception, \033 (octal) or \x1b (hexadecimal) corresponds to the ASCII table's ESC character, which is used here to introduce an escape sequence for terminal text colouring, whereas the 38 instructs that the following 8- or 24-bit colour be set as the foreground (after the escape sequence has been introduced). Additionally, 48 can be used to set the background colour, as demonstrated in the code example above. *Providing the terminal emulator supports 24-bit colour sequences (e.g. Xterm, GNOME Terminal, etc.). See the Wikipedia article on ANSI escape codes, which explains 24-bit (RGB) colour in greater depth.
5
9
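A small helper wrapping the sequences from the answer above (works only in terminals with 24-bit colour support):
def rgb_text(text, r, g, b):
    # 38;2;R;G;B sets a 24-bit foreground colour; \033[0m resets it
    return f"\033[38;2;{r};{g};{b}m{text}\033[0m"

def rgb_background(text, r, g, b):
    # 48;2;R;G;B sets a 24-bit background colour instead
    return f"\033[48;2;{r};{g};{b}m{text}\033[0m"

print(rgb_text("some text", 0, 153, 0))          # green text, as in the question
print(rgb_background("some text", 246, 45, 112)) # fuchsia background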
74,563,548
2022-11-24
https://stackoverflow.com/questions/74563548/selenium-driver-hanging-on-os-alert
I'm using Selenium in Python (3.11) with a Firefox (107) driver. With the driver I navigate to a page which, after several actions, triggers an OS alert (prompting me to launch a program). When this alert pops up, the driver hangs, and only once it is closed manually does my script continue to run. I have tried driver.quit(), as well as using os.system("taskkill /F /pid " + str(process.ProcessId)) with the driver's PID, with no luck. I have managed to prevent the pop-up from popping up with options.set_preference("security.external_protocol_requires_permission", False) but the code still hangs the same way at the point where the popup would have popped up. I don't care whether the program launches or not, I just need my code to not require human intervention at this key point. here is a minimal example of what I currently have: from selenium.webdriver import ActionChains, Keys from selenium.webdriver.firefox.options import Options from seleniumwire import webdriver options = Options() options.binary_location = r'C:\Program Files\Mozilla Firefox\firefox.exe' options.set_preference("security.external_protocol_requires_permission", False) driver = webdriver.Firefox(options=options) # Go to the page driver.get(url) user_field = driver.find_element("id", "UserName") user_field.send_keys(username) pass_field = driver.find_element("id", "Password") pass_field.send_keys(password) pass_field.send_keys(Keys.ENTER) #this is the point where the pop up appears reqs = driver.requests print("Success!") driver.quit()
There are some Firefox preferences you can try (see the sketch after this entry): profile = webdriver.FirefoxProfile() profile.set_preference('dom.push.enabled', False) # or profile = webdriver.FirefoxProfile() profile.set_preference('dom.webnotifications.enabled', False) profile.set_preference('dom.webnotifications.serviceworker.enabled', False)
5
3
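The same preferences from the answer above can be set on Options (Selenium 4 style, matching the asker's setup) rather than on a FirefoxProfile; a sketch:
from selenium import webdriver
from selenium.webdriver.firefox.options import Options

options = Options()
# Suppress push/web notifications and the external-protocol permission prompt
options.set_preference('dom.push.enabled', False)
options.set_preference('dom.webnotifications.enabled', False)
options.set_preference('dom.webnotifications.serviceworker.enabled', False)
options.set_preference('security.external_protocol_requires_permission', False)

driver = webdriver.Firefox(options=options)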
74,599,665
2022-11-28
https://stackoverflow.com/questions/74599665/how-to-use-template-when-creating-new-python-file-on-vscode
In my Python files I always start with the following lines: import sys import matplotlib as mpl sys.path.append('C:\\MyPackages') rc_fonts = { "text.usetex": True, 'font.size': 20, 'text.latex.preamble': r"\usepackage{bm}", } mpl.rcParams.update(rc_fonts) Is there a way to indicate to VS Code that each time I create a new .py file, it should start with the previous lines? For now, I copy/paste a 'template.py' but this is not really convenient. And because I work on Windows, I also tried to add 'C:\MyPackages' to the user Path variable but it didn't work.
For this kind of repetitive task we can use snippets in VS Code. Step 1: Hit shift+ctrl+p to open the command palette. Step 2: Select Snippets: Configure User Snippets. Step 3: Select Python. Step 4: Paste the code below into the python.json file and change the prefix value, e.g. "prefix": "hedwin", so that when you type hedwin VS Code will paste our code snippet. "": { "prefix": "", "body": [ "import sys", "import matplotlib as mpl", "sys.path.append('C:\\\\MyPackages')", "", "rc_fonts = {", " \"text.usetex\": True,", " 'font.size': 20,", " 'text.latex.preamble': r\"\\usepackage{bm}\",", "}", "mpl.rcParams.update(rc_fonts)" ], "description": "" } For generating snippets: snippet generator
5
6
74,599,713
2022-11-28
https://stackoverflow.com/questions/74599713/merge-two-dictionaries-in-python
I'm trying to merge two dictionaries based on key value. However, I'm not able to achieve it. Below is the way I tried solving. dict1 = {4: [741, 114, 306, 70], 2: [77, 325, 505, 144], 3: [937, 339, 612, 100], 1: [52, 811, 1593, 350]} dict2 = {1: 'A', 2: 'B', 3: 'C', 4: 'D'} My resultant dictionary should be output = {'D': [741, 114, 306, 70], 'B': [77, 325, 505, 144], 'C': [937, 339, 612, 100], 'A': [52, 811, 1593, 350]} My code def mergeDictionary(dict_obj1, dict_obj2): dict_obj3 = {**dict_obj1, **dict_obj2} for key, value in dict_obj3.items(): if key in dict_obj1 and key in dict_obj2: dict_obj3[key] = [value , dict_obj1[key]] return dict_obj3 dict_3 = mergeDictionary(dict1, dict2) But I'm getting this as output dict_3={4: ['D', [741, 114, 306, 70]], 2: ['B', [77, 325, 505, 144]], 3: ['C', [937, 339, 612, 100]], 1: ['A', [52, 811, 1593, 350]]}
Use a simple dictionary comprehension: output = {dict2[k]: v for k,v in dict1.items()} Output: {'D': [741, 114, 306, 70], 'B': [77, 325, 505, 144], 'C': [937, 339, 612, 100], 'A': [52, 811, 1593, 350]}
15
14
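A runnable version of the accepted one-liner above, plus a hypothetical variant using dict.get in case some keys of dict1 have no label in dict2:
dict1 = {4: [741, 114, 306, 70], 2: [77, 325, 505, 144],
         3: [937, 339, 612, 100], 1: [52, 811, 1593, 350]}
dict2 = {1: 'A', 2: 'B', 3: 'C', 4: 'D'}

# Rename the keys of dict1 using dict2 as a lookup table
output = {dict2[k]: v for k, v in dict1.items()}
print(output)

# Variant: keep the original key instead of raising KeyError when it is unmapped
safe_output = {dict2.get(k, k): v for k, v in dict1.items()}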
74,599,116
2022-11-28
https://stackoverflow.com/questions/74599116/pandas-dataframe-assign-how-to-refer-to-newly-created-columns
I'm trying to use pandas.DataFrame.assign in Pandas 1.5.2. Let's consider this code, for instance: df = pd.DataFrame({"col1":[1,2,3], "col2": [4,5,6]}) df.assign( test1="hello", test2=df.test1 + " world" ) I'm facing this error: AttributeError: 'DataFrame' object has no attribute 'test1' However, it's explicitly stated in the documentation that: Assigning multiple columns within the same assign is possible. Later items in **kwargs may refer to newly created or modified columns in df; items are computed and assigned into df in order. So I don't understand: how can I refer to newly created or modified columns in df when calling assign?
You can pass a callable to assign; here, use a lambda to reference the DataFrame. From the documentation: Parameters: **kwargs : dict of {str: callable or Series}. The column names are keywords. If the values are callable, they are computed on the DataFrame and assigned to the new columns. The callable must not change the input DataFrame (though pandas doesn't check it). If the values are not callable (e.g. a Series, scalar, or array), they are simply assigned. df = pd.DataFrame({"col1":[1,2,3], "col2": [4,5,6]}) df.assign( test1="hello", test2=lambda d: d.test1 + " world" ) Output: col1 col2 test1 test2 0 1 4 hello hello world 1 2 5 hello hello world 2 3 6 hello hello world
3
3
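Extending the answer's example slightly: each callable receives the DataFrame as it exists at that point in the chain, so later columns can build on earlier ones (test3 here is just an illustrative extra column):
import pandas as pd

df = pd.DataFrame({"col1": [1, 2, 3], "col2": [4, 5, 6]})

out = df.assign(
    test1="hello",
    test2=lambda d: d.test1 + " world",   # sees the test1 column created just above
    test3=lambda d: d.test2.str.upper(),  # may also refer to test2
)
print(out)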
74,567,219
2022-11-24
https://stackoverflow.com/questions/74567219/how-do-i-get-python-to-send-as-many-concurrent-http-requests-as-possible
I'm trying to send HTTPS requests as quickly as possible. I know this would have to be concurrent requests due to my goal being 150 to 500+ requests a second. I've searched everywhere, but can't find a Python 3.11+ answer or one that doesn't give me errors. I'm trying to avoid AIOHTTP as the rigmarole of setting it up was a pain, and it didn't even work. The input should be an array of URLs and the output an array of the HTML strings.
This works, getting around 250+ requests a second. This solution does work on Windows 10. You may have to pip install requests (concurrent.futures is part of the standard library). import time import requests import concurrent.futures start = int(time.time()) # get time before the requests are sent urls = [] # input URLs/IPs array responses = [] # output content of each request as string in an array # create a list of 5000 sites to test with for y in range(5000): urls.append("https://example.com") def send(url): responses.append(requests.get(url).content) with concurrent.futures.ThreadPoolExecutor(max_workers=10000) as executor: futures = [] for url in urls: futures.append(executor.submit(send, url)) end = int(time.time()) # get time after everything finishes print(str(round(len(urls)/(end - start),0))+"/sec") # average requests per second Output: 286.0/sec Note: If your code requires something extremely time dependent, replace the middle part with this: with concurrent.futures.ThreadPoolExecutor(max_workers=10000) as executor: futures = [] for url in urls: futures.append(executor.submit(send, url)) for future in concurrent.futures.as_completed(futures): responses.append(future.result()) This is a modified version of what this site showed in an example. The secret sauce is the max_workers=10000. Otherwise, it would average about 80/sec. Although, when setting it beyond 1000, there wasn't any further boost in speed.
7
4
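A tidier sketch of the same thread-pool idea that collects results via as_completed and uses a bounded worker count; the URLs and timeout below are placeholders:
import concurrent.futures
import requests

def fetch(url):
    # Return the page body as text; adjust the timeout to taste
    return requests.get(url, timeout=10).text

def fetch_all(urls, max_workers=50):
    pages = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = [executor.submit(fetch, url) for url in urls]
        for future in concurrent.futures.as_completed(futures):
            pages.append(future.result())
    return pages

# html_pages = fetch_all(["https://example.com"] * 100)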
74,579,273
2022-11-26
https://stackoverflow.com/questions/74579273/indexerror-tuple-index-out-of-range-when-creating-pyspark-dataframe
I want to create test data in a pyspark dataframe but I always get the same "tuple index out of range" error. I do not get this error when reading a csv. Would appreciate any thoughts on why I'm getting this error. The first thing I tried was create a pandas dataframe and convert it to a pyspark dataframe: columns = ["id","col_"] data = [("1", "blue"), ("2", "green"), ("3", "purple"), ("4", "red"), ("5", "yellow")] df = pd.DataFrame(data=data, columns=columns) sparkdf = spark.createDataFrame(df) sparkdf.show() output: PicklingError: Could not serialize object: IndexError: tuple index out of range I get the same error if I try to create the dataframe from RDD per SparkbyExamples.com instructions: rdd = spark.sparkContext.parallelize(data) sparkdf = spark.createDataFrame(rdd).toDF(*columns) sparkdf.show() I also tried the following and got the same error: import pyspark.pandas as ps df1 = ps.from_pandas(df) Here is the full error when running the above code: IndexError Traceback (most recent call last) File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\serializers.py:458, in CloudPickleSerializer.dumps(self, obj) 457 try: --> 458 return cloudpickle.dumps(obj, pickle_protocol) 459 except pickle.PickleError: File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle_fast.py:73, in dumps(obj, protocol, buffer_callback) 70 cp = CloudPickler( 71 file, protocol=protocol, buffer_callback=buffer_callback 72 ) ---> 73 cp.dump(obj) 74 return file.getvalue() File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle_fast.py:602, in CloudPickler.dump(self, obj) 601 try: --> 602 return Pickler.dump(self, obj) 603 except RuntimeError as e: File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle_fast.py:692, in CloudPickler.reducer_override(self, obj) 691 elif isinstance(obj, types.FunctionType): --> 692 return self._function_reduce(obj) 693 else: 694 # fallback to save_global, including the Pickler's 695 # dispatch_table File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle_fast.py:565, in CloudPickler._function_reduce(self, obj) 564 else: --> 565 return self._dynamic_function_reduce(obj) File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle_fast.py:546, in CloudPickler._dynamic_function_reduce(self, func) 545 newargs = self._function_getnewargs(func) --> 546 state = _function_getstate(func) 547 return (types.FunctionType, newargs, state, None, None, 548 _function_setstate) File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle_fast.py:157, in _function_getstate(func) 146 slotstate = { 147 "__name__": func.__name__, 148 "__qualname__": func.__qualname__, (...) 154 "__closure__": func.__closure__, 155 } --> 157 f_globals_ref = _extract_code_globals(func.__code__) 158 f_globals = {k: func.__globals__[k] for k in f_globals_ref if k in 159 func.__globals__} File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle.py:334, in _extract_code_globals(co) 331 # We use a dict with None values instead of a set to get a 332 # deterministic order (assuming Python 3.6+) and avoid introducing 333 # non-deterministic pickle bytes as a results. 
--> 334 out_names = {names[oparg]: None for _, oparg in _walk_global_ops(co)} 336 # Declaring a function inside another one using the "def ..." 337 # syntax generates a constant code object corresponding to the one 338 # of the nested function's As the nested function may itself need 339 # global variables, we need to introspect its code, extract its 340 # globals, (look for code object in it's co_consts attribute..) and 341 # add the result to code_globals File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle.py:334, in <dictcomp>(.0) 331 # We use a dict with None values instead of a set to get a 332 # deterministic order (assuming Python 3.6+) and avoid introducing 333 # non-deterministic pickle bytes as a results. --> 334 out_names = {names[oparg]: None for _, oparg in _walk_global_ops(co)} 336 # Declaring a function inside another one using the "def ..." 337 # syntax generates a constant code object corresponding to the one 338 # of the nested function's As the nested function may itself need 339 # global variables, we need to introspect its code, extract its 340 # globals, (look for code object in it's co_consts attribute..) and 341 # add the result to code_globals IndexError: tuple index out of range During handling of the above exception, another exception occurred: PicklingError Traceback (most recent call last) Cell In [67], line 2 1 rdd = spark.sparkContext.parallelize(data) ----> 2 df1 = ps.from_pandas(df) 3 sparkdf = spark.createDataFrame(rdd).toDF(*columns) 4 #Create a dictionary from each row in col_ File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\pandas\namespace.py:153, in from_pandas(pobj) 151 return Series(pobj) 152 elif isinstance(pobj, pd.DataFrame): --> 153 return DataFrame(pobj) 154 elif isinstance(pobj, pd.Index): 155 return DataFrame(pd.DataFrame(index=pobj)).index File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\pandas\frame.py:450, in DataFrame.__init__(self, data, index, columns, dtype, copy) 448 else: 449 pdf = pd.DataFrame(data=data, index=index, columns=columns, dtype=dtype, copy=copy) --> 450 internal = InternalFrame.from_pandas(pdf) 452 object.__setattr__(self, "_internal_frame", internal) ... 466 msg = "Could not serialize object: %s: %s" % (e.__class__.__name__, emsg) 467 print_exec(sys.stderr) --> 468 raise pickle.PicklingError(msg) PicklingError: Could not serialize object: IndexError: tuple index out of range
After doing some reading I checked https://pyreadiness.org/3.11 and it looks like the latest version of Python is not yet supported by PySpark. I was able to resolve this problem by downgrading to Python 3.9.
7
17
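A tiny guard that makes the version mismatch obvious before any Spark code runs; the 3.11 cutoff reflects the PySpark release discussed above and may change in newer releases:
import sys
import pyspark

print("Python :", sys.version.split()[0])
print("PySpark:", pyspark.__version__)

# Fail fast instead of hitting an opaque PicklingError later
assert sys.version_info < (3, 11), "This PySpark release does not support Python 3.11; use 3.9/3.10"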
74,595,035
2022-11-28
https://stackoverflow.com/questions/74595035/filling-nan-on-conditions
I have the following input data: df = pd.DataFrame({"ID" : [1, 1, 1, 2, 2, 2, 2], "length" : [0.7, 0.7, 0.7, 0.8, 0.6, 0.6, 0.7], "height" : [7, 9, np.nan, 4, 8, np.nan, 5]}) df ID length height 0 1 0.7 7 1 1 0.7 9 2 1 0.7 np.nan 3 2 0.8 4 4 2 0.6 8 5 2 0.6 np.nan 6 2 0.7 5 I want to be able to fill the NaN if a group of "ID" all have the same "length", fill with the maximum "height" in that group of "ID", else fill with the "height" that correspond to the maximum length in that group. Required Output: ID length height 0 1 0.7 7 1 1 0.7 9 2 1 0.7 9 3 2 0.8 4 4 2 0.6 8 5 2 0.6 4 6 2 0.7 5 Thanks.
You could try sort_values, then use groupby with transform('last'); last returns the last non-NaN value in each group. df.height.fillna(df.sort_values(['length','height']).groupby(['ID'])['height'].transform('last'),inplace=True) df Out[296]: ID length height 0 1 0.7 7.0 1 1 0.7 9.0 2 1 0.7 9.0 3 2 0.8 4.0 4 2 0.6 8.0 5 2 0.6 4.0 6 2 0.7 5.0
4
2
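An equivalent sketch of the accepted idea above, written with an intermediate per-ID lookup so the logic is easier to follow (last() skips NaN within each group):
import numpy as np
import pandas as pd

df = pd.DataFrame({"ID": [1, 1, 1, 2, 2, 2, 2],
                   "length": [0.7, 0.7, 0.7, 0.8, 0.6, 0.6, 0.7],
                   "height": [7, 9, np.nan, 4, 8, np.nan, 5]})

# For each ID, take the height on the row with the largest length
# (ties broken by the largest height), then use it to fill the gaps.
fill_values = (df.sort_values(['length', 'height'])
                 .groupby('ID')['height']
                 .last())
df['height'] = df['height'].fillna(df['ID'].map(fill_values))
print(df)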
74,585,126
2022-11-26
https://stackoverflow.com/questions/74585126/no-output-in-vs-code-using-python-logging-module
I'm on Windows 10 using VS Code 1.73.1 and am retrofitting my program with the Python logging module. My program is generally functioning. The main thing I did is change all the print statements to logger.debug and I know the variable formatting needs to be changed from {} to %s. I also added the encoding flag to my file handler. A couple more things: When I run it from the VS Code command line, it does create a file with debug statements but does not display any output to the Terminal, Debug Console or Output windows. When I use the F5 function to run it, it does NOT create a file or display any console output anywhere. print('something') works and displays in either the Terminal or Debug Console depending on the launch.json setting, but logger.debug('something') does not display in either console. My request/question: Using logger.debug, why is nothing printing to console, and no file is even created? I also tried running the code below in its own separate .py file with the same problem: print displays to the Debug Console, but logger.debug does not. Logging code block: import logging # Create a custom logger logger = logging.getLogger(__name__) logger.setLevel(logging.DEBUG) # Streaming Handler c_handler = logging.StreamHandler() c_handler.setLevel(logging.INFO) c_format = logging.Formatter('%(name)s - %(levelname)s - %(message)s') c_handler.setFormatter(c_format) logger.addHandler(c_handler) # File Handler f_handler = logging.FileHandler('download_shelf.log', encoding='utf-8') f_handler.setLevel(logging.DEBUG) f_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') f_handler.setFormatter(f_format) logger.addHandler(f_handler) logger.debug(f"Log messages:") ### `print` statement displays in either the Terminal ### or Debug Console depending on `launch.json` setting I made sure my launch.json file was set up in the local .vscode folder (see bottom code section). However, I did have to manually create this file, then restarted VS Code. Information I found said there was a link in the Debug panel, which I know I've seen before. It wasn't there for this file. My directory structure: toplevel/ .vscode/ launch.json src/ subdirectory/ myfile.py My launch.json file contains "console": "internalConsole" { "configurations": [ {"name":"Python: Current File", "type":"python", "request":"launch", "program":"${file}", // "console":"integratedTerminal", "console": "internalConsole", "justMyCode":true }, ] }
After using a different logger example, I realized the problem was that the console handler's level in my first example was set to INFO rather than DEBUG, so the debug messages never showed up. Oops... import logging logger = logging.getLogger('simple_example') logger.setLevel(logging.DEBUG) console = logging.StreamHandler() console.setLevel(level=logging.DEBUG) formatter = logging.Formatter('%(levelname)s : %(message)s') console.setFormatter(formatter) logger.addHandler(console) logger.debug('simple message')
4
5
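A minimal demonstration of the level interplay the answer above describes: a record only reaches the console if both the logger's level and the handler's level let it through:
import logging

logger = logging.getLogger('level_demo')
logger.setLevel(logging.DEBUG)              # the logger must allow DEBUG records...

console = logging.StreamHandler()
console.setLevel(logging.DEBUG)             # ...and so must the handler
console.setFormatter(logging.Formatter('%(levelname)s : %(message)s'))
logger.addHandler(console)

logger.debug('visible: logger and handler are both at DEBUG')

console.setLevel(logging.INFO)
logger.debug('invisible: the handler now filters DEBUG records out')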
74,546,287
2022-11-23
https://stackoverflow.com/questions/74546287/overlaying-the-ground-truth-mask-on-an-image
In my project, I extracted frames from a video and in another folder I have ground truth for each frame. I want to map the ground truth image of each frame of a video (in my case, it is saliency prediction ground truth) on its related frame image. As an example I have the following frame: And the following is ground truth mask: and the following is the mapping of ground truth on the frame. How can I do that. Also, I have two folders that inside each of them, there are several folders that inside each of them the there are stored frames. How can I do this operation with these batch data? This is the hierarchy of my folders: frame_folder: folder_1, folder_2, ...... ├── frames │ ├── 601 (601 and 602 and etc are folders that in the inside there are image frames that their name is like 0001.png,0002.png, ...) │ ├── 602 . . . │ └── 700 ├── ground truth │ ├── 601 (601 and 602 and etc are folders that in the inside there are ground truth masks that their name is like 0001.png,0002.png, ...) │ ├── 602 . . . │ └── 700 Update: Using the answer proposed by @hkchengrex , I faced with an error. When there is only one folder in the paths, it works well but when I put several folders (frames of different videos) based on the question I face with the following error. the details are in below: multiprocessing.pool.RemoteTraceback: """ Traceback (most recent call last): File "/home/user/miniconda3/envs/vtn/lib/python3.10/multiprocessing/pool.py", line 125, in worker result = (True, func(*args, **kwds)) TypeError: process_video() takes 1 positional argument but 6 were given """ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/user/Video_processing/Saliency_mapping.py", line 69, in <module> pool.apply(process_video, videos) File "/home/user/miniconda3/envs/vtn/lib/python3.10/multiprocessing/pool.py", line 357, in apply return self.apply_async(func, args, kwds).get() File "/home/user/miniconda3/envs/vtn/lib/python3.10/multiprocessing/pool.py", line 771, in get raise self._value TypeError: process_video() takes 1 positional argument but 6 were given
I need to do similar things pretty often. In my favorite StackOverflow fashion, here is a script that you can copy and paste. I hope the code itself is self-explanatory. There are a few things that you can tune and try (e.g., color maps, overlay styles). It uses multiprocessing.Pool for faster batch-processing, resizes the mask to match the shape of the image, assumes the mask is in .png format, and depends on the file structure that you posted. import os from os import path import cv2 import numpy as np from argparse import ArgumentParser from multiprocessing import Pool def create_overlay(image, mask): """ image: H*W*3 numpy array mask: H*W numpy array If dimensions do not match, the mask is upsampled to match that of the image Returns a H*W*3 numpy array """ h, w = image.shape[:2] mask = cv2.resize(mask, dsize=(w,h), interpolation=cv2.INTER_CUBIC) # color options: https://docs.opencv.org/4.x/d3/d50/group__imgproc__colormap.html mask_color = cv2.applyColorMap(mask, cv2.COLORMAP_HOT).astype(np.float32) mask = mask[:, :, None] # create trailing dimension for broadcasting mask = mask.astype(np.float32)/255 # different other options that you can use to merge image/mask overlay = (image*(1-mask)+mask_color*mask).astype(np.uint8) # overlay = (image*0.5 + mask_color*0.5).astype(np.uint8) # overlay = (image + mask_color).clip(0,255).astype(np.uint8) return overlay def process_video(video_name): """ Processing frames in a single video """ vid_image_path = path.join(image_path, video_name) vid_mask_path = path.join(mask_path, video_name) vid_output_path = path.join(output_path, video_name) os.makedirs(vid_output_path, exist_ok=True) frames = sorted(os.listdir(vid_image_path)) for f in frames: image = cv2.imread(path.join(vid_image_path, f)) mask = cv2.imread(path.join(vid_mask_path, f.replace('.jpg','.png')), cv2.IMREAD_GRAYSCALE) overlay = create_overlay(image, mask) cv2.imwrite(path.join(vid_output_path, f), overlay) parser = ArgumentParser() parser.add_argument('--image_path') parser.add_argument('--mask_path') parser.add_argument('--output_path') args = parser.parse_args() image_path = args.image_path mask_path = args.mask_path output_path = args.output_path if __name__ == '__main__': videos = sorted( list(set(os.listdir(image_path)).intersection( set(os.listdir(mask_path)))) ) print(f'Processing {len(videos)} videos.') pool = Pool() pool.map(process_video, videos) print('Done.') Output: EDIT: Made it work on Windows; changed pool.apply to pool.map.
4
8
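For a single frame/mask pair, a simpler blend with cv2.addWeighted can be enough; the file names here are hypothetical placeholders:
import cv2

frame = cv2.imread('frame.png')                        # BGR frame
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)    # saliency ground-truth mask
mask = cv2.resize(mask, (frame.shape[1], frame.shape[0]))

heat = cv2.applyColorMap(mask, cv2.COLORMAP_HOT)       # colour-map the mask
overlay = cv2.addWeighted(frame, 0.5, heat, 0.5, 0)    # simple 50/50 blend
cv2.imwrite('overlay.png', overlay)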
74,592,062
2022-11-27
https://stackoverflow.com/questions/74592062/why-rust-hashmap-is-slower-than-python-dict
I wrote scripts performing the same computations with a dict in Python and a HashMap in Rust. Somehow the Python version is more than 10x faster. How does that happen? Rust script: use std::collections::HashMap; use std::time::Instant; fn main() { let now = Instant::now(); let mut h = HashMap::new(); for i in 0..1000000 { h.insert(i, i); } let elapsed = now.elapsed(); println!("Elapsed: {:.2?}", elapsed); } Output: Elapsed: 828.73ms Python script: import time start = time.time() d = dict() for i in range(1000000): d[i] = i print(f"Elapsed: {(time.time() - start) * 1000:.2f}ms") Output: Elapsed: 64.93ms The same holds for string keys. I searched for workarounds using different hashers for HashMap, but none of them gives the needed 10x speed-up.
Rust by default compiles without optimizations, because optimized builds are harder to debug. That makes it, in debug mode, slower than Python. Enabling optimizations fixes that problem. This behaviour is not specific to Rust; it's the default for most compilers (like gcc/g++ or clang for C/C++, both of which require the -O3 flag for maximum performance). You can enable optimizations in Rust by adding the --release flag to the respective cargo commands, like: cargo run --release Python doesn't distinguish between with or without optimizations, because Python is an interpreted language and doesn't have an (explicit) compilation step. This is the behavior of your code on my machine: Python: ~180ms Rust: ~1750 ms Rust (with --release): ~160ms Python's dict implementation is written in C and heavily optimized, so it should be similar in performance to Rust's HashMap, which is exactly what I see on my machine.
4
6
74,586,892
2022-11-27
https://stackoverflow.com/questions/74586892/no-module-named-keras-saving-hdf5-format
After pip3 installing tensorflow and the transformers library, I'm receiving the titular error when I try loading this from transformers import pipeline classifier = pipeline("text-classification",model='bhadresh-savani/distilbert-base-uncased-emotion') The error traceback looks like: RuntimeError: Failed to import transformers.models.distilbert.modeling_tf_distilbert because of the following error (look up to see its traceback): No module named 'keras.saving.hdf5_format' I have ensured keras got installed with transformers, so I'm not sure why it isn't working
If you are using the latest versions of TensorFlow and Keras with this code, you will get the error shown below: RuntimeError: Failed to import transformers.models.distilbert.modeling_tf_distilbert because of the following error (look up to see its traceback): No module named 'keras.saving.hdf5_format' The original answer walks through screenshots (not included here): expand the error trace, click through the stack frames to the transformers source file that imports keras.saving.hdf5_format, and comment out that import line; after that the error is gone. The underlying problem is that keras.saving.hdf5_format exists only in older versions of Keras, and you are using the latest version. So you can skip all these steps and simply go back to an older Keras version, and it will work.
9
13
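Before editing library code or downgrading, it can help to print the installed versions so you know which package to pin; a small diagnostic sketch:
import keras
import tensorflow as tf
import transformers

# keras.saving.hdf5_format exists only in older Keras releases, so a transformers
# build that still imports it needs an older, matching Keras/TensorFlow.
print("tensorflow  :", tf.__version__)
print("keras       :", keras.__version__)
print("transformers:", transformers.__version__)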
74,578,175
2022-11-25
https://stackoverflow.com/questions/74578175/getting-video-links-from-youtube-channel-in-python-selenium
I am using Selenium in Python to scrape the videos from Youtube channels' websites. Below is a set of code. The line videos = driver.find_elements(By.CLASS_NAME, 'style-scope ytd-grid-video-renderer') repeatedly returns no links to the videos (a.k.a. the print(videos) after it outputs an empty list). How would you modify it to find all the videos on the loaded page? from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys driver = webdriver.Chrome() driver.get('https://www.youtube.com/wendoverproductions/videos') videos = driver.find_elements(By.CLASS_NAME, 'style-scope ytd-grid-video-renderer') print(videos) urls = [] titles = [] dates = [] for video in videos: video_url = video.find_element(by=By.XPATH, value='.//*[@id="video-title"]').get_attribute('href') urls.append(video_url) video_title = video.find_element(by=By.XPATH, value='.//*[@id="video-title"]').text titles.append(video_title) video_date = video.find_element(by=By.XPATH, value='.//*[@id="metadata-line"]/span[2]').text dates.append(video_date)
Implementation using Selenium: First of all, I wanna solve the problem meaning wanted to pull data with the help of YouTube API and I'm about to reach the goal but for some API's restrictions like API KEY's rquests restrict and some other complexities, I couldn't grab complete data that's why I go with super powerful Selenium engine as my last resort and it works like a charm. Full working code as an example: from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By import time import pandas as pd from selenium.webdriver.support.wait import WebDriverWait from selenium.webdriver.support import expected_conditions as EC options = webdriver.ChromeOptions() #All are optional options.add_experimental_option("detach", True) options.add_argument("--disable-extensions") options.add_argument("--disable-notifications") options.add_argument("--disable-Advertisement") options.add_argument("--disable-popup-blocking") options.add_argument("start-maximized") s=Service('./chromedriver') driver= webdriver.Chrome(service=s,options=options) driver.get('https://www.youtube.com/wendoverproductions/videos') time.sleep(3) item = [] SCROLL_PAUSE_TIME = 1 last_height = driver.execute_script("return document.documentElement.scrollHeight") item_count = 180 while item_count > len(item): driver.execute_script("window.scrollTo(0,document.documentElement.scrollHeight);") time.sleep(SCROLL_PAUSE_TIME) new_height = driver.execute_script("return document.documentElement.scrollHeight") if new_height == last_height: break last_height = new_height data = [] try: for e in WebDriverWait(driver, 20).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, 'div#details'))): title = e.find_element(By.CSS_SELECTOR,'a#video-title-link').get_attribute('title') vurl = e.find_element(By.CSS_SELECTOR,'a#video-title-link').get_attribute('href') views= e.find_element(By.XPATH,'.//*[@id="metadata"]//span[@class="inline-metadata-item style-scope ytd-video-meta-block"][1]').text date_time = e.find_element(By.XPATH,'.//*[@id="metadata"]//span[@class="inline-metadata-item style-scope ytd-video-meta-block"][2]').text data.append({ 'video_url':vurl, 'title':title, 'date_time':date_time, 'views':views }) except: pass item = data print(item) print(len(item)) # df = pd.DataFrame(item) # print(df) OUTPUT: [{'video_url': 'https://www.youtube.com/watch?v=oL0umpPPe-8', 'title': 'Samsung’s Dangerous Dominance over South Korea', 'date_time': '10 days ago', 'views': '2.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=GBp_NgrrtPM', 'title': 'China’s Electricity Problem', 'date_time': '3 weeks ago', 'views': '1.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=YBNcYxHJPLE', 'title': 'How the World’s Wealthiest People Travel', 'date_time': '1 month ago', 'views': '2.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=iIpPuJ_r8Xg', 'title': 'The US Military’s Massive Global Transportation System', 'date_time': '1 month ago', 'views': '1.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=MY8AB1wYOtg', 'title': 'The Absurd Logistics of Concert Tours', 'date_time': '2 months ago', 'views': '1.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=8xzINLykprA', 'title': 'Money’s Mostly Digital, So Why Is Moving It So Hard?', 'date_time': '2 months ago', 'views': '1.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=f66GfsKPTUg', 'title': 'How This Central African City Became the World’s Most Expensive', 'date_time': '3 months ago', 'views': 
'3.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=IDLkOWW0_xg', 'title': 'The Simple Genius of NYC’s Water Supply System', 'date_time': '3 months ago', 'views': '1.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=U9jirFqex6g', 'title': 'Europe’s Experiment: Treating Trains Like Planes', 'date_time': '3 months ago', 'views': '2M views'}, {'video_url': 'https://www.youtube.com/watch?v=eoWcQUjNM8o', 'title': 'How the YouTube Creator Economy Works', 'date_time': '4 months ago', 'views': '1M views'}, {'video_url': 'https://www.youtube.com/watch?v=V0Xx0E8cs7U', 'title': 'The Incredible Logistics Behind Weather Forecasting', 'date_time': '4 months ago', 'views': '1.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=v0aGGOK4kAM', 'title': 'Australia Had a Mass-Shooting Problem. Here’s How it Stopped', 'date_time': '5 months ago', 'views': '947K views'}, {'video_url': 'https://www.youtube.com/watch?v=AW3gaelBypY', 'title': 'The Carbon Offset Problem', 'date_time': '5 months ago', 'views': '1.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=xhYl7Jjefo8', 'title': 'Jet Lag: The Game - A New Channel by Wendover Productions', 'date_time': '6 months ago', 'views': '328K views'}, {'video_url': 'https://www.youtube.com/watch?v=oESoI6XxZTg', 'title': 'How to Design a Theme Park (To Take Tons of Your Money)', 'date_time': '6 months ago', 'views': '1.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=AQbmpecxS2w', 'title': 'Why Gas Got So Expensive (It’s Not the War)', 'date_time': '6 months ago', 'views': '3.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=U_7CGl6VWaQ', 'title': 'How Cyberwarfare Actually Works', 'date_time': '7 months ago', 'views': '1.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=R9pxFgJwxFE', 'title': 'The Incredible Logistics Behind Corn Farming', 'date_time': '7 months ago', 'views': '1.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=SrTrpwzVt4g', 'title': 'The Sanction-Fueled Destruction of the Russian Aviation Industry', 'date_time': '8 months ago', 'views': '4.3M 'https://www.youtube.com/watch?v=b1JlYZQG3lI', 'title': "Why There are Now So Many Shortages (It's Not COVID)", 'date_time': '1 year ago', 'views': '7.7M 'https://www.youtube.com/watch?v=N4dOCfWlgBw', 'title': 'The Insane Logistics of Shutting Down the Cruise Industry', 'date_time': '1 year ago', 'views': '2M views'}, {'video_url': 'https://www.youtube.com/watch?v=3CuPqeIJr3U', 'title': "China's Vaccine Diplomacy", 'date_time': '1 year ago', 'views': '916K views'}, {'video_url': 'https://www.youtube.com/watch?v=DlTq8DbRs4k', 'title': "The UK's Failed Experiment in Rail Privatization", 'date_time': '1 year ago', 'views': '1.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=VjiH3mpxyrQ', 'title': 'How to Start an Airline', 'date_time': '1 year ago', 'views': '1.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=pLcqJ2DclEg', 'title': 'The Electric Vehicle Charging Problem', 'date_time': '1 year ago', 'views': '4.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=3gdCH1XUIlE', 'title': "How Air Ambulances (Don't) Work", 'date_time': '1 year ago', 'views': '1M views'}, {'video_url': 'https://www.youtube.com/watch?v=2qanMpnYsjk', 'title': "How Amazon's Super-Complex Shipping System Works", 'date_time': '1 year ago', 'views': '2.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=7R7jNWHp0D0', 'title': 'The News You Missed in 2020, From Every Country in the World (Part 2)', 'date_time': '1 year ago', 'views': '1.3M 
views'}, {'video_url': 'https://www.youtube.com/watch?v=GIFV_Z7Y9_w', 'title': 'The News You Missed in 2020, From Every Country in the World (Part 1)', 'date_time': '1 year ago', 'views': '2.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=KXRtNwUju5g', 'title': "How China Broke the World's Recycling", 'date_time': '1 year ago', 'views': '3.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=fTyUE162lrw', 'title': 'Why Long-Haul Low-Cost Airlines Always Go Bankrupt', 'date_time': '1 year ago', 'views': '1.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=ZAEydOjNWyQ', 'title': 'How Living at the South Pole Works', 'date_time': '2 years ago', 'views': '2.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=_BCY0SPOFpE', 'title': "Egypt's Dam Problem: The Geopolitics of the Nile", 'date_time': '2 years ago', 'views': '1.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=v_rXhuaI0W8', 'title': 'The 8 Flights That Show How COVID-19 Reinvented Aviation', 'date_time': '2 years ago', 'views': '1.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=Ongqf93rAcM', 'title': "How to Beat the Casino, and How They'll Stop You", 'date_time': '2 years ago', 'views': '3.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=byW1GExQB84', 'title': 'Distributing the COVID Vaccine: The Greatest Logistics Challenge Ever', 'date_time': '2 years ago', 'views': '911K views'}, {'video_url': 'https://www.youtube.com/watch?v=DTIDCA7mjZs', 'title': 'How to Illegally Cross the Mexico-US Border', 'date_time': '2 years ago', 'views': '1.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=H_akzwzghWQ', 'title': 'How COVID-19 Broke the Airline Pricing Model', 'date_time': '2 years ago', 'views': '2.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=7C1fPocIFgU', 'title': 'The Broken Economics of Organ Transplants', 'date_time': '2 years ago', 'views': '600K views'}, {'video_url': 'https://www.youtube.com/watch?v=3J06af5xHD0', 'title': 'The Final Years of Majuro [Documentary]', 'date_time': '2 years ago', 'views': '2M views'}, {'video_url': 'https://www.youtube.com/watch?v=YgiMqePRp0Y', 'title': 'The Logistics of Covid-19 Testing', 'date_time': '2 years ago', 'views': '533K views'}, {'video_url': 'https://www.youtube.com/watch?v=Rtmhv5qEBg0', 'title': "Airlines' Protocol for After a Plane Crash", 'date_time': '2 years ago', 'views': '1.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=6GMoUmvw8kU', 'title': 'Why Taiwan and China are Battling over Tiny Island Countries', 'date_time': '2 years ago', 'views': '855K views'}, {'video_url': 'https://www.youtube.com/watch?v=QlPrAKtegFQ', 'title': 'How Long-Haul Trucking Works', 'date_time': '2 years ago', 'views': '2.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=uAG4zCsiA_w', 'title': 'Why Helicopter Airlines Failed', 'date_time': '2 years ago', 'views': '1.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=NtX-Ibi21tU', 'title': 'The Five Rules of Risk', 'date_time': '2 years ago', 'views': '1M views'}, {'video_url': 'https://www.youtube.com/watch?v=r2oPk20OHBE', 'title': "Air Cargo's Coronavirus Problem", 'date_time': '2 years ago', 'views': '1.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=ABIkWS_YavM', 'title': 'How Offshore Oil Rigs Work', 'date_time': '2 years ago', 'views': '2.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=-pNBAxx4IRo', 'title': "How the US' Hospital Ships Work", 'date_time': '2 years ago', 'views': '1.5M views'}, {'video_url': 
'https://www.youtube.com/watch?v=VX2e2iEg_pM', 'title': 'COVID-19: How Aviation is Fighting for Survival', 'date_time': '2 years ago', 'views': '1.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=Ppjv0H-Yt5Q', 'title': 'The Logistics of the US Census', 'date_time': '2 years ago', 'views': '887K views'}, {'video_url': 'https://www.youtube.com/watch?v=QvUpSFGRqEo', 'title': 'How Boeing Will Get the 737 MAX Flying Again', 'date_time': '2 years ago', 'views': '1.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=3Sh7hghljuQ', 'title': 'How China Built a Hospital in 10 Days', 'date_time': '2 years ago', 'views': '2.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=vpcUVOjUrKk', 'title': 'The Business of Ski Resorts', 'date_time': '2 years ago', 'views': '2.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=FCEwPio2bkg', 'title': "American Sports' Battle for China", 'date_time': '2 years ago', 'views': '494K views'}, {'video_url': 'https://www.youtube.com/watch?v=5-QejUTDCWw', 'title': "The World's Most Useful Airport [Documentary]", 'date_time': '2 years ago', 'views': '9M views'}, {'video_url': 'https://www.youtube.com/watch?v=RyG7nzteG64', 'title': 'The Logistics of the US Election', 'date_time': '2 years ago', 'views': '838K views'}, {'video_url': 'https://www.youtube.com/watch?v=dSw7fWCrDk0', 'title': 'Amtrak’s Grand Plan for Profitability', 'date_time': '2 years ago', 'views': '1.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=5SDUm1bx7Zc', 'title': "Australia's China Problem", 'date_time': '3 years ago', 'views': '6.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=U1a73gdNs0M', 'title': 'The US Government Program That Pays For Your Flights', 'date_time': '3 years ago', 'views': '886K views'}, {'video_url': 'https://www.youtube.com/watch?v=tLS9IK693KI', 'title': 'The Logistics of Disaster Response', 'date_time': '3 years ago', 'views': '938K views'}, {'video_url': 'https://www.youtube.com/watch?v=cnfoTAxhpzQ', 'title': 'Why So Many Airlines are Going Bankrupt', 'date_time': '3 years ago', 'views': '2.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=erS2YMYcZO8', 'title': 'The Logistics of Filming Avengers', 'date_time': '3 years ago', 'views': '1.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=XjbYloKJX7c', 'title': "Boeing's China Problem", 'date_time': '3 years ago', 'views': '3.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=A0qt0hdCQtg', 'title': "The US' Overseas Military Base Strategy", 'date_time': '3 years ago', 'views': '3.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=I9ttpHvK6yw', 'title': 'How to Stop an Epidemic', 'date_time': '3 years ago', 'views': '2.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=pJ_LUFBSoqM', 'title': "The NFL's Logistics Problem", 'date_time': '3 years ago', 'views': '2.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=jYPrH4xANpU', 'title': 'The Economics of Private Jets', 'date_time': '3 years ago', 'views': '4.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=KgsxapE27NU', 'title': "The World's Shortcut: How the Panama Canal Works", 'date_time': '3 years ago', 'views': '4.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=C1f2GwWLB3k', 'title': 'How Air Traffic Control Works', 'date_time': '3 years ago', 'views': '4M views'}, {'video_url': 'https://www.youtube.com/watch?v=u2-ehDQM6TM', 'title': 'Extremities: A New Scripted Podcast from Wendover', 'date_time': '3 years ago', 'views': '193K views'}, {'video_url': 
'https://www.youtube.com/watch?v=17oZPYcpPnQ', 'title': "Iceland's Tourism Revolution", 'date_time': '3 years ago', 'views': '1.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=SUsqnD9-42g', 'title': 'Mini Countries Abroad: How Embassies Work', 'date_time': '3 years ago', 'views': '5.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=BfNEOfEGe3I', 'title': 'The Economics That Made Boeing Build the 737 Max', 'date_time': '3 years ago', 'views': '2.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=EkRRo5DN9lI', 'title': 'The Logistics of the International Space Station', 'date_time': '3 years ago', 'views': '3.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=E3jfvncofiA', 'title': 'How Airlines Decide Where to Fly', 'date_time': '3 years ago', 'views': '2.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=xX0ozxrZlEQ', 'title': 'How Rwanda is Becoming the Singapore of Africa', 'date_time': '3 years ago', 'views': '6.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=9poImReDFeY', 'title': 'How Freight Trains Connect the World', 'date_time': '3 years ago', 'views': '2.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=69EVxLLhciQ', 'title': 'How Hong Kong Changed Countries', 'date_time': '3 years ago', 'views': '2.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=gdy0gBVWAzE', 'title': 'Living Underwater: How Submarines Work', 'date_time': '3 years ago', 'views': '10M views'}, {'video_url': 'https://www.youtube.com/watch?v=bnoUBfLxZz0', 'title': 'The Super-Fast Logistics of Delivering Blood By Drone', 'date_time': '3 years ago', 'views': '1M views'}, {'video_url': 'https://www.youtube.com/watch?v=msjuRoZ0Vu8', 'title': 'The New Economy of the Warming Arctic', 'date_time': '3 years ago', 'views': '844K views'}, {'video_url': 'https://www.youtube.com/watch?v=c0pS3Zx7Fc8', 'title': 'Cities at Sea: How Aircraft Carriers Work', 'date_time': '3 years ago', 'views': '13M views'}, {'video_url': 'https://www.youtube.com/watch?v=TNUomfuWuA8', 'title': 'The Rise of 20-Hour Long Flights', 'date_time': '3 years ago', 'views': '3.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=0JDoll8OEFE', 'title': 'Why China Is so Good at Building Railways', 'date_time': '4 years ago', 'views': '8.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=7cjIWMUgPtY', 'title': 'The Magic Economics of Gambling', 'date_time': '4 years ago', 'views': '2.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=cognzTud3Wg', 'title': 'Why the World is Running Out of Pilots', 'date_time': '4 years ago', 'views': '4.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=EodxubsO8EI', 'title': 'How Fighting Wildfires Works', 'date_time': '4 years ago', 'views': '1.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=30XpSozOZII', 'title': 'How to Build a $100 Million Satellite', 'date_time': '4 years ago', 'views': '1.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=zQV_DKQkT8o', 'title': "How Africa is Becoming China's China", 'date_time': '4 years ago', 'views': '8.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=wdU1WTBJMl0', 'title': 'How Airports Make Money', 'date_time': '4 years ago', 'views': '4.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=6OLVFa8YRfM', 'title': 'The Insane Logistics of Formula 1', 'date_time': '4 years ago', 'views': '6.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=jdNDYBt9e_U', 'title': 'The Most Valuable Airspace in the World', 'date_time': '4 years ago', 
'views': '5.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=5r90DYjZ76g', 'title': "Guam: Why America's Most Isolated Territory Exists", 'date_time': '4 years ago', 'views': '6M views'}, {'video_url': 'https://www.youtube.com/watch?v=1Y1kJpHBn50', 'title': 'How to Design Impenetrable Airport Security', 'date_time': '4 years ago', 'views': '4.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=hiRBQxHrxNw', 'title': 'Space: The Next Trillion Dollar Industry', 'date_time': '4 years ago', 'views': '3.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=-s3j-ptJD10', 'title': 'The Logistics of Living in Antarctica', 'date_time': '4 years ago', 'views': '5M views'}, {'video_url': 'https://www.youtube.com/watch?v=y3qfeoqErtY', 'title': 'How Overnight Shipping Works', 'date_time': '4 years ago', 'views': '7.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=IvAvHjYoLUU', 'title': 'Why Cities Exist', 'date_time': '4 years ago', 'views': '3.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=72hlr-E7KA0', 'title': 'How Airlines Price Flights', 'date_time': '4 years ago', 'views': '3.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=voozHXadYYE', 'title': 'The Gene Patent Question', 'date_time': '4 years ago', 'views': '587K views'}, {'video_url': 'https://www.youtube.com/watch?v=uU3kLBo_ruo', 'title': 'The Nuclear Waste Problem', 'date_time': '5 years ago', 'views': '3.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=V1YMPk3XhCc', 'title': 'The Little Plane War', 'date_time': '5 years ago', 'views': '4.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=h97fXhDN5qE', 'title': "Elon Musk's Basic Economics", 'date_time': '5 years ago', 'views': '5.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=GiBF6v5UAAE', 'title': "China's Geography Problem", 'date_time': '5 years ago', 'views': '12M views'}, {'video_url': 'https://www.youtube.com/watch?v=-cjfTG8DbwA', 'title': 'Why Public Transportation Sucks in the US', 'date_time': '5 years ago', 'views': '3.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=ql0Op1VcELw', 'title': "What's Actually the Plane of the Future", 'date_time': '5 years ago', 'views': '5M views'}, {'video_url': 'https://www.youtube.com/watch?v=RaGG50laHgI', 'title': 'TWL is back! 
(But not here...)', 'date_time': '5 years ago', 'views': '303K views'}, {'video_url': 'https://www.youtube.com/watch?v=yT9bit2-1pg', 'title': 'How to Stop a Riot', 'date_time': '5 years ago', 'views': '5.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=e-WO-c9xHms', 'title': 'How Geography Gave the US Power', 'date_time': '5 years ago', 'views': '3.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=E7Jfrzkmzyc', 'title': 'Why Chinese Manufacturing Wins', 'date_time': '5 years ago', 'views': '6.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=dGXahSnA_oA', 'title': 'How Airlines Schedule Flights', 'date_time': '5 years ago', 'views': '4M views'}, {'video_url': 'https://www.youtube.com/watch?v=N4PW66_g6XA', 'title': 'How to Fix Traffic Forever', 'date_time': '5 years ago', 'views': '3.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=j48Z3W35FI0', 'title': 'How the US Government Will Survive Doomsday', 'date_time': '5 years ago', 'views': '4.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=MP1OAm7Pzps', 'title': 'Why the Northernmost Town in America Exists', 'date_time': '5 years ago', 'views': '8.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=lkCeKc1GTMs', 'title': 'Which Country Are International Airports In?', 'date_time': '5 years ago', 'views': '3.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=Nu2WOxXxsHw', 'title': 'How the Post Office Made America', 'date_time': '5 years ago', 'views': '1.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=HSxSgbNQi-g', 'title': 'Small Planes Over Big Oceans (ETOPS Explained)', 'date_time': '5 years ago', 'views': '3.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=ZcDwtO4RWmo', 'title': "Canada's New Shipping Shortcut", 'date_time': '5 years ago', 'views': '3.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=EqWksuyry5w', 'title': 'Why Airlines Sell More Seats Than They Have', 'date_time': '5 years ago', 'views': '2.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=fwjwePe-HmA', 'title': 'Why Trains are so Expensive', 'date_time': '5 years ago', 'views': '3.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=v3C_5bsdQWg', 'title': "Russia's Geography Problem", 'date_time': '5 years ago', 'views': '8.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=BzB5xtGGsTc', 'title': 'The Economics of Airline Class', 'date_time': '5 years ago', 'views': '10M views'}, {'video_url': 'https://www.youtube.com/watch?v=n1QEj09Pe6k', 'title': "Why Planes Don't Fly Faster", 'date_time': '5 years ago', 'views': '7.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=YJRqB1xtIxg', 'title': "The US President's $2,614 Per Minute Transport System", 'date_time': '5 years ago', 'views': '11M views'}, {'video_url': 'https://www.youtube.com/watch?v=ancuYECRGN8', 'title': 'Every State in the US', 'date_time': '5 years ago', 'views': '6.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=bL2WPDtLYNU', 'title': 'How to Make First Contact', 'date_time': '5 years ago', 'views': '3.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=3PWWtqfwacQ', 'title': 'Why Cities Are Where They Are', 'date_time': '5 years ago', 'views': '5.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=_lj127TKu4Q', 'title': 'Is the European Union a Country?', 'date_time': '5 years ago', 'views': '2M views'}, {'video_url': 'https://www.youtube.com/watch?v=d1CVVoAihBc', 'title': 'Every Country in the World (Part 2)', 'date_time': '5 years ago', 'views': '3.2M 
views'}, {'video_url': 'https://www.youtube.com/watch?v=P-b4SUOfn_4', 'title': 'Every Country in the World (Part 1)', 'date_time': '5 years ago', 'views': '8.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=thqbjA2DC-E', 'title': 'The Five Freedoms of Aviation', 'date_time': '6 years ago', 'views': '3.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=_rk2hPrEnk8', 'title': "Why Chicken Sandwiches Don't Cost $1500", 'date_time': '6 years ago', 'views': '3M views'}, {'video_url': 'https://www.youtube.com/watch?v=aQSxPzafO_k', 'title': 'Urban Geography: Why We Live Where We Do', 'date_time': '6 years ago', 'views': '3.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=NlIdzF1_b5M', 'title': 'Big Plane vs Little Plane (The Economics of Long-Haul Flights)', 'date_time': '6 years ago', 'views': '4.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=HpfvOc8HJdg', 'title': 'TWL #10.5: Argleton (+ Patreon)', 'date_time': '6 years ago', 'views': '202K views'}, {'video_url': 'https://www.youtube.com/watch?v=7ouiTMXuDAQ', 'title': 'How Much Would it Cost to Live on the Moon?', 'date_time': '6 years ago', 'views': '1M views'}, {'video_url': 'https://www.youtube.com/watch?v=-aQ2E0mlRQI', 'title': 'The Plane Highway in the Sky', 'date_time': '6 years ago', 'views': '4.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=mbEfzuCLoAQ', 'title': 'Why Trains Suck in America', 'date_time': '6 years ago', 'views': '5.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=N7CvEt51fz4', 'title': 'How Maritime Law Works', 'date_time': '6 years ago', 'views': '3.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=7PsmkAxVHdM', 'title': 'Why College is so Expensive', 'date_time': '6 years ago', 'views': '1.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=JoYNhX15w4k', 'title': 'TWL #10: The Day Sweden Switched Driving Directions', 'date_time': '6 years ago', 'views': '1.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=1c-jBfZPVv4', 'title': 'TWL #9: The Secret Anti-Counterfeit Symbol', 'date_time': '6 years ago', 'views': '1.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=069y1MpOkQY', 'title': 'How Budget Airlines Work', 'date_time': '6 years ago', 'views': '9.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=n7RHv_MIIT0', 'title': 'TWL #8: Immortality Through Quantum Suicide', 'date_time': '6 years ago', 'views': '1.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=6Oe8T3AvydU', 'title': 'Why Flying is So Expensive', 'date_time': '6 years ago', 'views': '4.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=LnEyjwdoj7g', 'title': 'TWL #7: This Number is Illegal', 'date_time': '6 years ago', 'views': '3.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=5XdYbmova_s', 'title': 'TWL #6: Big Mac Economics', 'date_time': '6 years ago', 'views': '1.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=F53TA37Mqck', 'title': 'TWL #5: Timekeeping on Mars', 'date_time': '6 years ago', 'views': '810K views'}, {'video_url': 'https://www.youtube.com/watch?v=O6o5C-i02c8', 'title': 'TWL #4: Which Way Should Toilet Paper Face?', 'date_time': '6 years ago', 'views': '739K views'}, {'video_url': 'https://www.youtube.com/watch?v=JllpzZgAAl8', 'title': 'TWL #3: Paper Towns- Fake Places Made to Catch Copyright Thieves', 'date_time': '6 years ago', 'views': '1.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=v_iaurPRhCs', 'title': 'TWL #2: Bir Tawil- The Land Without a Country', 'date_time': '6 years ago', 
'views': '852K views'}, {'video_url': 'https://www.youtube.com/watch?v=_ugvJi2pIck', 'title': 'TWL #1: Presidents Are 4x More Likely to be Lefties', 'date_time': '6 years ago', 'views': '546K views'}, {'video_url': 'https://www.youtube.com/watch?v=RRWggYusSds', 'title': "Why Time is One of Humanity's Greatest Inventions", 'date_time': '6 years ago', 'views': '1.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=h6AWAoc_Lr0', 'title': 'Space Law-What Laws are There in Space?', 'date_time': '6 years ago', 'views': '1.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=2H9KEcb74aA', 'title': "A Map Geek's Tour of the World #2", 'date_time': '6 years ago', 'views': '248K views'}, {'video_url': 'https://www.youtube.com/watch?v=ThNeIT7aceI', 'title': 'How the Layouts of Grocery Stores are Secretly Designed to Make You Spend More Money', 'date_time': '6 years ago', 'views': '953K views'}, {'video_url': 'https://www.youtube.com/watch?v=bg2VZIPfX0U', 'title': 'How to Create a Country', 'date_time': '6 years ago', 'views': '3.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=pVB4TEeMcgA', 'title': "A Map Geek's Tour of the World", 'date_time': '6 years ago', 'views': '405K views'}, {'video_url': 'https://www.youtube.com/watch?v=avh7ez858xM', 'title': 'The Messy Ethics of Self Driving Cars', 'date_time': '6 years ago', 'views': '461K views'}, {'video_url': 'https://www.youtube.com/watch?v=Y3jlFtBg0Y8', 'title': 'The Surprising Easternmost Point in the US', 'date_time': '6 years ago', 'views': '213K views'}, {'video_url': 'https://www.youtube.com/watch?v=F-ZskaqBshs', 'title': "Containerization: The Most Influential Invention That You've Never Heard Of", 'date_time': '6 years ago', 'views': '684K views'}, {'video_url': 'https://www.youtube.com/watch?v=Nn-ym8y1_kw', 'title': 'The World is Shrinking', 'date_time': '7 years ago', 'views': '183K views'}, {'video_url': 'https://www.youtube.com/watch?v=8LqqVfPduTs', 'title': 'How Marketers Manipulate Us: Psychological Manipulation in Advertising', 'date_time': '7 years ago', 'views': '458K views'}] Total: 189
4
5
74,585,622
2022-11-26
https://stackoverflow.com/questions/74585622/pyfirmata-gives-error-module-inspect-has-no-attribute-getargspec
I'm trying to use pyFirmata, but I can't get it to work. Even the most basic of the library does not work. I guess there is something wrong with the library code. from pyfirmata import Arduino,util import time port = 'COM5' board = Arduino(port) I get this error: Traceback (most recent call last): File "c:\Users\Public\pythonpublic\arduino.py", line 5, in <module> board = Arduino(port) ^^^^^^^^^^^^^ File "C:\Users\marce\AppData\Roaming\Python\Python311\site-packages\pyfirmata\__init__.py", line 19, in __init__ super(Arduino, self).__init__(*args, **kwargs) File "C:\Users\marce\AppData\Roaming\Python\Python311\site-packages\pyfirmata\pyfirmata.py", line 101, in __init__ self.setup_layout(layout) File "C:\Users\marce\AppData\Roaming\Python\Python311\site-packages\pyfirmata\pyfirmata.py", line 157, in setup_layout self._set_default_handlers() File "C:\Users\marce\AppData\Roaming\Python\Python311\site-packages\pyfirmata\pyfirmata.py", line 161, in _set_default_handlers self.add_cmd_handler(ANALOG_MESSAGE, self._handle_analog_message) File "C:\Users\marce\AppData\Roaming\Python\Python311\site-packages\pyfirmata\pyfirmata.py", line 185, in add_cmd_handler len_args = len(inspect.getargspec(func)[0]) ^^^^^^^^^^^^^^^^^^ AttributeError: module 'inspect' has no attribute 'getargspec'. Did you mean: 'getargs'?
According to the first line of the pyFirmata docs: It runs on Python 2.7, 3.6 and 3.7. You are using Python 3.11. The inspect module (part of the core library) has changed since Python 3.7: inspect.getargspec() was removed in Python 3.11, which is why pyFirmata's import fails with this AttributeError.
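A workaround that is often suggested (my addition, not an official pyFirmata fix, so treat it as a hack) is to alias the removed function to its replacement before importing the library:
import inspect
if not hasattr(inspect, 'getargspec'):
    inspect.getargspec = inspect.getfullargspec  # shim for Python 3.11+, where getargspec was removed
from pyfirmata import Arduino, util
board = Arduino('COM5')
The cleaner option is simply to run the project on one of the Python versions the docs list.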
9
4
74,583,630
2022-11-26
https://stackoverflow.com/questions/74583630/why-is-python-saying-modules-are-imported-when-they-are-not
Python 3.6.5 Using this answer as a guide, I attempted to see whether some modules, such as math were imported. But Python tells me they are all imported when they are not. >>> import sys >>> 'math' in sys.modules True >>> 'math' not in sys.modules False >>> math.pi Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'math' is not defined >>> import math >>> 'math' in sys.modules True >>> math.pi 3.141592653589793
To explain this, let's define this function: def import_math(): import math import_math() The above function will import the module math, but only into its local scope; anyone who tries to reference math outside of it will get a NameError, because math is not defined in the global scope. Any module that is imported is saved into sys.modules, so the check import_math() print("math" in sys.modules) will print True, because sys.modules caches any module that is loaded anywhere, whether or not it is available in the global scope. A very simple way to define math in the global scope would then be import_math() math = sys.modules["math"] which promotes it from being only in sys.modules to being in the global scope. This is equivalent to import math which defines a variable math in the global scope that points to the module math. Now, if you want to see whether "math" exists in the global scope, check the global or local scope directly: print("math" in globals()) print("math" in locals()) which will print False if "math" wasn't imported into the global or local scope and is therefore inaccessible.
8
9
74,582,213
2022-11-26
https://stackoverflow.com/questions/74582213/access-token-scope-insufficient-error-with-updating-sheet-using-google-sheets-ap
I've been making a program in python that is intended to have each user be able to use it to access a single google sheet and read and update data on it only in the allowed ways, so ive used google api on the google developer console and managed to get test accounts reading the information that ive manually put in, however the update function returns <HttpError 403 when requesting https://sheets.googleapis.com/v4/spreadsheets/ google sheets id here /values/write%21D1?valueInputOption=RAW&alt=json returned "Request had insufficient authentication scopes.". Details: "[{'@type': 'type.googleapis.com/google.rpc.ErrorInfo', 'reason': 'ACCESS_TOKEN_SCOPE_INSUFFICIENT', 'domain': 'googleapis.com', 'metadata': {'method': 'google.apps.sheets.v4.SpreadsheetsService.UpdateValues', 'service': 'sheets.googleapis.com'}}]"> but the scopes added and used are the ones that relate to what the program does, it has no issue with reading but refuses to update for some reason. I've deleted the token.json file and tried updating scopes but it doesn't seem to want to work. scopes it currently has is: non sensitive scopes: Google Sheets API .../auth/drive.file See, edit, create and delete only the specific Google Drive files that you use with this app sensitive scopes: Google Sheets API .../auth/spreadsheets See, edit, create and delete all your Google Sheets spreadsheets Google Sheets API .../auth/spreadsheets.readonly See all your Google Sheets spreadsheets there are no restricted scopes. code here: from __future__ import print_function import os.path from google.auth.transport.requests import Request from google.oauth2.credentials import Credentials from google_auth_oauthlib.flow import InstalledAppFlow from googleapiclient.discovery import build from googleapiclient.errors import HttpError # If modifying these scopes, delete the file token.json. SCOPES = ['https://www.googleapis.com/auth/spreadsheets.readonly'] # The ID and range of a sample spreadsheet. spreadID = *redacted for security but this is the correct ID in original code* def main(): """Shows basic usage of the Sheets API. Prints values from a sample spreadsheet. """ creds = None # The file token.json stores the user's access and refresh tokens, and is # created automatically when the authorization flow completes for the first # time. if os.path.exists('token.json'): creds = Credentials.from_authorized_user_file('token.json', SCOPES) # If there are no (valid) credentials available, let the user log in. if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( 'credentials.json', SCOPES) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open('token.json', 'w') as token: token.write(creds.to_json()) try: service = build('sheets', 'v4', credentials=creds) # Call the Sheets API sheet = service.spreadsheets() #range used for data reading sheetdata_range = 'read!A1:E5' #"pagename!range" result = sheet.values().get(spreadsheetId=spreadID, range=sheetdata_range).execute() #this is the command that gets the values from the range values = result.get('values', []) #print(f"""values variable is: #{values}""") if not values:#if nothing found it returns this. 
print('No data found.') return for item in values[0]:#for item in row 1 print(f"values[0] test {item}") for row in values:#prints items from col 1 print(row[0]) rangeupdate = [["1","2","3"],["a","b","c"]] request = sheet.values().update(spreadsheetId=spreadID, range="write!D1",#this is the start point, lists and 2d lists work from this point valueInputOption="RAW",#how the data is added, this prevents the data being interpreted body={"values":rangeupdate}).execute() print(request) except HttpError as err: print(err) if __name__ == '__main__': main()
You are using the sheets.values.update method. If you check the documentation, you will see that this method requires one of the authorization scopes that allow writing, such as https://www.googleapis.com/auth/spreadsheets. Your code appears to only request 'https://www.googleapis.com/auth/spreadsheets.readonly'. Read-only access is not going to let you write to the file. You need to request the proper scope.
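Concretely, in the code from the question the fix is a one-line change (the scope string below is the standard read/write Sheets scope):
# If modifying these scopes, delete the file token.json.
SCOPES = ['https://www.googleapis.com/auth/spreadsheets']
After changing SCOPES, delete the existing token.json so the OAuth consent flow runs again and issues a token that carries the new scope.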
5
6
74,581,912
2022-11-26
https://stackoverflow.com/questions/74581912/how-to-extract-multiple-strings-from-list-spaced-apart
I have the following list: lst = ['L38A', '38', 'L', 'A', '-6.7742', '-3.5671', '0.00226028', '0.4888', 'L38C', '38', 'L', 'C', '-7.7904', '-6.6306', '0.0', '0.4888', 'L38D', '38', 'L', 'D', '-6.3475', '-3.0068', '0.00398551', '0.4888', 'L38E', '38', 'L', 'E', '-6.4752', '-3.4645', '0.00250913', '0.4888'] I'm looking to extract the first element (posiiton 0) in the list ('L38A') and the 5th element (position 4) (-6.7742) multiple times: Desired output [('L38A','-6.7742'), ('L38C','-7.7904'), ('L38D','-6.3475')...('L38E','-6.4752')] I have tried: lst[::5]
We can handle this via a zip operation and list comprehension: lst = ['L38A', '38', 'L', 'A', '-6.7742', '-3.5671', '0.00226028', '0.4888', 'L38C', '38', 'L', 'C', '-7.7904', '-6.6306', '0.0', '0.4888', 'L38D', '38', 'L', 'D', '-6.3475', '-3.0068', '0.00398551', '0.4888', 'L38E', '38', 'L', 'E', '-6.4752', '-3.4645', '0.00250913', '0.4888'] it = [iter(lst)] * 8 output = [(x[0], x[4]) for x in zip(*it)] print(output) This prints: [('L38A', '-6.7742'), ('L38C', '-7.7904'), ('L38D', '-6.3475'), ('L38E', '-6.4752')] The zip call groups the flat list into tuples of 8 values each, 8 being the number of values in each record. The comprehension then generates a list of 2-tuples containing the first and fifth elements of each group.
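Since your own attempt already used slicing, the same result can also be obtained with two strided slices (a sketch relying on the fixed record length of 8 in your data):
output = list(zip(lst[::8], lst[4::8]))
print(output)  # [('L38A', '-6.7742'), ('L38C', '-7.7904'), ('L38D', '-6.3475'), ('L38E', '-6.4752')]
This pairs every 8th element starting at index 0 with every 8th element starting at index 4.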
3
2
74,575,385
2022-11-25
https://stackoverflow.com/questions/74575385/adding-different-colors-for-markers-in-plotly
I have a graph that looks like this: I want to sort the color combinations for the dots on this, to achieve something like one color for all the versions that start with 17, different one for 18 and lastly the 20. I don't know if I can do this in plotly since it is very specific and found no information on this. Is it possible to also add different colors for the sub versions, like for 17 we have different categories like 17.2.3, 17.2.2 and so on. here is my data: Days Difference commitDate Year-Month 18538 1291 2021-01-25 11:15:48 2020-01 18539 1135 2020-11-30 05:11:41 2020-11 18540 1100 2020-08-17 07:22:54 2020-08 18541 900 2020-08-17 07:12:05 2020-01 18542 340 2020-01-09 06:21:03 2020-01 18543 203 2019-11-20 06:03:28 2019-11 18544 120 2019-11-15 02:50:28 2019-11 This is the code I have written so far: data1= final_api.query("info_title=='Avi CertificateManagementProfile Object API'") data1['commitDate'] = pd.to_datetime(final_api['commitDate']) import plotly.graph_objects as go fig = go.Figure() fig.add_trace(go.Scatter(mode='lines', x=data1["commitDate"], y=data1["Days_difference"], line_color='black', line_width=1, line_shape='vh', showlegend=False ) ) fig.add_trace(go.Scatter(mode='markers', x=data1["commitDate"], y=data1["Days_difference"], marker=dict(color=data1['day'], colorscale='plasma', size=10), showlegend=False ) ) for _,row in data1.iterrows(): fig.add_annotation( go.layout.Annotation( x=row["commitDate"], y=row["Days_difference"], text=row['info_version'], showarrow=False, align='center', yanchor='bottom', yshift=10, textangle=-90) ) fig.update_layout(template='plotly_white',title_text=' Version Change in Avi CertificateManagementProfile Object API over its Age',title_x=0.5, xaxis_title='Year-Month', yaxis_title='Age of the API (in days)', xaxis_tickformat = '%d %B (%a)<br>%Y', height=700, width=1300) fig.update_xaxes(showline=True, linewidth=1, linecolor='black', mirror=True) fig.update_yaxes(showline=True, linewidth=1, linecolor='black', mirror=True) fig.show() Any help or guidance will be appreciated.
Currently you are assigning marker color based on the 'day' column in your argument marker=dict(color=data1['day'], colorscale='plasma', size=10), but it sounds like you want to assign the color based on the major version. You can extract the major version from the info_version column, and store it in a new column called "major_version": data1['major_version'] = data1['info_version'].str.split('.').str.get(0) Then I would suggest using px.scatter which makes it easier to pass a categorical column as a color (see the documentation here). Then you would restructure your code to plot the markers first, and then the lines connecting the markers: import plotly.express as px fig = px.scatter(data1, x='commitDate', y='Days_difference', color='major_version') fig.add_trace(go.Scatter(mode='lines', x=data1["commitDate"], y=data1["Days_difference"], line_color='black', line_width=1, line_shape='vh', showlegend=False ) ) fig.update_layout(showlegend=False) And keep the rest of your code the same.
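To illustrate what that extraction produces, here is a small sketch using version strings like the ones you mention (17.2.3 and so on):
import pandas as pd
s = pd.Series(['17.2.3', '17.2.2', '18.1.0', '20.0.1'])
print(s.str.split('.').str.get(0))  # 17, 17, 18, 20
All 17.x.y sub-versions therefore share one color, all 18.x.y another, and so on; if you later want a color per sub-version instead, you could pass the full info_version column as the color argument.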
4
3
74,575,744
2022-11-25
https://stackoverflow.com/questions/74575744/github-action-to-execute-a-python-script-that-create-a-file-then-commit-and-pus
My repo contains a main.py that generates a html map and save results in a csv. I want the action to: execute the python script (-> this seems to be ok) that the file generated would then be in the repo, hence having the file generated to be added, commited and pushed to the main branch to be available in the page associated with the repo. name: refresh map on: schedule: - cron: "30 11 * * *" #runs at 11:30 UTC everyday jobs: getdataandrefreshmap: runs-on: ubuntu-latest steps: - name: checkout repo content uses: actions/checkout@v3 # checkout the repository content to github runner. - name: setup python uses: actions/setup-python@v4 with: python-version: 3.8 #install the python needed - name: Install dependencies run: | if [ -f requirements.txt ]; then pip install -r requirements.txt; fi - name: execute py script uses: actions/checkout@v3 run: | python main.py git config user.name github-actions git config user.email [email protected] git add . git commit -m "crongenerated" git push The github-action does not pass when I include the 2nd uses: actions/checkout@v3 and the git commands. Thanks in advance for your help
If you want to run a script, then you don't need an additional checkout step for that. There is a difference between steps that use workflows and those that execute shell scripts directly. You can read more about it here. In your configuration file, you kind of mix the two in the last step. You don't need an additional checkout step because the repo from the first step is still checked out. So you can just use the following workflow: name: refresh map on: schedule: - cron: "30 11 * * *" #runs at 11:30 UTC everyday jobs: getdataandrefreshmap: runs-on: ubuntu-latest steps: - name: checkout repo content uses: actions/checkout@v3 # checkout the repository content to github runner. - name: setup python uses: actions/setup-python@v4 with: python-version: 3.8 #install the python needed - name: Install dependencies run: | if [ -f requirements.txt ]; then pip install -r requirements.txt; fi - name: execute py script run: | python main.py git config user.name github-actions git config user.email [email protected] git add . git commit -m "crongenerated" git push I tested it with a dummy repo and everything worked.
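One caveat worth adding (my assumption, not something shown in the question): if the repository's default GITHUB_TOKEN permissions are read-only, the git push step will be rejected, and you need to grant write access in the workflow, for example:
permissions:
  contents: write
placed at the workflow or job level of the YAML above.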
5
7
74,573,624
2022-11-25
https://stackoverflow.com/questions/74573624/why-does-the-dtype-of-a-numpy-array-automatically-change-to-object-if-you-mult
Given an arbitrary numpy array (its size and shape don't seem to play a role) import numpy as np a = np.array([1.]) print(a.dtype) # float64 it changes its dtype if you multiply it with a number equal or larger than 10**20 print((a*10**19).dtype) # float64 print((a*10**20).dtype) # object a *= 10**20 # Throws TypeError: ufunc 'multiply' output (typecode 'O') # could not be coerced to provided output parameter (typecode 'd') # according to the casting rule ''same_kind'' a *= 10.**20 # Throws numpy.core._exceptions._UFuncOutputCastingError: # Cannot cast ufunc 'multiply' output from dtype('float64') to # dtype('int32') with casting rule 'same_kind' However, this doesn't happen if you multiply element-wise a[0] *= 10**20 print(a, a.dtype) # [1.e+20] float64 or specifically convert the number to a float (or int) a *= float(10**20) print(a, a.dtype) # [1.e+20] float64 Just for the record, if you do the multiplication outside of numpy, there are no issues b = 1. print(type(b), type(10**20), type(10.**20)) # float int float b *= 10**20 print(type(b)) # float
I suspect it is related to the size a "native" integer can take on the system. print(sys.maxsize, sys.getsizeof(sys.maxsize)) => 9223372036854775807 36 print(10**19, sys.getsizeof(10**19)) => 10000000000000000000 36 And this is where, on my system, the conversion to object starts when I do for i in range(1, 24): print(f'type of a*10**{i}:', (a * 10**i).dtype) I expect it is linked to the implementation of Python's integer type: PEP 0237: Essentially, long renamed to int. That is, there is only one built-in integral type, named int; but it behaves mostly like the old long type. See https://docs.python.org/3.1/whatsnew/3.0.html#integers To notice this, one could use numpy.multiply with a forced output type. This will throw an error instead of silently converting (similar to your *= example).
3
4
74,556,349
2022-11-24
https://stackoverflow.com/questions/74556349/no-module-named-huggingface-hub-snapshot-download
When I try to run the quick start notebook of this repo, I get the error ModuleNotFoundError: No module named 'huggingface_hub.snapshot_download'. How can I fix it? I already installed huggingface_hub using pip. I get the error after compiling the following cell: !CUDA_VISIBLE_DEVICES=0 python -u ../scripts/main.py --summarizer gpt3_summarizer --controller longformer_classifier longformer_classifier --loader alignment coherence --controller-load-dir emnlp22_re3_data/ckpt/relevance_reranker emnlp22_re3_data/ckpt/coherence_reranker --controller-model-string allenai/longformer-base-4096 allenai/longformer-base-4096 --save-outline-file output/outline0.pkl --save-complete-file output/complete_story0.pkl --log-file output/story0.log Here's the entire output: Traceback (most recent call last): File "../scripts/main.py", line 20, in <module> from story_generation.edit_module.entity import * File "/home/jovyan/emnlp22-re3-story-generation/story_generation/edit_module/entity.py", line 20, in <module> from story_generation.common.util import * File "/home/jovyan/emnlp22-re3-story-generation/story_generation/common/util.py", line 13, in <module> from sentence_transformers import SentenceTransformer File "/opt/conda/lib/python3.8/site-packages/sentence_transformers/__init__.py", line 3, in <module> from .datasets import SentencesDataset, ParallelSentencesDataset File "/opt/conda/lib/python3.8/site-packages/sentence_transformers/datasets/__init__.py", line 3, in <module> from .ParallelSentencesDataset import ParallelSentencesDataset File "/opt/conda/lib/python3.8/site-packages/sentence_transformers/datasets/ParallelSentencesDataset.py", line 4, in <module> from .. import SentenceTransformer File "/opt/conda/lib/python3.8/site-packages/sentence_transformers/SentenceTransformer.py", line 25, in <module> from .evaluation import SentenceEvaluator File "/opt/conda/lib/python3.8/site-packages/sentence_transformers/evaluation/__init__.py", line 5, in <module> from .InformationRetrievalEvaluator import InformationRetrievalEvaluator File "/opt/conda/lib/python3.8/site-packages/sentence_transformers/evaluation/InformationRetrievalEvaluator.py", line 6, in <module> from ..util import cos_sim, dot_score File "/opt/conda/lib/python3.8/site-packages/sentence_transformers/util.py", line 407, in <module> from huggingface_hub.snapshot_download import REPO_ID_SEPARATOR ModuleNotFoundError: No module named 'huggingface_hub.snapshot_download'
Updating to the latest version of sentence-transformers fixes it (no need to install huggingface-hub explicitly): pip install -U sentence-transformers I've proposed a pull request for this in the original repo.
17
22
74,572,953
2022-11-25
https://stackoverflow.com/questions/74572953/pydantic-attributeerror-object-has-no-attribute-fields-set
from pydantic import BaseModel class A(BaseModel): date = '' class B(A): person: float def __init__(self): self.person = 0 B() tried to initiate class B but raised error AttributeError: 'B' object has no attribute '__fields_set__', why is it?
It's because you override __init__ and do not call super there, so Pydantic cannot do its magic with setting the proper fields. With pydantic it's rare that you need to implement your own __init__; most cases can be solved a different way: from pydantic import BaseModel class A(BaseModel): date = "" class B(A): person: float = 0 B()
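If you really do need a custom __init__, a minimal sketch (my addition, kept in the same pydantic v1 style as the question) is to forward everything to the parent constructor so validation and __fields_set__ still happen:
class B(A):
    person: float
    def __init__(self, **data):
        data.setdefault('person', 0)  # provide the default before validation
        super().__init__(**data)      # lets pydantic set __fields_set__
B()  # person == 0.0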
8
7
74,568,115
2022-11-25
https://stackoverflow.com/questions/74568115/is-lightgbm-available-for-mac-m1
My goal is to learn a notebook. It has recall 97% while I am struggling with F1 Score 'Attrited Customer' 77.9%. The problem is the notebook uses LightGBM. I am unable to install LightGBM. What I've tried: pip install lightgbm -> it throws error python setup.py egg_info did not run successfully. Then, I did pip install wheel -> now it throws error python setup.py bdist_wheel did not run successfully. Then, I did pip install Cmake, pip install --upgrade pip setuptools, brew install libomp -> the error persisted. The full error × python setup.py bdist_wheel did not run successfully. │ exit code: 1 ╰─> [80 lines of output] INFO:root:running bdist_wheel /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. warnings.warn( INFO:root:running build INFO:root:running build_py INFO:root:creating build INFO:root:creating build/lib INFO:root:creating build/lib/lightgbm
As of this writing, no official release of lightgbm (the Python package for LightGBM) supports the M1 Macs (which use ARM chips). osx-arm64 builds of lightgbm are supported by the lightgbm conda-forge feedstock, so you can install lightgbm on an M1 Mac using conda. conda install \ --yes \ -c conda-forge \ 'lightgbm>=3.3.3' Progress towards officially supporting M1 Mac builds of LightGBM can be tracked in microsoft/LightGBM#5269 and microsoft/LightGBM#5328.
5
8
74,567,273
2022-11-25
https://stackoverflow.com/questions/74567273/for-a-new-python-project-using-the-latest-version-of-python-should-i-declare-my
According to pep-0585 for the latest versions of Python, it appears we can use List and list interchangeably for type declarations. So which should I use? Assume: no requirement for backward compatibility using the latest version of python from typing import List def hello_world_1(animals: list[str]) -> list[str]: return animals def hello_world_2(animals: List[str]) -> List[str]: return animals How can I set up a python linter to enforce consistency and only allow either upper or lowercase List for example?
According to the current Python docs, typing.List and similar are deprecated. The docs further state The deprecated types will be removed from the typing module in the first Python version released 5 years after the release of Python 3.9.0. See details in PEP 585—Type Hinting Generics In Standard Collections. Concerning enforcing consistency, it says It is expected that type checkers will flag the deprecated types when the checked program targets Python 3.9 or newer. However, I use Pylint personally (not a type checker, admittedly) and I don't believe I've noticed it flagging the deprecated types yet.
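If you want a tool that actively enforces the built-in generics (this suggestion is mine, not part of the quoted docs), pyupgrade can rewrite the deprecated typing aliases automatically once you tell it the minimum supported Python version:
pip install pyupgrade
pyupgrade --py39-plus your_module.py  # rewrites List[str] -> list[str]
Running it via pre-commit or CI keeps the codebase consistently on the lowercase forms.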
3
3
74,566,704
2022-11-24
https://stackoverflow.com/questions/74566704/cannot-install-lightgbm-3-3-3-on-apple-silicon
Here the full log of pip3 install lightgbm==3.3.3. me % pip3 install lightgbm==3.3.3 Collecting lightgbm==3.3.3 Using cached lightgbm-3.3.3.tar.gz (1.5 MB) Preparing metadata (setup.py) ... done Requirement already satisfied: wheel in /opt/homebrew/lib/python3.10/site-packages (from lightgbm==3.3.3) (0.37.1) Collecting numpy Using cached numpy-1.23.5-cp310-cp310-macosx_11_0_arm64.whl (13.4 MB) Collecting scipy Using cached scipy-1.9.3-cp310-cp310-macosx_12_0_arm64.whl (28.5 MB) Collecting scikit-learn!=0.22.0 Downloading scikit_learn-1.1.3-cp310-cp310-macosx_12_0_arm64.whl (7.7 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.7/7.7 MB 4.7 MB/s eta 0:00:00 Collecting threadpoolctl>=2.0.0 Downloading threadpoolctl-3.1.0-py3-none-any.whl (14 kB) Collecting joblib>=1.0.0 Using cached joblib-1.2.0-py3-none-any.whl (297 kB) Building wheels for collected packages: lightgbm Building wheel for lightgbm (setup.py) ... error error: subprocess-exited-with-error × python setup.py bdist_wheel did not run successfully. │ exit code: 1 ╰─> [86 lines of output] running bdist_wheel /opt/homebrew/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. warnings.warn( running build running build_py creating build creating build/lib creating build/lib/lightgbm copying lightgbm/callback.py -> build/lib/lightgbm copying lightgbm/compat.py -> build/lib/lightgbm copying lightgbm/plotting.py -> build/lib/lightgbm copying lightgbm/__init__.py -> build/lib/lightgbm copying lightgbm/engine.py -> build/lib/lightgbm copying lightgbm/dask.py -> build/lib/lightgbm copying lightgbm/basic.py -> build/lib/lightgbm copying lightgbm/libpath.py -> build/lib/lightgbm copying lightgbm/sklearn.py -> build/lib/lightgbm running egg_info writing lightgbm.egg-info/PKG-INFO writing dependency_links to lightgbm.egg-info/dependency_links.txt writing requirements to lightgbm.egg-info/requires.txt writing top-level names to lightgbm.egg-info/top_level.txt reading manifest file 'lightgbm.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' no previously-included directories found matching 'build' warning: no files found matching '*.so' under directory 'lightgbm' warning: no files found matching '*.so' under directory 'compile' warning: no files found matching '*.dll' under directory 'compile/Release' warning: no files found matching '*.dll' under directory 'compile/windows/x64/DLL' warning: no previously-included files matching '*.py[co]' found anywhere in distribution warning: no previously-included files found matching 'compile/external_libs/compute/.git' adding license file 'LICENSE' writing manifest file 'lightgbm.egg-info/SOURCES.txt' copying lightgbm/VERSION.txt -> build/lib/lightgbm installing to build/bdist.macosx-13-arm64/wheel running install INFO:LightGBM:Starting to compile the library. INFO:LightGBM:Starting to compile with CMake. 
Traceback (most recent call last): File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 95, in silent_call subprocess.check_call(cmd, stderr=log, stdout=log) File "/opt/homebrew/Cellar/[email protected]/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 364, in check_call retcode = call(*popenargs, **kwargs) File "/opt/homebrew/Cellar/[email protected]/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 345, in call with Popen(*popenargs, **kwargs) as p: File "/opt/homebrew/Cellar/[email protected]/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 971, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/opt/homebrew/Cellar/[email protected]/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 1847, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) FileNotFoundError: [Errno 2] No such file or directory: 'cmake' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 334, in <module> setup(name='lightgbm', File "/opt/homebrew/lib/python3.10/site-packages/setuptools/__init__.py", line 87, in setup return distutils.core.setup(**attrs) File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 185, in setup return run_commands(dist) File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 201, in run_commands dist.run_commands() File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 968, in run_commands self.run_command(cmd) File "/opt/homebrew/lib/python3.10/site-packages/setuptools/dist.py", line 1217, in run_command super().run_command(command) File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 987, in run_command cmd_obj.run() File "/opt/homebrew/lib/python3.10/site-packages/wheel/bdist_wheel.py", line 335, in run self.run_command('install') File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 319, in run_command self.distribution.run_command(command) File "/opt/homebrew/lib/python3.10/site-packages/setuptools/dist.py", line 1217, in run_command super().run_command(command) File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 987, in run_command cmd_obj.run() File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 248, in run compile_cpp(use_mingw=self.mingw, use_gpu=self.gpu, use_cuda=self.cuda, use_mpi=self.mpi, File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 198, in compile_cpp silent_call(cmake_cmd, raise_error=True, error_msg='Please install CMake and all required dependencies first') File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 99, in silent_call raise Exception("\n".join((error_msg, LOG_NOTICE))) Exception: Please install CMake and all required dependencies first The full version 
of error log was saved into /Users/me/LightGBM_compilation.log [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for lightgbm Running setup.py clean for lightgbm Failed to build lightgbm Installing collected packages: threadpoolctl, numpy, joblib, scipy, scikit-learn, lightgbm Running setup.py install for lightgbm ... error error: subprocess-exited-with-error × Running setup.py install for lightgbm did not run successfully. │ exit code: 1 ╰─> [45 lines of output] running install /opt/homebrew/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. warnings.warn( INFO:LightGBM:Starting to compile the library. INFO:LightGBM:Starting to compile with CMake. Traceback (most recent call last): File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 95, in silent_call subprocess.check_call(cmd, stderr=log, stdout=log) File "/opt/homebrew/Cellar/[email protected]/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 364, in check_call retcode = call(*popenargs, **kwargs) File "/opt/homebrew/Cellar/[email protected]/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 345, in call with Popen(*popenargs, **kwargs) as p: File "/opt/homebrew/Cellar/[email protected]/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 971, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/opt/homebrew/Cellar/[email protected]/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 1847, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) FileNotFoundError: [Errno 2] No such file or directory: 'cmake' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 334, in <module> setup(name='lightgbm', File "/opt/homebrew/lib/python3.10/site-packages/setuptools/__init__.py", line 87, in setup return distutils.core.setup(**attrs) File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 185, in setup return run_commands(dist) File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 201, in run_commands dist.run_commands() File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 968, in run_commands self.run_command(cmd) File "/opt/homebrew/lib/python3.10/site-packages/setuptools/dist.py", line 1217, in run_command super().run_command(command) File "/opt/homebrew/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 987, in run_command cmd_obj.run() File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 248, in run compile_cpp(use_mingw=self.mingw, use_gpu=self.gpu, use_cuda=self.cuda, use_mpi=self.mpi, File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 198, in compile_cpp silent_call(cmake_cmd, raise_error=True, 
error_msg='Please install CMake and all required dependencies first') File "/private/var/folders/qf/7kpp7kws2bs9ljbxdl6vf9480000gn/T/pip-install-qdmnvrdo/lightgbm_7e71affc27c54e8fb3f78d9ef73bd942/setup.py", line 99, in silent_call raise Exception("\n".join((error_msg, LOG_NOTICE))) Exception: Please install CMake and all required dependencies first The full version of error log was saved into /Users/me/LightGBM_compilation.log [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure × Encountered error while trying to install package. ╰─> lightgbm note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure. There is a workaround for this?
When you run pip install lightgbm and see this message in logs: Building wheels for collected packages: lightgbm it means that there is not a pre-compiled binary (i.e. wheel) available matching your platform (operating system + architecture + Python version), and that LightGBM needs to be built from source. lightgbm is a Python package wrapping lib_lightgbm, a C++ library with a C API. So "built from source" for lightgbm means compiling that C/C++ code, which for LightGBM requires: C and C++ compilers the CMake build system an installation of OpenMP Those components are the "CMake and all required dependencies" referred to in the error message. On macOS, you should already have a C/C++ compiler installed (clang) by default. To get CMake and OpenMP, run the following. brew install cmake libomp NOTE: lightgbm v3.3.3 and older does not support the newest version of OpenMP available as of this writing (v15.x). That was fixed in microsoft/LightGBM#5563. If you end up with OpenMP >=15.0 and lightgbm>=4.0 is not yet available from PyPI, either downgrade OpenMP or build a development version of lightgbm (see "Install from GitHub" in the docs).
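Since the root cause in the traceback is FileNotFoundError: ... 'cmake', it is worth confirming the build tools are actually visible on your PATH before retrying (plain shell commands, nothing lightgbm-specific):
cmake --version
brew install cmake libomp   # only needed if the check above fails
pip3 install lightgbm==3.3.3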
13
30
74,566,749
2022-11-24
https://stackoverflow.com/questions/74566749/no-module-names-src-when-importing-from-parent-folder-in-jupyter-notebook
I have the following folder structure in my project my_project notebook |-- some_notebook.ipynb src |-- preprocess |-- __init__.py |-- some_processing.py __init__.py Now, inside some_notebook.ipynb I simply want to get the methods from some_processing.py. Now we I run from src.preprocess import some_processing from some_notebook.ipynb it always throws ModuleNotFoundError: No module named 'src' I found multiple questions regarding this and played around with sys.path.append(<path-to-src>). But I couldn't solve it. Which path do I provide? Something like ../src didnt work? I checked for example the AlphaFold project from DeepMind and they are using it also with this structure. I tried to replicate exactly like they did. How can I solve this? Which path do I provide in sys.path.append()? I appreciate any help!
I found the answer. Running sys.path.insert(1, os.path.join(sys.path[0], '../src')) inside the notebook made it possible to import anything from the parent src package.
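For completeness, that one-liner assumes os and sys are already imported; a minimal first cell for the notebook (my sketch of the same idea) is:
import os
import sys
sys.path.insert(1, os.path.join(sys.path[0], '../src'))
after which the imports from the source tree resolve in subsequent cells.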
4
3
74,557,655
2022-11-24
https://stackoverflow.com/questions/74557655/python-type-hints-how-to-use-literal-with-strings-to-conform-with-mypy
I want to restrict the possible input arguments by using typing.Literal. The following code works just fine, however, mypy is complaining. from typing import Literal def literal_func(string_input: Literal["best", "worst"]) -> int: if string_input == "best": return 1 elif string_input == "worst": return 0 literal_func(string_input="best") # works just fine with mypy # The following call leads to an error with mypy: # error: Argument "string_input" to "literal_func" has incompatible type "str"; # expected "Literal['best', 'worst']" [arg-type] input_string = "best" literal_func(string_input=input_string)
Unfortunately, mypy does not narrow the type of input_string to Literal["best"]. You can help it with a proper type annotation: input_string: Literal["best"] = "best" literal_func(string_input=input_string) Perhaps worth mentioning that pyright works just fine with your example. Alternatively, the same can be achieved by annotating the input_string as Final: from typing import Final, Literal ... input_string: Final = "best" literal_func(string_input=input_string)
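A third option (my addition, not from the snippets above) is typing.cast, which tells the type checker to treat the plain str as the literal type without doing anything at runtime:
from typing import Literal, cast
input_string = cast(Literal["best", "worst"], "best")
literal_func(string_input=input_string)
Because cast performs no runtime check, it is best reserved for values you already know are valid.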
11
9
74,557,297
2022-11-24
https://stackoverflow.com/questions/74557297/f-string-with-percent-and-fixed-decimals
I know that I can to the following. But is it possible to combine them to give me a percent with fixed decimal? >>> print(f'{0.123:%}') 12.300000% >>> print(f'{0.123:.2f}') 0.12 But what I want is this output: 12.30%
You can specify the number of decimal places before %: >>> f'{0.123:.2%}' '12.30%'
9
19
74,554,325
2022-11-24
https://stackoverflow.com/questions/74554325/how-to-disable-the-header-with-filename-and-date-when-converting-ipynb-to-pdf-w
I am using nbconvert for converting my .ipynb into an .pdf file. When doing so the resulting .pdf file contains a header with the filename and the current date below. How can I disable that? I was looking in the docs but cannot find how to do it. CLI command jupyter nbconvert --to pdf filename.ipynb Actual Wanted
I found some helpful pointers in the docs. Just follow these steps: Run jupyter --paths in your command line. Copy the path that looks like /Users/username/.venv/venvName/share/jupyter (I run nbconvert from a venv; it could be different for you). Go to that path and duplicate the folder latex. Name the copy hide_header or whatever you want. Open base.tex.j2 and delete the line ((* block maketitle *))\maketitle((* endblock maketitle *)). Save. Run jupyter nbconvert --to pdf filename.ipynb --template=hide_header
5
8
74,553,366
2022-11-23
https://stackoverflow.com/questions/74553366/yarl-quoting-c19612-fatal-error-longintrepr-h-file-not-found-1-error-ge
Python version: 3.11 Installing dependencies for an application by pip install -r requirements.txt gives the error below. This error is specific to Python 3.11 version. On Python with 3.10.6 version installation goes fine. Related question: ERROR: Could not build wheels for aiohttp, which is required to install pyproject.toml-based projects Running setup.py install for yarl ... error error: subprocess-exited-with-error × Running setup.py install for yarl did not run successfully.\ │ exit code: 1 ╰─> [45 lines of output] **** * Accellerated build * **** /data/data/com.termux/files/home/folder_for_app/venv/lib/python3.11/site-packages/setuptools/config/setupcfg.py:508: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead. warnings.warn(msg, warning_class) running install /data/data/com.termux/files/home/folder_for_app/venv/lib/python3.11/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. warnings.warn( running build running build_py creating build creating build/lib.linux-armv8l-cpython-311 creating build/lib.linux-armv8l-cpython-311/yarl copying yarl/init.py -> build/lib.linux-armv8l-cpython-311/yarl copying yarl/quoting.py -> build/lib.linux-armv8l-cpython-311/yarl running egg_info writing yarl.egg-info/PKG-INFO writing dependency_links to yarl.egg-info/dependency_links.txt writing requirements to yarl.egg-info/requires.txt writing top-level names to yarl.egg-info/top_level.txt reading manifest file 'yarl.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no previously-included files matching '*.pyc' found anywhere in distribution warning: no previously-included files matching '*.cache' found anywhere in distribution warning: no previously-included files found matching 'yarl/_quoting.html' warning: no previously-included files found matching 'yarl/_quoting.*.so' warning: no previously-included files found matching 'yarl/_quoting.pyd' warning: no previously-included files found matching 'yarl/_quoting.*.pyd' no previously-included directories found matching 'docs/_build' adding license file 'LICENSE' writing manifest file 'yarl.egg-info/SOURCES.txt' copying yarl/init.pyi -> build/lib.linux-armv8l-cpython-311/yarl copying yarl/_quoting.c -> build/lib.linux-armv8l-cpython-311/yarl copying yarl/_quoting.pyx -> build/lib.linux-armv8l-cpython-311/yarl copying yarl/py.typed -> build/lib.linux-armv8l-cpython-311/yarl running build_ext building 'yarl._quoting' extension creating build/temp.linux-armv8l-cpython-311 creating build/temp.linux-armv8l-cpython-311/yarl arm-linux-androideabi-clang -mfloat-abi=softfp -mfpu=vfpv3-d16 -DNDEBUG -g -fwrapv -O3 -Wall -march=armv7-a -mfpu=neon -mfloat-abi=softfp -mthumb -fstack-protector-strong -O3 -march=armv7-a -mfpu=neon -mfloat-abi=softfp -mthumb -fstack-protector-strong -O3 -fPIC -I/data/data/com.termux/files/home/folder_for_app/venv/include -I/data/data/com.termux/files/usr/include/python3.11 -c yarl/_quoting.c -o build/temp.linux-armv8l-cpython-311/yarl/_quoting.o yarl/_quoting.c:196:12: fatal error: 'longintrepr.h' file not found #include "longintrepr.h" ^~~~~~~ 1 error generated. error: command '/data/data/com.termux/files/usr/bin/arm-linux-androideabi-clang' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure × Encountered error while trying to install package. 
╰─> yarl note: This is an issue with the package mentioned above, not pip.
Solution for this error: need to update requirements.txt. Not working versions of modules with Python 3.11: yarl==1.4.2 frozenlist==1.3.0 aiohttp==3.8.1 Working versions: yarl==1.8.1 frozenlist==1.3.1 aiohttp==3.8.2 Links to the corresponding issues with fixes: https://github.com/aio-libs/yarl/issues/706 https://github.com/aio-libs/frozenlist/issues/305 https://github.com/aio-libs/aiohttp/issues/6600
5
4
74,551,529
2022-11-23
https://stackoverflow.com/questions/74551529/raspi-pico-w-errno-98-eaddrinuse-despite-using-socket-so-reuseaddr
I'm trying to set up a simple server/client connection using the socket module on a Raspberry Pi Pico W running the latest nightly build image rp2-pico-w-20221123-unstable-v1.19.1-713-g7fe7c55bb.uf2 which I've downloaded from https://micropython.org/download/rp2-pico-w/ The following code runs fine for the first connection (network connectivity can be assumed at this point). import socket def await_connection(): print(' >> Awaiting connection ...') try: host_addr = socket.getaddrinfo('0.0.0.0', 46321)[0][-1] sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) sock.bind(host_addr) sock.listen(1) con, addr = sock.accept() while True: # keep receiving commands on the open connection until the client stops sending cmd = con.recv(1024) if not cmd: print(f' >> {addr} disconnected') break elif cmd.decode() == 'foo': response = 'bar'.encode() else: response = cmd print(f"Received {cmd.decode()}") print(f"Returning {response.decode()}") con.sendall(response) except OSError as e: print(f' >> ERROR: {e}') finally: # appearantly, context managers are currently not supported in MicroPython, therefore the connection is closed manually con.close() print(' >> Connection closed.') while True: # main loop, causing the program to await a new connection as soon as the previous one is closed await_connection() If the client closes the connection and tries to re-connect, the infamous [Errno 98] EADDRINUSE is thrown: Please note that I've already implemented the sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) statement, as recommended here, but to no avail. However, if I run the exact same code on a Raspberry Pi 3 B with python 3.7.3 within the same network, everything works as expected - the client can disconnect and reconnect multiple times without issues: How do I get the Pico to reuse the address after the initial connection, just like it is working in python 3.7.3?
While I was able to mitigate the reconnection crash by adding a sock.close() statement after the con.close(), the main issue with my code was the structure itself, as Steffen Ullrich pointed out. The actual fix was to move the operations on the sock object out of the loop. import socket def await_connection(): print(' >> Awaiting connection ...') try: con, addr = sock.accept() while True: # keep receiving commands on the open connection until the client stops sending cmd = con.recv(1024) if not cmd: print(f' >> {addr} disconnected') break else: response = cmd print(f"Received {cmd.decode()}") print(f"Returning {response.decode()}") con.sendall(response) except OSError as e: print(f' >> ERROR: {e}') finally: # apparently, context managers are currently not supported in MicroPython, therefore the connection is closed manually con.close() print(' >> Connection closed.') host_addr = socket.getaddrinfo('0.0.0.0', 46321)[0][-1] sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) sock.bind(host_addr) sock.listen(1) while True: # main loop, causing the program to await a new connection as soon as the previous one is closed await_connection()
3
4
74,550,830
2022-11-23
https://stackoverflow.com/questions/74550830/error-could-not-build-wheels-for-aiohttp-which-is-required-to-install-pyprojec
Python version: 3.11 Installing dependencies for an application by pip install -r requirements.txt gives the following error: socket.c -o build/temp.linux-armv8l-cpython-311/aiohttp/_websocket.o aiohttp/_websocket.c:198:12: fatal error: 'longintrepr.h' file not found #include "longintrepr.h" ^~~~~~~ 1 error generated. error: command '/data/data/com.termux/files/usr/bin/arm-linux-androideabi-clang' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for aiohttp Failed to build aiohttp ERROR: Could not build wheels for aiohttp, which is required to install pyproject.toml-based projects This error is specific to Python 3.11 version. On Python with 3.10.6 version installation goes fine. Related question: yarl/_quoting.c:196:12: fatal error: 'longintrepr.h' file not found - 1 error generated
Solution for this error: need to update requirements.txt. Not working versions of modules with Python 3.11: aiohttp==3.8.1 yarl==1.4.2 frozenlist==1.3.0 Working versions: aiohttp==3.8.2 yarl==1.8.1 frozenlist==1.3.1 Links to the corresponding issues with fixes: https://github.com/aio-libs/aiohttp/issues/6600 https://github.com/aio-libs/yarl/issues/706 https://github.com/aio-libs/frozenlist/issues/305
30
20
74,548,693
2022-11-23
https://stackoverflow.com/questions/74548693/why-do-python-variables-of-same-value-point-to-the-same-memory-address
I ran into an interesting case today wherein a = 10 b = 10 print (a is b) logged out True. I did some searching and came across the concept of interning. Now that explains why True is correct for the range [-5, 256]. However, I get the same results even while using floats. Please help me understand why. Here is the part I don't get- a = 1000.00 b = 999.99 + 0.01 print (a is b) # Output was 'True' I expect the output to be False since a's value is assigned before running the program, whereas b's value is determined at run-time as a result of expression evaluation and hence should have a different memory address. I'd also appreciate it if you could point out a case where a==b is True, but a is b evaluates to False where both a and b are of type float
What you are looking at in your case is called "constant folding". That's an implementation detail, not a language specification - meaning there is no guarantee that this behaviour will remain the same and you should not rely on it in your code. But, in general, it comes down to the fact that things that can be calculated "statically" are optimised away to their simplest form before the code is actually run. Take a look at the output of the following snippet: import dis def f(): return 0.1 + 0.9 def g(): a = 0.1 b = 0.9 return a+b dis.dis(f) dis.dis(g) You will notice that there is no addition performed in f, even though looking at the code the addition is clearly there. As for two same-valued variables having different memory addresses, it's not very difficult - any calculation that cannot be optimised away before runtime will give you variables at different memory addresses if they are not within the interning range. a = 1 b = 1 for _ in range(1000): a += 1 b += 1 print(f"{id(a) = }") print(f"{id(b) = }") print(f"{a == b = }") print(f"{a is b = }")
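To answer the last part of the question directly, here is a float case where a == b is True but a is b is False (my example; it works because the additions happen at run time on a variable, so nothing can be folded):
x = 999.99
a = x + 0.01
b = x + 0.01
print(a == b)  # True
print(a is b)  # False - two distinct float objects are created at run time
Floats are never interned, so any value that is actually computed at run time ends up in its own object.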
3
5
74,548,343
2022-11-23
https://stackoverflow.com/questions/74548343/inner-merge-two-dataframes-on-string-partial-match
We have the following two data frames temp = pd.DataFrame(np.array([['I am feeling very well',1],['It is hard to believe this happened',0], ['What is love?',1], ['No new friends',0], ['I love this show',1],['Amazing day today',1]]), columns = ['message','sentiment']) temp_truncated = pd.DataFrame(np.array([['I am feeling very',1],['It is hard to believe',1], ['What is',1], ['Amazing day',1]]), columns = ['message','cutoff']) My idea is to create a third DataFrame that would represent the inner join between temp and temp_truncated by finding matches in temp that start with / contain the strings in temp_truncated Desired Output: message sentiment cutoff 0 I am feeling very well 1 1 1 It is hard to believe this happened 0 1 2 What is love 1 1 3 Amazing day today 1 1
You can use: import re pattern = '|'.join(map(re.escape, temp_truncated['message'])) key = temp['message'].str.extract(f'({pattern})', expand=False) out = (temp .merge(temp_truncated.rename(columns={'message': 'sub'}), left_on=key, right_on='sub') .drop(columns='sub') ) Output: message sentiment cutoff 0 I am feeling very well 1 1 1 It is hard to believe this happened 0 1 2 What is love? 1 1 3 Amazing day today 1 1
4
5
74,547,365
2022-11-23
https://stackoverflow.com/questions/74547365/how-to-reduce-cognitive-complexity-in-this-python-method
I am faced with a challenge. I have an Python method implemented and the SonarLint plugin of my PyCharm warns me with the message: "Refactor this function to reduce its Cognitive Complexity from 19 to the 15 allowed." but I can't see how to reduce the complexity. My Python method is: def position(key): if key == 'a': return 0 elif key == 'b': return 1 elif key == 'c': return 2 elif key == 'd': return 3 elif key == 'e': return 4 elif key == 'f': return 5 elif key == 'g': return 6 elif key == 'h': return 7 elif key == 'i': return 8 elif key == 'j': return 9 elif key == 'k': return 10 elif key == 'l': return 11 elif key == 'm': return 12 elif key == 'n': return 13 elif key == 'ñ': return 14 elif key == 'o': return 15 elif key == 'p': return 16 elif key == 'q': return 17 else: logger.info('error') And the warning of SonarLint is: And if I click on show issue locations it gives me the explanation of how the Cognitive Complexity is calculated: I can't see how to reduce the complex of this function. I know that I can implement another method with the same behaviour using things like the ascii code, but it's not the point of this question. The summary of the question is how can I follow the suggestion of SonarLint, I mean, how can I reduce the Cognitive Complexity from 19 to the 15 of this particular method. Something I've noticed is that if I remove elif statements until I have only 14 characters cases, the warning magically disappears.
I've found what's happening. The SonarLint plugin has a configurable maximum Cognitive Complexity, as you can see in this capture: So SonarLint isn't saying that you can simplify your method; it is telling you that this method is more complex than the limit that SonarLint has set. This is why, if I delete elif branches until I reach the magic number of 15 for the Cognitive Complexity, the warning magically disappears. ¡¡WARNING!! It is not recommended to increase SonarLint's Cognitive Complexity limit. The most advisable option is to refactor your method and find another way to do the same thing. In my case I've implemented the following method instead (there are other, shorter solutions, one of which is sketched below, but because I have to use the Spanish alphabet I decided to use what is, for me, the most readable solution): def position(key: str) -> int | None: char_map = { 'a': 0, 'b': 1, 'c': 2, 'd': 3, 'e': 4, 'f': 5, 'g': 6, 'h': 7, 'i': 8, 'j': 9, 'k': 10, 'l': 11, 'm': 12, 'n': 13, 'ñ': 14, 'o': 15, 'p': 16, 'q': 17, 'r': 18, 's': 19, 't': 20, 'u': 21, 'v': 22, 'w': 23, 'x': 24, 'y': 25, 'z': 26 } try: output = char_map[key] except KeyError: logger.error('KeyError in position() with the key: ' + str(key)) return None return output
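For reference, a minimal sketch of one such shorter alternative (my own illustration, not part of the answer above; it assumes Python 3.10+ for the int | None annotation and omits logging for brevity). It derives the index from the character's position in an alphabet string, so there are no branches at all and the Cognitive Complexity concern disappears:

# The alphabet string is an assumption: the Spanish alphabet in the same
# order as the char_map above, so 'ñ' maps to 14 and 'z' to 26.
SPANISH_ALPHABET = "abcdefghijklmnñopqrstuvwxyz"

def position(key: str) -> int | None:
    # str.find returns -1 for a missing character instead of raising,
    # so the error case collapses to a single comparison.
    index = SPANISH_ALPHABET.find(key)
    return index if index != -1 else None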
3
1
74,544,539
2022-11-23
https://stackoverflow.com/questions/74544539/python-how-to-check-what-types-are-in-defined-types-uniontype
I am using Python 3.11 and I would need to detect if an optional class attribute is type of Enum (i.e. type of a subclass of Enum). With typing.get_type_hints() I can get the type hints as a dict, but how to check if a field's type is optional Enum (subclass)? Even better if I could get the type of any optional field regardless is it Optional[str], Optional[int], Optional[Class_X], etc. Example code from typing import Optional, get_type_hints from enum import IntEnum, Enum class TestEnum(IntEnum): foo = 1 bar = 2 class Foo(): opt_enum : TestEnum | None = None types = get_type_hints(Foo)['opt_enum'] This works (ipython) In [4]: Optional[TestEnum] == types Out[4]: True These ones fail (yes, these are desperate attempts) In [6]: Optional[IntEnum] == types Out[6]: False and In [11]: issubclass(Enum, types) Out[11]: False and In [12]: issubclass(types, Enum) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In [12], line 1 ----> 1 issubclass(types, Enum) TypeError: issubclass() arg 1 must be a class and In [13]: issubclass(types, Optional[Enum]) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In [13], line 1 ----> 1 issubclass(types, Optional[Enum]) File /usr/lib/python3.10/typing.py:1264, in _UnionGenericAlias.__subclasscheck__(self, cls) 1262 def __subclasscheck__(self, cls): 1263 for arg in self.__args__: -> 1264 if issubclass(cls, arg): 1265 return True TypeError: issubclass() arg 1 must be a class and In [7]: IntEnum in types --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In [7], line 1 ----> 1 IntEnum in types TypeError: argument of type 'types.UnionType' is not iterable Why I needed this I have several cases where I am importing data from csv files and creating objects of a class from each row. csv.DictReader() returns a dict[str, str] and I need to fix the types for the fields before attempting to create the object. However, some of the object fields are Optional[int], Optional[bool], Optional[EnumX] or Optional[ClassX]. I have several of those classes multi-inheriting my CSVImportable() class/interface. I want to implement the logic once into CSVImportable() class instead of writing roughly same code in field-aware way in every subclass. This CSVImportable._field_type_updater() should: correctly change the types at least for basic types and enums Gracefully skip Optional[ClassX] fields Naturally I am thankful for better designs too :-)
When you are dealing with a parameterized type (generic or special like typing.Optional), you can inspect it via get_args/get_origin. Doing that you'll see that T | S is implemented slightly differently than typing.Union[T, S]. The origin of the former is types.UnionType, while that of the latter is typing.Union. Unfortunately this means that to cover both variants, we need two distinct checks. from types import UnionType from typing import Union, get_origin def is_union(t: object) -> bool: origin = get_origin(t) return origin is Union or origin is UnionType Using typing.Optional just uses typing.Union under the hood, so the origin is the same. Here is a working demo: from enum import IntEnum from types import UnionType from typing import Optional, get_type_hints, get_args, get_origin, Union class TestEnum(IntEnum): foo = 1 bar = 2 class Foo: opt_enum1: TestEnum | None = None opt_enum2: Optional[TestEnum] = None opt_enum3: TestEnum opt4: str def is_union(t: object) -> bool: origin = get_origin(t) return origin is Union or origin is UnionType if __name__ == "__main__": for name, type_ in get_type_hints(Foo).items(): if type_ is TestEnum or is_union(type_) and TestEnum in get_args(type_): print(name, "accepts TestEnum") Output: opt_enum1 accepts TestEnum opt_enum2 accepts TestEnum opt_enum3 accepts TestEnum
3
5
74,543,989
2022-11-23
https://stackoverflow.com/questions/74543989/why-do-we-need-try-finally-when-using-contextmanager-decorator
I wonder why we need to use a try-finally when using the @contextmanager decorator. The provided example suggests: from contextlib import contextmanager @contextmanager def managed_resource(*args, **kwds): resource = acquire_resource(*args, **kwds) try: yield resource finally: release_resource(resource) It seems to me, however, that this will do the exact same thing: @contextmanager def managed_resource(*args, **kwds): resource = acquire_resource(*args, **kwds) yield resource release_resource(resource) I'm sure I must be missing something. What am I missing?
Because a finally block is guaranteed to run no matter how the try block exits (short of something like a power outage or the process being killed). If the body of the with block raises an exception, that exception is raised inside the generator at the yield point, so in the simplified version release_resource(resource) would never be called. Writing it with try/finally guarantees that the resource is always released, even when the with block raises.
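A minimal, self-contained sketch of that failure mode (illustrative names only, not from the question):

from contextlib import contextmanager

@contextmanager
def managed_resource():
    print("acquired")
    yield "resource"      # an exception from the with-block is raised here
    print("released")     # skipped when the with-block raises

try:
    with managed_resource() as r:
        raise RuntimeError("boom")
except RuntimeError:
    pass
# "released" is never printed, i.e. the cleanup was skipped;
# wrapping the yield in try/finally makes it run in every case.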
3
3
74,511,042
2022-11-20
https://stackoverflow.com/questions/74511042/one-connection-to-db-for-app-or-a-connection-on-every-execution
I'm using the psycopg2 library to connect to my PostgreSQL database. Every time I want to execute any query, I make a new connection like this: import psycopg2 def run_query(query): with psycopg2.connect("dbname=test user=postgres") as connection: cursor = connection.cursor() cursor.execute(query) cursor.close() But I think it's faster to make one connection for the whole app execution, like this: import psycopg2 connection = psycopg2.connect("dbname=test user=postgres") def run_query(query): cursor = connection.cursor() cursor.execute(query) cursor.close() So which is the better way to connect to my database for the whole execution time of my app? I've tried both ways and both worked, but I want to know which is better and why.
You should strongly consider using a connection pool, as other answers have suggested, this will be less costly than creating a connection every time you query, as well as deal with workloads that one connection alone couldn't deal with. Create a file called something like mydb.py, and include the following: import psycopg2 import psycopg2.pool from contextlib import contextmanager dbpool = psycopg2.pool.ThreadedConnectionPool(host=<<YourHost>>, port=<<YourPort>>, dbname=<<YourDB>>, user=<<YourUser>>, password=<<YourPassword>>, ) @contextmanager def db_cursor(): conn = dbpool.getconn() try: with conn.cursor() as cur: yield cur conn.commit() # You can have multiple exception types here. # For example, if you wanted to specifically check for the # 23503 "FOREIGN KEY VIOLATION" error type, you could do: # except psycopg2.Error as e: # conn.rollback() # if e.pgcode = '23503': # raise KeyError(e.diag.message_primary) # else # raise Exception(e.pgcode) except: conn.rollback() raise finally: dbpool.putconn(conn) This will allow you run queries as so: import mydb def myfunction(): with mydb.db_cursor() as cur: cur.execute("""Select * from blahblahblah...""")
3
6
74,537,026
2022-11-22
https://stackoverflow.com/questions/74537026/execute-function-specifically-on-cpu-in-jax
I have a function that will instantiate a huge array and do other things. I am running my code on TPUs so my memory is limited. How can I execute my function specifically on the CPU? If I do: y = jax.device_put(my_function(), device=jax.devices("cpu")[0]) I guess that my_function() is first executed on TPU and the result is put on CPU, which gives me memory error. and using jax.config.update('jax_platform_name', 'cpu') at the beginning of my code seems to have no effect. Also please note that I can't modify my_function() Thanks!
To directly specify the device on which a function should be executed, use the device argument of jax.jit. For example (using a GPU runtime because it's the accelerator I have access to at the moment): import jax gpu_device = jax.devices('gpu')[0] cpu_device = jax.devices('cpu')[0] def my_function(x): return x.sum() x = jax.numpy.arange(10) x_gpu = jax.jit(my_function, device=gpu_device)(x) print(x_gpu.device()) # gpu:0 x_cpu = jax.jit(my_function, device=cpu_device)(x) print(x_cpu.device()) # TFRT_CPU_0 This can also be controlled with the jax.default_device decorator around the call-site: with jax.default_device(cpu_device): print(jax.jit(my_function)(x).device()) # TFRT_CPU_0 with jax.default_device(gpu_device): print(jax.jit(my_function)(x).device()) # gpu:0
4
4
74,534,284
2022-11-22
https://stackoverflow.com/questions/74534284/anotate-return-type-with-psycopg2-type-stub
I have a function which returns a psycopg2 connection, if a connection can be established. So the return type should be Optional[psycopg2.connection], or psycopg2.connection | None. However I am unable to import psycopg2.connection at runtime. I've tried the workaround mentioned in How can I import type-definitions from a typeshed stub-file? but that gives me this mypy error: Single overload definition, multiple required. Here's my code import psycopg2 from typing import Optional, TYPE_CHECKING, overload if TYPE_CHECKING: from psycopg2 import connection @overload def get_connection() -> Optional[connection]: ... # Make DB error logging less spammy has_logged_error = False def get_connection(): try: conn = psycopg2.connect( dbname=settings.db_name, user=settings.db_user, password=settings.db_password, host=settings.db_host, port=settings.db_port, ) return conn except Exception as e: global has_logged_error if not has_logged_error: logger.error(f"Error connecting to DB: {e}") has_logged_error = True return
The question you linked proposes some extremely dirty hack which doesn't seem to work any more. There is absolutely no need for it under such simple circumstances. Moreover, to be honest, I cannot reproduce that solution on any mypy version starting from 0.800 (old enough, given that the linked answer is recent), so that perhaps never worked. I reduced your code samples to contain only minimal return for the sake of readability. Variant 1: use conditional import and string annotation import psycopg2 from typing import Optional, TYPE_CHECKING if TYPE_CHECKING: from psycopg2 import connection def get_connection() -> Optional['connection']: return psycopg2.connect(...) This is simple: mypy known what connection is (defined in stubs); runtime does not try to learn something about connection because it sees simply a string. Variant 2: use conditional import and annotations future from __future__ import annotations import psycopg2 from typing import Optional, TYPE_CHECKING if TYPE_CHECKING: from psycopg2 import connection def get_connection() -> Optional[connection]: return psycopg2.connect(...) Docs for future imports. This is very similar to direct string usage, but looks nicer and is more convenient, IMO. Variant 3: use string annotation, but avoid conditional import import psycopg2 from typing import Optional def get_connection() -> Optional['psycopg2.connection']: return psycopg2.connect(...) Variant 4: use annotations future, but avoid conditional import from __future__ import annotations import psycopg2 from typing import Optional def get_connection() -> Optional[psycopg2.connection]: return psycopg2.connect(...) Variants 3 and 4 do not expose that connection is stub-only, hiding it as implementation detail. You may prefer to state that explicitly - then use 1 or 2. Modification to use current features This is my favorite. Union syntax is valid in python 3.10+, so if you use an older one - you may want to stick with Optional as described above for consistency. However, annotations future-import makes this expression effectively a string, so if your tools do not perform any runtime typing introspection - you can still use pipe union syntax on older versions. Just be aware that typing.get_type_hints and similar utilities will fail with this syntax on python before 3.10. from __future__ import annotations import psycopg2 def get_connection() -> psycopg2.connection | None: return psycopg2.connect(...)
3
6
74,529,728
2022-11-22
https://stackoverflow.com/questions/74529728/shapelydeprecationwarnings-and-the-use-of-geoms
Some lines to look up geographical information by given pair of coordinates, referenced from https://gis.stackexchange.com/questions/254869/projecting-google-maps-coordinate-to-lookup-country-in-shapefile. import geopandas as gpd from shapely.geometry import Point pt = Point(8.7333333, 53.1333333) # countries shapefile from # http://thematicmapping.org/downloads/world_borders.php folder = 'C:\\My Documents\\' data = gpd.read_file(folder + 'TM_WORLD_BORDERS-0.3.shp') for index, row in data.iterrows(): poly = row['geometry'] if poly.contains(pt): print (row) # ---------- Print out as ----------------------------------- FIPS GM ISO2 DE ISO3 DEU UN 276 NAME Germany AREA 34895 POP2005 82652369 REGION 150 SUBREGION 155 LON 9.851 LAT 51.11 geometry (POLYGON ((8.710255000000018 47.69680799999997... Name: 71, dtype: object It works but prints out paragraphs of ShapelyDeprecationWarnings: C:\Python38\lib\site-packages\pandas\core\dtypes\inference.py:384: ShapelyDeprecationWarning: Iteration over multi-part geometries is deprecated and will be removed in Shapely 2.0. Use the `geoms` property to access the constituent parts of a multi-part geometry. iter(obj) # Can iterate over it. C:\Python38\lib\site-packages\pandas\core\dtypes\inference.py:385: ShapelyDeprecationWarning: __len__ for multi-part geometries is deprecated and will be removed in Shapely 2.0. Check the length of the `geoms` property instead to get the number of parts of a multi-part geometry. len(obj) # Has a length associated with it. C:\Python38\lib\site-packages\pandas\io\formats\printing.py:120: ShapelyDeprecationWarning: Iteration over multi-part geometries is deprecated and will be removed in Shapely 2.0. Use the `geoms` property to access the constituent parts of a multi-part geometry. s = iter(seq) C:\Python38\lib\site-packages\pandas\io\formats\printing.py:124: ShapelyDeprecationWarning: __len__ for multi-part geometries is deprecated and will be removed in Shapely 2.0. Check the length of the `geoms` property instead to get the number of parts of a multi-part geometry. for i in range(min(nitems, len(seq))) C:\Python38\lib\site-packages\pandas\io\formats\printing.py:128: ShapelyDeprecationWarning: __len__ for multi-part geometries is deprecated and will be removed in Shapely 2.0. Check the length of the `geoms` property instead to get the number of parts of a multi-part geometry. if nitems < len(seq): To update the above code to avoid ShapelyDeprecationWarnings, I've tried to replace "for index, row in data.iterrows()" to "for index, row in data.geoms", and "poly = row['geometry']" to "poly = row.geoms". Neither worked. What's the right way to update the code to avoid ShapelyDeprecationWarnings?
The data contains POLYGONs and MULTIPOLYGONs. Apparently there have been changes to Shapely's MULTIPOLYGON API which have not been fully integrated into GeoPandas yet. data.geometry > 0 MULTIPOLYGON (((-61.68667 17.02444, -61.88722 ... > 1 POLYGON ((2.96361 36.80222, 4.78583 36.89472, ... > ... The mentioned errors appear when a MULTIPOLYGON is printed. Solution Exclude the geometry column when printing a row: import geopandas as gpd from shapely.geometry import Point pt = Point(8.7333333, 53.1333333) # countries shapefile from # http://thematicmapping.org/downloads/world_borders.php # data = gpd.read_file('file.shp') for index, row in data.iterrows(): poly = row['geometry'] if poly.contains(pt): print(data.drop(columns='geometry') .iloc[[row.name]] .iloc[0]) Old answer The geoms property only exists on MULTIPOLYGONS. Thus you have to adjust your code to deal with this accordingly: for index, row in data.iterrows(): results = [] if hasattr(row.geometry, "geoms"): for part in row.geometry.geoms: if part.contains(pt): results.append(row) elif row.geometry.contains(pt): results.append(row) else: pass for row in results: print(data.drop(columns='geometry').iloc[[row.name]].iloc[0])
3
3
74,467,521
2022-11-16
https://stackoverflow.com/questions/74467521/how-to-get-foreign-key-attribute-and-many-to-many-attribute-of-a-model-instanc
In asynchronous queries, I want to get foreign key and many to many attributes of a model instance. In a simple example, I want to print university and courses for all instances of the model Student. models.py: from django.db import models class University(models.Model): name = models.CharField(max_length=64) class Course(models.Model): name = models.CharField(max_length=64) class Student(models.Model): name = models.CharField(max_length=64) university = models.ForeignKey(to=University, on_delete=models.CASCADE) courses = models.ManyToManyField(to=Course) when I use this code (in django 4.1): import asyncio async def main(): async for student in Student.objects.all(): print(student.name) print(student.university.name) for course in student.courses.all(): print(course.name) asyncio.run(main()) I get the following error: django.core.exceptions.SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async. How can I fix this error?
For getting foreign key and many to many attributes (for django 4.1 or higher) in async queries: Foreign key attribute: There are two options for getting foreign key attribute: 1- async for student in Student.objects.all(): university = await University.objects.aget(id=student.university_id) print(university.name) 2- async for student in Student.objects.select_related("university").all(): print(student.university.name) Many to Many attribute: async for student in Student.objects.all(): async for course in student.courses.all(): print(course.name)
3
2
74,507,306
2022-11-20
https://stackoverflow.com/questions/74507306/fastapi-returns-error-422-unprocessable-entity-when-i-send-multipart-form-dat
I have some issue with using Fetch API JavaScript method when sending some simple formData like so: function register() { var formData = new FormData(); var textInputName = document.getElementById('textInputName'); var sexButtonActive = document.querySelector('#buttonsMW > .btn.active'); var imagesInput = document.getElementById('imagesInput'); formData.append('name', textInputName.value); if (sexButtonActive != null){ formData.append('sex', sexButtonActive.html()) } else { formData.append('sex', ""); } formData.append('images', imagesInput.files[0]); fetch('/user/register', { method: 'POST', data: formData, }) .then(response => response.json()); } document.querySelector("form").addEventListener("submit", register); And on the server side (FastAPI): @app.post("/user/register", status_code=201) def register_user(name: str = Form(...), sex: str = Form(...), images: List[UploadFile] = Form(...)): try: print(name) print(sex) print(images) return "OK" except Exception as err: print(err) print(traceback.format_exc()) return "Error" After clicking on the submit button I get Error 422: Unprocessable entity. So, if I'm trying to add header Content-Type: multipart/form-data, it also doesn't help cause I get another Error 400: Bad Request. I want to understand what I am doing wrong, and how to process formData without such errors?
The 422 error response body will contain an error message about which field(s) is missing or doesn’t match the expected format. Since you haven't provided that (please do so), my guess is that the error is triggered due to how you defined the images parameter in your endpoint. Since images is expected to be a List of File(s), you should instead define it using the File type instead of Form. For example: images: List[UploadFile] = File(...) ^^^^ When using UploadFile, you don't have to use File() in the default value of the parameter, meaning that the below should work as well: images: List[UploadFile] Hence, the FastAPI endpoint should look similar to this: @app.post("/user/register") async def register_user(name: str = Form(...), images: List[UploadFile] = File(...)): pass In the frontend, make sure to use the body (not data) parameter in the fetch() function to pass the FormData object (see example in MDN Web Docs). For instance: var nameInput = document.getElementById('nameInput'); var imagesInput = document.getElementById('imagesInput'); var formData = new FormData(); formData.append('name', nameInput.value); for (const file of imagesInput.files) formData.append('images', file); fetch('/user/register', { method: 'POST', body: formData, }) .then(response => { console.log(response); }) .catch(error => { console.error(error); }); Please have a look at this answer, as well as this answer, which provide working examples on how to upload multiple files and form data to a FastAPI backend, using Fetch API in the frontend. As for manually specifying the Content-Type when sending multipart/form-data, you don't have to (and shouldn't) do that, but rather let the browser set the Content-Type—please take a look at this answer for more details.
3
2
74,481,613
2022-11-17
https://stackoverflow.com/questions/74481613/how-to-unstack-a-dataset-to-a-certain-dataframe
I have a dataset like this data = {'weight': ['NaN',2,3,4,'NaN',6,7,8,9,'NaN',11,12,13,14,15], 'MI': ['NaN', 21, 19, 18, 'NaN',16,15,14,13,'NaN',11,10,9,8,7]} df = pd.DataFrame(data, index= ['group1', "gene1", "gene2", 'gene3', 'group2', "gene1", 'gene21', 'gene4', 'gene7', 'group3', 'gene2', 'gene10', 'gene3', 'gene43', 'gene1']) I need to reshape it into a gene-by-group dataframe with the MI values. If a gene has no value for a particular group, the imputed value should be 0.1. The 'weight' column should be removed. The final dataframe should look like this
You can use: m = df['weight'].ne('NaN') (df[m] .set_index((~m).cumsum()[m], append=True)['MI'] .unstack('weight', fill_value=0.1) .add_prefix('group') ) Variant with pivot: m = df['weight'].ne('NaN') (df.assign(col=(~m).cumsum()) .loc[m] .pivot(columns='col', values='MI') .fillna(0.1) .add_prefix('group') ) Output: weight group1 group2 group3 gene1 21 16 7 gene10 0.1 0.1 10 gene2 19 0.1 11 gene21 0.1 15 0.1 gene3 18 0.1 9 gene4 0.1 14 0.1 gene43 0.1 0.1 8 gene7 0.1 13 0.1 update: keeping the original group names and lexicographic gene order from natsort import natsorted m = df['weight'].ne('NaN') grp = df.index.to_series().mask(m).ffill()[m] out = (df[m] .set_index(grp, append=True)['MI'] .unstack(-1, fill_value=0.1) .loc[natsorted(df.index[m].unique())] ) print(out) Output: group1 group2 group3 gene1 21 16 7 gene2 19 0.1 11 gene3 18 0.1 9 gene4 0.1 14 0.1 gene7 0.1 13 0.1 gene10 0.1 0.1 10 gene21 0.1 15 0.1 gene43 0.1 0.1 8
3
5
74,502,898
2022-11-19
https://stackoverflow.com/questions/74502898/polars-select-columns-not-exist-with-no-error
Is it possible to select a potentially non-existent column from a polars dataframe without exceptions (return a column with default values or null/None)? The behavior I really want can be shown in the example as follows: import polars as pl df1 = pl.DataFrame({"id": [1, 2, 3], "bar": ["sugar", "ham", "spam"]}) df2 = pl.DataFrame({"id": [4, 5, 6], "other": ["a", "b", "b"]}) df1.write_csv("df1.csv") df2.write_csv("df2.csv") df = pl.scan_csv("df*.csv").select(["id", "bar"]) res = df.collect() Now, if I run the code above, I will get an error since df2.csv does not contain the column "bar". The result I want is for res to contain just the contents of df1.csv; the dataframe from df2.csv should simply not be selected, because it has no "bar" column.
As already mentioned in the comments above, this functionality doesn't exist in polars, but we can construct a function which fulfils your needs: import glob import polars as pl def scan_csv_with_columns(file: str, needed_colnames: list[str]) -> pl.LazyFrame: file_collector = [] for filename in glob.glob(file): df_scan = pl.scan_csv(filename) if (df_scan.columns == needed_colnames): file_collector.append(df_scan) df = pl.concat(file_collector, how="vertical") return df file = "df*.csv" needed_colnames = ["id", "bar"] df = scan_csv_with_columns(file, needed_colnames) df.collect() shape: (3, 2) ┌─────┬───────┐ │ id ┆ bar │ │ --- ┆ --- │ │ i64 ┆ str │ ╞═════╪═══════╡ │ 1 ┆ sugar │ │ 2 ┆ ham │ │ 3 ┆ spam │ └─────┴───────┘
3
2
74,498,191
2022-11-19
https://stackoverflow.com/questions/74498191/how-to-define-multiple-api-endpoints-in-fastapi-with-different-paths-but-the-sam
I'm working on a project which uses FastAPI. My router file looks like the following: # GET API Endpoint 1 @router.get("/project/{project_id}/{employee_id}") async def method_one( project_id: str, organization_id: str, session: AsyncSession = Depends(get_db) ): try: return await CustomController.method_one( session, project_id, employee_id ) except Exception as e: return custom_exception_handler(e) # GET API Endpoint 2 @router.get("/project/details/{project_id}") async def method_two( project_id: str, session: AsyncSession = Depends(get_db) ): try: return await CustomController.method_two( session=session, project_id=project_id ) except Exception as e: return custom_exception_handler(e) # GET API Endpoint 3 @router.get("/project/metadata/{project_id}") async def method_three( project_id: str, session: AsyncSession = Depends(get_db) ): try: return await CustomController.method_three( session=session, project_id=project_id ) except Exception as e: return custom_exception_handler(e) The obvious expectation of workflow here is: when each of these API endpoints are triggered with their required path parameters, the controller method is executed, as defined in their body. However, for some strange reason, when API endpoints 2 and 3 are triggered, they are executing the controller method in endpoint 1, i.e., CustomController.method_one(). Upon adding some print() statements in the method method_one() of the router, I've observed that method_one() is being called when API endpoint 2 is called, while it is actually supposed to call method_two() in the router. Same is the case with API endpoint 3. I'm unable to understand why the method body of method_one() is getting executed, when API endpoints 2 and 3 are triggered. Am I missing out something on configuration, or something - can someone please correct me? Thanks!
In FastAPI, as described in this answer, because endpoints are evaluated in order (see FastAPI's about how order matters), it makes sure that the endpoint you defined first in your app—in this case, that is, /project/{project_id}/...—will be evaluated first. Hence, every time you call one of the other two endpoints, i.e., /project/details/... and /project/metadata/..., the first endpoint is triggered, using details or metadata as the project_id parameter. Solution Thus, you need to make sure that the other two endpoints are declared before the one for /project/{project_id}/.... For example: # GET API Endpoint 1 @router.get("/project/details/{project_id}") # ... # GET API Endpoint 2 @router.get("/project/metadata/{project_id}") # ... # GET API Endpoint 3 @router.get("/project/{project_id}/{employee_id}") # ...
4
5
74,510,279
2022-11-20
https://stackoverflow.com/questions/74510279/pyright-cant-see-poetry-dependencies
In a poetry project the local dependencies are installed in the ~/.cache/pypoetry/virtualenvs/ folder. Pyright in nvim is complaining that import package lines can't be resolved. What should I include into pyproject.toml? Or how to show pyright the path to the dependencies? Thanks My pyrightconfig.json looks like this: { "venvPath": ". /home/ajanb/.cache/pypoetry/virtualenvs/", "venv": "tools-configfactory-materialmodel-jnEEQvIP-py3.10" } I found that I need to add this to the config file of neovim, can you help me to write it in .lua? au FileType python let b:coc_root_patterns = ['.git', '.env', 'venv', '.venv', 'setup.cfg', 'setup.py', 'pyrightconfig.json']
I spent days troubleshooting this. In the end, the only thing that worked was including this in my pyproject.toml: [tool.pyright] venvPath = "/Users/user/Library/Caches/pypoetry/virtualenvs" venv = "bfrl-93mGb6aN-py3.11" I'm also using this nvim plugin: poet-v I guess you could accomplish this through a proper LSP configuration, but I was just not familiar enough with lua and the lsp configuration to tackle that automatically.
4
9
74,512,005
2022-11-20
https://stackoverflow.com/questions/74512005/fast-bitwise-get-column-in-python
Is there an efficient way to get an array of boolean values that are in the n-th position in bitwise array in Python? Create numpy array with values 0 or 1: import numpy as np array = np.array( [ [1, 0, 1], [1, 1, 1], [0, 0, 1], ] ) Compress size by np.packbits: pack_array = np.packbits(array, axis=1) Expected result - some function that could get all values from n-th column from bitwise array. For example if I would like the second column I would like to get (the same as I would call array[:,1]): array([0, 1, 0]) I have tried numba with the following function. It returns right results but it is very slow: import numpy as np from numba import njit @njit(nopython=True, fastmath=True) def getVector(packed, j): n = packed.shape[0] res = np.zeros(n, dtype=np.int32) for i in range(n): res[i] = bool(packed[i, j//8] & (128>>(j%8))) return res How to test it? import numpy as np import time from numba import njit array = np.random.choice(a=[False, True], size=(100000000,15)) pack_array = np.packbits(array, axis=1) start = time.time() array[:,10] print('np array') print(time.time()-start) @njit(nopython=True, fastmath=True) def getVector(packed, j): n = packed.shape[0] res = np.zeros(n, dtype=np.int32) for i in range(n): res[i] = bool(packed[i, j//8] & (128>>(j%8))) return res # To initialize getVector(pack_array, 10) start = time.time() getVector(pack_array, 10) print('getVector') print(time.time()-start) It returns: np array 0.00010132789611816406 getVector 0.15648770332336426
Besides some micro-optimisations, I don't believe that there is much that can be optimised here. There are also a few small mistakes in your code: @njit(nopython=True) says the same thing twice (the n in njit already stands for nopython mode), so simply @njit or @jit(nopython=True) should be used. fastmath is for "cutting corners" when doing floating-point math; since we are only working with integers and booleans, it can be safely removed because it does nothing for us here. My updated code (seeing a meagre 40% performance increase on my machine): import numba as nb import numpy as np np.random.seed(0) array = np.random.choice(a=[False, True], size=(10000000,15)) pack_array = np.packbits(array, axis=1) @nb.njit(locals={'res': nb.boolean[:]}) def getVector(packed, j): n = packed.shape[0] res = np.zeros(n, dtype=nb.boolean) byte = j//8 bit = 128>>(j%8) for i in range(n): res[i] = bool(packed[i, byte] & bit) return res getVector(pack_array, 10) In your code, res is an array of 32-bit integers; by giving np.zeros() the numba (NOT numpy) boolean datatype, we can swap it to the more efficient booleans. This is where most of the performance improvement comes from. On my machine, precomputing j//8 and the bit mask (byte and bit above) outside of the loop had no noticeable effect, but it did have an effect for the commenter @Michael Szczesny, so it might help you as well. I would not try to use strides, which @Nick ODell is suggesting, because they can be quite dangerous if used incorrectly (see the numpy documentation). edit: I have made a few small changes that were suggested by Michael. (Thanks)
3
2
74,483,457
2022-11-17
https://stackoverflow.com/questions/74483457/check-column-names-and-column-types-in-great-expectations
Currently, I am validating the table schema with expect_table_columns_to_match_set by feeding in a list of columns. However, I also want to validate the type associated with each column, such as string. The only available Great Expectations rule, expect_column_values_to_be_of_type, has to be written for each column name, which creates redundancy by repeating the column names. Is there any rule I am missing with which I can validate both the name and the type at the same time? For example, given columns a: string, b: int, c: boolean, I want to pass that whole specification into one function instead of having to break it into [a, b, c] and validate each column's type separately. Ideally, it would be something like expect_column_schema([(column_name_a, column_type_a), (column_name_b, column_type_b)])
You can use expect_column_values_to_match_json_schema (or the regex / pattern based expectations, depending on what you are more comfortable with). Here is the list of expectations that are possible to use. With expect_column_values_to_match_json_schema you can define your schema in a json format: schema = { "column_name_a": {"type": "string"}, "column_name_b": {"type": "integer"}, "column_name_c": {"type": "boolean"}, } Create a new ExpectColumnValuesToMatchSchema instance (the import for that is from great_expectations.expectations.core.expect_column_values_to_match_schema import ExpectColumnValuesToMatchSchema): expectation = ExpectColumnValuesToMatchSchema(schema=schema) And finally validate it to get your results: result = expectation.validate(dataset). You will get an ExpectationSuiteValidationResult in return and can check accordingly whether the columns you provided match or do not match the schema!
4
2
74,493,571
2022-11-18
https://stackoverflow.com/questions/74493571/asyncio-sleep0-does-not-yield-control-to-the-event-loop
I have a simple async setup which includes two coroutines: light_job and heavy_job. light_job halts in the middle and heavy_job starts. I want heavy_job to yield control in the middle and allow light_job to finish, but asyncio.sleep(0) is not working as I expect. This is the setup: import asyncio import time loop = asyncio.get_event_loop() async def light_job(): print("hello ") print(time.time()) await asyncio.sleep(1) print(time.time()) print("world!") async def heavy_job(): print("heavy start") time.sleep(3) print("heavy halt started") await asyncio.sleep(0) print("heavy halt ended") time.sleep(3) print("heavy done") loop.run_until_complete(asyncio.gather( light_job(), heavy_job() )) If I run this code, light_job will not continue until after heavy_job is done. This is the output: hello 1668793123.159075 heavy start heavy halt started heavy halt ended heavy done 1668793129.1706061 world! But if I change asyncio.sleep(0) to asyncio.sleep(0.0001), the code works as expected: hello 1668793379.599066 heavy start heavy halt started 1668793382.605899 world! heavy halt ended heavy done Based on the documentation and related threads, I expect asyncio.sleep(0) to work exactly like asyncio.sleep(0.0001). What is off here?
I think this subject needs some more discussion. I intend this post as an appendix to Daniel T's excellent and very clever answer - that's a fine piece of work. But Dan Getz's comment made me think that some more detail would be helpful. Dan suggests that there is no general way to yield to another task. This is correct because there is no guarantee that any other Task is ready to run, nor is there any guarantee of the execution order of the various Tasks. The example program fails to meet expectations because of details in the event loop implementation, which I discuss below. There are, however, tools for unambiguously synchronizing work between different Tasks. It's probably a bad idea to rely on time intervals in asyncio.sleep() for this purpose. Consider the following program, which uses an asyncio.Event to force light_job() to finish before heavy_job() can enter its second time.sleep delay. This will always work because the program logic is explicit: import asyncio import time event = asyncio.Event() async def light_job(): print("hello ") print(time.time()) await asyncio.sleep(1) print(time.time()) print("world!") event.set() async def heavy_job(): print("heavy start") time.sleep(3) print("heavy halt started") # await asyncio.sleep(0) await event.wait() print("heavy halt ended") time.sleep(3) print("heavy done") async def main(): await asyncio.gather(light_job(), heavy_job()) asyncio.run(main()) Even simpler is this approach, which avoids the use of Event and even of gather: import asyncio import time async def light_job(): print("hello ") print(time.time()) await asyncio.sleep(1) print(time.time()) print("world!") async def heavy_job(): light = asyncio.create_task(light_job()) print("heavy start") time.sleep(3) print("heavy halt started") # await asyncio.sleep(0) await light print("heavy halt ended") time.sleep(3) print("heavy done") async def main(): await heavy_job() asyncio.run(main()) As for why the original script failed, the explanation can be found in the event loop implementation. An event loop keeps track of two things: a list of "ready" items, representing Tasks that are able to execute right now; and a list of "scheduled" items, representing Tasks that are waiting for some time interval to expire. Every time the event loop goes through a cycle, its first step is to examine the list of scheduled items and see if any are ready to proceed. It appends any of those items to the "ready" list. Then it executes this simple loop to run all the ready Tasks (I have omitted some diagnostic code; this is from Python3.10 standard library module base_events.py). Here, _ready is a deque. The items in the queue all have a run method that causes the Task to take one step forward, or in other words, to resume the Task at the point where it previously was suspended (typically an await expression). ntodo = len(self._ready) for i in range(ntodo): handle = self._ready.popleft() if handle._cancelled: continue else: handle._run() It's also the case that await asyncio.sleep(0) is implemented differently from await asyncio.sleep(x) where x > 0. In the first case, the await expression yields a value of None. The Task object simply appends an item to the "ready" list. In the second case, the await expression executes a loop.call_later function call, which creates a Future. The Task object appends an item to the "scheduled" list. Here is the implementation of asyncio.sleep in tasks.py: @types.coroutine def __sleep0(): """Skip one event loop run cycle. 
This is a private helper for 'asyncio.sleep()', used when the 'delay' is set to 0. It uses a bare 'yield' expression (which Task.__step knows how to handle) instead of creating a Future object. """ yield async def sleep(delay, result=None): """Coroutine that completes after a given time (in seconds).""" if delay <= 0: await __sleep0() return result loop = events.get_running_loop() future = loop.create_future() h = loop.call_later(delay, futures._set_result_unless_cancelled, future, result) try: return await future finally: h.cancel() So in the example script in the original post, the Task test will start with two items in its "ready" list: [light_job, heavy_job]. The scheduled list is empty. Light_job starts and hits await asyncio.sleep(1), so an item is appended to the "scheduled" list that represents this time delay. Now heavy_job runs for three seconds and hits await asyncio.sleep(0), so an item is appended to the "ready" list which indicates that this Task is to proceed without delay. That's the end of one full cycle of the event loop. The cycle ends even though the ready list isn't empty at that point, because the await with a zero delay caused heavy_job to be appended to the ready list immediately. In the next cycle of the event loop, the ready list has one item, which was placed there on the previous cycle: [heavy_job]. The scheduled list also has one item: [light_job]. The event loop examines the scheduled list and sees that light_job is now ready, so it appends light_job to ready_list, which now looks like this: [heavy_job, light_job]. So the code logic has essentially caused the order of the Tasks to get switched. Result: heavy_job runs twice in a row, once at the end of the first cycle and once at the beginning of the second. This also explains what happened when you replaced await asyncio.sleep(0) with await asyncio.sleep(0.0001). In that case, the Task got appended to the scheduled list rather than the ready list. Then ready=[] and scheduled=[light_job, heavy_job]. On the next cycle of the loop both Tasks are ready, but the order will once again be [light_job, heavy_job]. This machinery is invisible to client code, as it should be, but it has a weird consequence in this particular script. Whether or not this should be called a "bug" is a matter of debate. I assume there are good performance reasons why asyncio.sleep(0) is implemented differently from asyncio.sleep(nonzero).
12
19
74,508,024
2022-11-20
https://stackoverflow.com/questions/74508024/is-requirements-txt-still-needed-when-using-pyproject-toml
Since mid 2022 it is possible to get rid of setup.py and setup.cfg in favor of pyproject.toml. Editable installs work with recent versions of setuptools and pip, and even the official packaging tutorial switched away from setup.py to pyproject.toml. However, documentation regarding requirements.txt seems to have been removed as well, and I wonder where to put the pinned requirements now? As a refresher: It used to be common practice to put the dependencies (without version pinning) in setup.py, avoiding issues when the package gets installed alongside other packages that need the same dependencies but with conflicting version requirements. For packaging libraries a setup.py was usually sufficient. For deployments (i.e. non-libraries) you usually also provided a requirements.txt with version-pinned dependencies, so you don't accidentally get the latest and greatest but the exact versions of the dependencies that the package has been tested with. So my question is, did anything change? Do you still put the pinned requirements in requirements.txt when used together with pyproject.toml? Or is there an extra section for that in pyproject.toml? Is there some documentation on that somewhere?
Quoting myself from here My current assumption is: [...] you put your (mostly unpinned) dependencies in pyproject.toml instead of setup.py, so your library can be installed as a dependency of something else without causing much trouble when resolving version constraints. On top of that, for "deployable applications" (for lack of a better term), you still want to maintain a separate requirements.txt with exact version pinning. This has been confirmed by a Python Packaging Authority (PyPA) member, and the clarification of PyPA's recommendations should be updated accordingly at some point.
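As a rough illustration of that split (the package names and version numbers below are placeholders of my own, not from the answer): the loose constraints live in pyproject.toml, while the deployment pins live in a separate requirements.txt, typically generated with something like pip freeze or pip-compile.

# pyproject.toml -- loose constraints, suitable for a library
[project]
name = "my-package"
version = "0.1.0"
dependencies = [
    "requests>=2.28",   # lower bound only, no exact pin
    "pandas>=1.5",
]

# requirements.txt -- exact pins for a reproducible deployment
requests==2.28.1
pandas==1.5.2
numpy==1.23.5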
78
30
74,524,530
2022-11-21
https://stackoverflow.com/questions/74524530/how-to-get-the-items-inside-of-an-openaiobject-in-python
I would like to get the text inside this data structure that is outputted via GPT3 OpenAI. I'm using Python. When I print the object I get: <OpenAIObject text_completion id=cmpl-6F7ScZDu2UKKJGPXTiTPNKgfrikZ at 0x7f7648cacef0> JSON: { "choices": [ { "finish_reason": "stop", "index": 0, "logprobs": null, "text": "\nWhat was Malcolm X's original name?\nMalcolm X's original name was Malcolm Little.\n\nWhere was Malcolm X born?\nMalcolm X was born in Omaha, Nebraska.\n\nWhat was the profession of Malcolm X's father?\nMalcolm X's father was a Baptist minister.\n\nWhat did Malcolm X do after he stopped attending school?\nMalcolm X became involved in petty criminal activities." } ], "created": 1669061618, "id": "cmpl-6F7ScZDu2gJJHKZSPXTiTPNKgfrikZ", "model": "text-davinci-002", "object": "text_completion", "usage": { "completion_tokens": 86, "prompt_tokens": 1200, "total_tokens": 1286 } } How do I get the 'text' component of this? For example, if this object is called: qa ... I can output qa['choices'] And I get the same items as above... but adding a .text or ['text'] to this does not do it, and gets an error. But not sure how to isolate the 'text' I've read the docs, but cannot find this... https://beta.openai.com/docs/api-reference/files/delete?lang=python
x = {"choices": [{"finish_reason": "length", "text": ", everyone, and welcome to the first installment of the new opening"}], } text = x['choices'][0]['text'] print(text) # , everyone, and welcome to the first installment of the new opening
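A hedged note applying this to the response object itself (assuming the pre-1.0 openai Python client shown in the question, whose OpenAIObject responses support both dict-style and attribute access): the key point is indexing the choices list with [0] before asking for 'text'.

# qa is the completion response object from the question
text = qa["choices"][0]["text"]
# attribute access should work as well on OpenAIObject responses:
text = qa.choices[0].text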
7
3
74,496,411
2022-11-18
https://stackoverflow.com/questions/74496411/does-python-have-a-maximum-group-refer-for-regex-like-perl
Context: When running a regex match in Perl, $1, $2 can be used as references to captured regex references from the match, similarly in Python \g<0>,\g<1> can be used Perl also has a $+ special reference which refers to the captured group with highest numerical value My question: Does Python have an equivalent of $+ ? I tried \g<+> and tried looking in the documentation which only says: There’s also a syntax for referring to named groups as defined by the (?P<name>...) syntax. \g<name> will use the substring matched by the group named name, and \g<number> uses the corresponding group number. \g<2> is therefore equivalent to \2, but isn’t ambiguous in a replacement string such as \g<2>0. (\20 would be interpreted as a reference to group 20, not a reference to group 2 followed by the literal character '0'.) The following substitutions are all equivalent, but use all three variations of the replacement string.
The method captures in the regex module provides the same functionality: it "returns a list of all the captures of a group." So get the last one: >>> import regex >>> str = 'fza' >>> m = regex.search(r'(a)|(f)', str) >>> print(m.captures()[-1]) f When the str has a before f this code prints a. This is the exact equivalent of Perl's $+. Here we actually get all captures, not only the highest one, and there are other related methods as well. Follow the word "captures" in the linked docs. Another option that fits the intended use, explained in a comment, is the branch reset group, (?|pattern). It is also available in the regex module. >>> import regex >>> m = regex.search(r'(?|(a)|(b))', 'zba') >>> m.group(1) 'b' In short, with the branch reset (?|(pA)|(pB)|(pC)) the whole pattern is one capture group (with three alternations), not three. So you always know which is the "last" capture as there is only one, which has the match. This can be used with named capture groups as well. This feature adds far more power as the pattern in (?|...) gets more complex. Find it in your favorite regex documentation. Here it is in regular-expressions.info, for example, and here are some Perl resources, in perlre and an article in The Effective Perler.
3
3
74,538,831
2022-11-22
https://stackoverflow.com/questions/74538831/how-to-send-push-notifications-to-ios-using-a-python-api
I have created a web scraper that sends notifications to my phone whenever certain events are detected. So far I have achieved this by sending emails through the SendGrid API. It's a pretty nice service, and it is free, but it clutters up the mailbox quite a bit. Instead I'd like to send messages directly to the iOS notification bar. Does anyone here have experience with sending push notifications to iOS and can point me in the correct direction? I would be happy with a subscription service, but would of course prefer a solution that does not require a third party if possible. I have tested PushNotifier, but I found it a bit clunky, and the notifications are neither customisable nor beautiful. It's also not a free service; a free one would have been a great plus.
Maybe you should check out pushover.net. They have a simple WebAPI to send customized notifications to iOS devices. See https://support.pushover.net/i44-example-code-and-pushover-libraries#python for code samples.
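For a concrete idea of what that looks like, here is a small sketch based on Pushover's documented REST endpoint (the token and user key strings are placeholders you would replace with the values from your Pushover account):

import requests

def send_push(message: str) -> None:
    # Post a single notification through Pushover's message API
    response = requests.post(
        "https://api.pushover.net/1/messages.json",
        data={
            "token": "YOUR_APP_TOKEN",   # placeholder: your application's API token
            "user": "YOUR_USER_KEY",     # placeholder: your user (or group) key
            "message": message,
        },
        timeout=10,
    )
    response.raise_for_status()

send_push("Scraper detected a new event")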
3
4
74,536,056
2022-11-22
https://stackoverflow.com/questions/74536056/why-is-plotly-express-so-much-more-performant-than-plotly-graph-objects
I'm visualizing a scatterplots with between 400K and 2.5M points. I expectected to need to downsample before visualizing but to see just how much I ran a pilot test with a 400k dataset in plotly express, and the plot popped up quickly, beautifully, and responsively. In order to make the interractive figure I really need to use plotly.graph_objects, as I need multiple traces with different colorscales, so I made basically the same graph with graph_objects and it wasn't just slower, it crashed my computer. I'd really like to downsample as little as possible and I'm surprised by the sheer performance difference between these two approaches so I guess that boils down to my question: Why is there such a performance difference and is it possible to change layout/figure/whatever parameters in graph_objects so to close the gap? Here is a snippet to show what I mean by basically the same graph: graph_objects fig = go.Figure() fig.add_trace(go.Scatter(x = x_values, y = y_values, opacity = opacity, marker = { 'size': size, 'color': community, 'colorscale': colorscale })) express pacmap_map = px.scatter(x = x_values, y = y_values, color_continuous_scale=colorscale, opacity = opacity, color = community) pacmap_map.update_traces(marker = { 'size': size }) I would have expected performance to either be identical or at least in the same ballpark, but express works like a dream and graph_objects crashes the jupyter kernel and whatever IDE it is running from, so a large difference.
Running the following simple example: import numpy as np import plotly.graph_objects as go import plotly.express as px x = np.linspace(-2, 2, 100000) y = np.cos(x) fig = go.Figure(data=[go.Scatter(x=x, y=y)]) fig2 = px.scatter(x=x, y=y) type(fig.data[0]), type(fig2.data[0]) # out: (plotly.graph_objs._scatter.Scatter, plotly.graph_objs._scattergl.Scattergl) As you can see, plotly express appears to switch to Scattergl when the number of points is higher than some threshold. Scattergl renders on an HTML5 canvas via WebGL, so it uses the GPU and is far more efficient, whereas Scatter creates SVG objects that get inserted into the current document, consuming much more memory.
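In other words, the gap can likely be closed by constructing the WebGL-backed trace explicitly in graph_objects. A minimal sketch reusing the illustrative data above (go.Scattergl accepts the same marker and colorscale options as go.Scatter, so multiple traces with different colorscales should still be possible):

import numpy as np
import plotly.graph_objects as go

x = np.linspace(-2, 2, 100_000)
y = np.cos(x)

# Use the WebGL trace type explicitly instead of the SVG-backed go.Scatter
fig = go.Figure(data=[go.Scattergl(x=x, y=y, mode="markers")])
fig.show()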
5
7
74,532,061
2022-11-22
https://stackoverflow.com/questions/74532061/how-to-get-todays-date-in-sparql
I use Python and SPARQL to make a scheduled query for a database. I tried to use the python f-string and doc-string to inject today's date in the query, but when I try so, a conflict occurs with SPARQL syntax and the python string. The better way would be to use SPARQL to get today's date. In my python file my query looks like this: query = """ PREFIX skos: <http://www.w3.org/2004/02/skos/core#> PREFIX xsd: <http://www.w3.org/2001/XMLSchema#> SELECT * { VALUES (?currentDateString) {(today)} FILTER(xsd:date(?dateApplicabilityNode) >= xsd:date(?validDateString) && xsd:date(?dateApplicabilityNode) <= xsd:date(?currentDateString)) } GROUP BY ... ORDER BY ... How to get today's date in the format of "YYYY-MM-DD"?
now() returns the datetime (as xsd:dateTime) of the query execution: BIND( now() AS ?currentDateTime ) . To get only the date (as xsd:string), you could use CONCAT() with year(), month(), and day(): BIND( CONCAT( year(?currentDateTime), "-", month(?currentDateTime), "-", day(?currentDateTime) ) AS ?currentDateString ) . (To get the date as xsd:date, you could use xsd:date(?currentDateString).)
5
3
74,525,250
2022-11-21
https://stackoverflow.com/questions/74525250/how-to-use-multiple-urls-with-pip-extra-index-url
I want to configure my pip using environment variables. I already have two pip index URLs, so I'm already using the PIP_INDEX_URL and PIP_EXTRA_INDEX_URL variables. PIP_INDEX_URL="https://example.com" PIP_EXTRA_INDEX_URL="https://example2.com" But I want to add one more index URL and I don't know how. I tried to add it with a semicolon: PIP_INDEX_URL="https://example.com" PIP_EXTRA_INDEX_URL="https://example2.com;https://example3.com" But it didn't seem to work.
Pip expects an empty space ( ) to separate the values in environment variables. In this case, for example: PIP_EXTRA_INDEX_URL="https://example2.com https://example3.com" See pip's documentation section "Environment variables".
5
8
74,527,775
2022-11-22
https://stackoverflow.com/questions/74527775/how-to-convert-avif-to-png-with-python
I have an image file in AVIF format. How can I convert this file to PNG format? I found some code to convert JPG files to AVIF, but I didn't find any code for the reverse conversion.
You need to install these modules: pip install pillow-avif-plugin Pillow Then: from PIL import Image import pillow_avif img = Image.open('input.avif') img.save('output.png')
4
13
74,479,890
2022-11-17
https://stackoverflow.com/questions/74479890/tracking-claims-using-date-timestamp-columns-and-creating-a-final-count-using-pa
I have an issue where I need to track the progression of patients insurance claim statuses based on the dates of those statuses. I also need to create a count of status based on certain conditions. DF: ClaimID New Accepted Denied Pending Expired Group 001 2021-01-01T09:58:35:335Z 2021-01-01T10:05:43:000Z A 002 2021-01-01T06:30:30:000Z 2021-03-01T04:11:45:000Z 2021-03-01T04:11:53:000Z A 003 2021-02-14T14:23:54:154Z 2021-02-15T11:11:56:000Z 2021-02-15T11:15:00:000Z A 004 2021-02-14T15:36:05:335Z 2021-02-14T17:15:30:000Z A 005 2021-02-14T15:56:59:009Z 2021-03-01T10:05:43:000Z A In the above dataset, we have 6 columns. ClaimID is simple and just indicates the ID of the claim. New, Accepted, Denied, Pending, and Expired indicate the status of the claim and the day/time those statuses were set. What I need to do is get a count of how many claims are New on each day and how many move out of new into a new status. For example, There are 2 new claims on 2021-01-01. On that same day 1 moved to Accepted about 7 minutes later. Thus on 2021-01-01 the table of counts would read: DF_Count: Date New Accepted Denied Pending Expired 2021-01-01 2 1 0 0 0 2021-01-02 1 0 0 0 0 2021-01-03 1 0 0 0 0 2021-01-04 1 0 0 0 0 2021-01-05 1 0 0 0 0 .... .... .... .... .... .... 2021-02-14 4 2 0 0 0 2021-02-15 2 3 0 0 1 2021-02-16 2 2 0 0 0 Few Conditions: If a claim moves from one status to the other on the same day (even if they are a minutes/hours apart) it would not be subtracted from the original status until the next day. This can be seen on 2021-01-01 where claim 001 moves from new to accepted on the same day but the claim is not subtracted from new until 2021-01-02. Until something happens to a claim, it should remain in its original status. Claim 002 will remain in new until 2021-03-01 when it is approved. If a claim changes status on a later date than its original status, it will be subtracted on that later date. For this, see status 003. It is new on 2/14 but accepted on 2/15. This is why New goes down by 2 on 2/15 (the other claim is the is 004 which is new and accepted on the same day) For certain statuses, I do not need to look at all columns. For example, For new I only look at the dates inside Accepted and Denied. Not Pending and Expired. When I do these same steps for approved, I no longer need to look at new, just the other columns. How would I do that? In the final DF_count table, the dates should start from the earliest date in 'New' and end on todays date. The code needs to be grouped by the Group Column as well. For example, patients in group B (not pictured) will have to have the same start and end date but for their own claims. I need to do this separately for all of the statuses. Not just new. Current Solution: My current solution has been to create an dataset with just dates from the min New Date to todays date. Then for each column, what I do is use the .loc method to find dates that are greater than New in each of the other columns. For example, in the code below I look for all cases where new is equal to approved. 
df1 = df.loc[(df['New'] == df['Approved']) & ((df['Expired'].isnull()) | (df['Expired'] >= df['Accepted'])) & ((df['Pending'].isnull()) | (df['Pending'] >= df['Accepted'])) & ((df['Denied'].isnull()) | (df['Denied'] >= df['Accepted']))] newtoaccsday = df1.loc[:, ('Group', 'Accepted')] newtoappsday['Date'] = newtoappsday['Accepted'] newtoappsday = newtoappsday.reset_index(drop = True) newtoappsday= newtoappsday.groupby(['Date', 'Group'], as_index = False)['Approved'].value_counts() newtoappsday.drop(columns = {'Accepted'}, inplace = True) newtoappsday.rename(columns = {'count': 'NewAppSDay'}, inplace = True) newtoappsday['Date'] = newtoappsday['Date'] + timedelta(1) df_count= df_count.merge(newtoappsday, how = 'left', on = ['Date', 'Group']).fillna(0) --After doing the above steps for all conditions (where new goes to accepted on a later date etc.) I will do the final calculation for new: df_count['New'] = df_count.eval('New = New - (NewAccSDay + NewAccLater + NewDenSDay + NewDenLater + NewExpLater + NewPendSDay + NewPendLater)').groupby(['Tier2_ID', 'ClaimType'])['New'].cumsum() Any and all help would be greatly appreciated. My method above is extremely inefficient and leading to some errors. Do I need to write a for loop for this? What is the best way to go about this.
First convert the date columns with something like

for i in ['New', 'Accepted', 'Denied', 'Pending', 'Expired']:
    df[i] = pd.to_datetime(df[i], format="%Y-%m-%dT%H:%M:%S:%f%z")

Then build the applicable date range based on your column conditions. In this logic, if Denied is present the range is New --> Denied; otherwise, if Accepted is present, it is New --> Accepted; otherwise it is New --> today. Code like the following (alter as per your rules):

df['new_range'] = df[['New','Accepted','Denied']].apply(
    lambda x: pd.date_range(x['New'], x['Denied']).date.tolist() if pd.notnull(x['Denied'])
    else pd.date_range(x['New'], x['Accepted']).date.tolist() if pd.notnull(x['Accepted'])
    else pd.date_range(x['New'], datetime.today()).date.tolist(), axis=1)

You should be able to filter on a group and see the date ranges in your df like:

df[df['Group']=='A']['new_range']
0    [2021-01-01]
1    [2021-01-01, 2021-01-02, 2021-01-03, 2021-01-0...
2    [2021-02-14]
3    [2021-02-14]
4    [2021-02-14, 2021-02-15, 2021-02-16, 2021-02-1..

Then you can explode the date ranges and group on counts to get the New counts for each day with code like:

new = pd.to_datetime(df[df['Group']=='A']['new_range'].explode('Date')).reset_index()
newc = new.groupby('new_range').count()
newc
new_range
2021-01-01    2
2021-01-02    1
2021-01-03    1
2021-01-04    1
2021-01-05    1
2021-01-06    1...

Similarly get counts for Accepted and Denied, then left join on date to arrive at the final table, filling NA with 0. By creating rules to expand your date range, then exploding over the date range and grouping to get your counts, you should be able to avoid much of the expensive operation.
3
2
74,517,390
2022-11-21
https://stackoverflow.com/questions/74517390/python-playwright-is-there-a-way-to-introspect-and-or-run-commands-interactivel
I'm trying to move from Selenium to Playwright for some webscraping tasks. Perhaps I got stuck in the bad habit of having Selenium run the browser on the side while testing commands and selectors as I go. Is there any way to achieve something similar using Playwright? What I have achieved so far is running Playwright in the console, something similar to this:

from playwright.sync_api import sync_playwright

with sync_playwright() as pw:
    browser = pw.chromium.launch(headless=False)
    page = browser.new_page()
    page.goto('https://google.com')
    page.pause()

I get a browser window together with a Playwright Inspector, but from there none of my commands or variables will execute.
I'd use the technique from can i run playwright outside of 'with'? and How to start playwright outside 'with' without context managers on the interactive repl: PS C:\Users\foo\Desktop> py Python 3.10.2 (tags/v3.10.2:a58ebcc, Jan 17 2022, 14:12:15) [MSC v.1929 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> from playwright.sync_api import sync_playwright >>> p = sync_playwright().start() >>> browser = p.chromium.launch(headless=False) >>> page = browser.new_page() >>> page.goto("https://www.example.com") <Response url='https://www.example.com/' request=<Request url='https://www.example.com/' method='GET'>> >>> page.title() 'Example Domain' >>> page.close() >>> browser.close() >>> p.stop() If you use page.pause(), try running playwright.resume() in the browser dev tools console to resume the Python repl. If you really need to do this from a script rather than the Python repl, you could use the code interpreter or roll your own, but I'd try to avoid this if possible.
3
2
74,523,327
2022-11-21
https://stackoverflow.com/questions/74523327/color-problem-with-log-transform-to-brighten-dark-area-why-and-how-to-fix
So I am trying to enhance this image by applying a log transform to it. (original image) The areas with bright white color turn blue in the enhanced image. (enhanced image)

path = '...JPG'
image = cv2.imread(path)
c = 255 / np.log(1 + np.max(image))
log_image = c * (np.log(image + 1))

# Specify the data type so that
# float value will be converted to int
log_image = np.array(log_image, dtype = np.uint8)

cv2.imwrite('img.JPG', log_image)

There's also a warning: RuntimeWarning: divide by zero encountered in log
I tried using other types of log (e.g. log2, log10...) but they still show the same result. I tried changing dtype = np.uint32 but it causes an error.
Same cause for both problems, namely this line:

log_image = c * (np.log(image + 1))

image+1 is an array of np.uint8, as image is. But if there are 255 values in image, then image+1 overflows: 256 wraps around to 0. This leads np.log(image + 1) to evaluate log(0) at those points. Hence the warning, and hence the fact that the brightest parts get strange colors, since they are the ones containing 255.

So, since log has to work with floats anyway, just convert to float yourself before calling log:

path = '...JPG'
image = cv2.imread(path)
c = 255 / np.log(1 + np.max(image))
log_image = c * (np.log(image.astype(float) + 1))

# Specify the data type so that
# float value will be converted to int
log_image = np.array(log_image, dtype = np.uint8)

cv2.imwrite('img.JPG', log_image)
3
3
74,519,974
2022-11-21
https://stackoverflow.com/questions/74519974/fancy-indexing-calculation-of-adjacency-matrix-from-adjacency-list
Problem: I want to calculate at several times the adjacency matrix A_ij given the adjacency list E_ij, where E_ij[t,i] = j gives the edge from i to j at time t. I can do it with the following code: import numpy as np nTimes = 100 nParticles = 10 A_ij = np.full((nTimes, nParticles, nParticles), False) E_ij = np.random.randint(0, 9, (100, 10)) for t in range(nTimes): for i in range(nParticles): A_ij[t, i, E_ij[t,i]] = True Question: How can I do it in a vectorized way, either with fancy indexing or using numpy functions such as np.take_along_axis? What I tried: I expected this to work: A_ij[:,np.arange(nParticles)[None,:,None], E_ij[:,None,np.arange(nParticles)]] = True But it does not. Related to: Trying to convert adjacency list to adjacency matrix in Python
I think this might work: import numpy as np nTimes = 100 nParticles = 10 A_ij = np.full((nTimes, nParticles, nParticles), False) E_ij = np.random.randint(0, 9, (100, 10)) np.put_along_axis(A_ij, E_ij[..., None], True, axis=2)
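A quick way to verify this against the loop version from the question is to build both arrays and compare them; something like the following should print True:

import numpy as np

nTimes, nParticles = 100, 10
E_ij = np.random.randint(0, 9, (nTimes, nParticles))

# loop version from the question
A_loop = np.full((nTimes, nParticles, nParticles), False)
for t in range(nTimes):
    for i in range(nParticles):
        A_loop[t, i, E_ij[t, i]] = True

# vectorized version with put_along_axis
A_vec = np.full((nTimes, nParticles, nParticles), False)
np.put_along_axis(A_vec, E_ij[..., None], True, axis=2)

print(np.array_equal(A_loop, A_vec))  # expected: True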
3
2
74,515,286
2022-11-21
https://stackoverflow.com/questions/74515286/prophet-forecasting
My dataframe is at weekly level, as below. I was trying to implement a Prophet model using this code:

df.columns = ['ds', 'y']
# define the model
model = Prophet(seasonality_mode='multiplicative')
# fit the model
model1 = model.fit(df)
model1.predict(10)

I need to predict the output at a weekly level for the next 10 weeks. How can I fix this?
You need to use model.make_future_dataframe to create new dates:

model = Prophet()
model.fit(df)
future = model.make_future_dataframe(periods=10, freq='W')
predictions = model.predict(future)

predictions will give predicted values for the whole dataframe; you can get the forecasted values for the next 10 weeks with simple indexing:

future_preds = predictions.iloc[-10:]
3
4
74,516,642
2022-11-21
https://stackoverflow.com/questions/74516642/how-do-i-get-specific-keys-and-their-values-from-nested-dict-in-python
I need help, please be kind, I'm a beginner. I have a nested dict like this:

dict_ = {
    "timestamp": "2022-11-18T10: 10: 49.301Z",
    "name": "example",
    "person": {
        "birthyear": "2002",
        "birthname": "Examply"
    },
    "order": {
        "orderId": "1234",
        "ordername": "onetwothreefour"
    }
}

How do I get a new dict like:

new_dict = {"timestamp": "2022-11-18T10: 10: 49.301Z", "birthyear": "2002", "birthname": "Examply", "orderId": "1234"}

I tried the normal things I could google, but I only found solutions that return the values without the keys, or that only work for flat dicts. The last thing I tried:

new_dict = {key: msg[key] for key in msg.keys() & {'timestamp', 'birthyear', 'birthname', 'orderId'}}

This does not work for the nested dict. Maybe someone has an easy solution for it.
A general approach: dict_ = { "timestamp": "2022-11-18T10: 10: 49.301Z", "name": "example", "person": { "birthyear": "2002", "birthname": "Examply" }, "order": { "orderId": "1234", "ordername": "onetwothreefour" } } def nested_getitem(d, keys): current = d for key in keys: current = current[key] return current new_dict = {"timestamp": nested_getitem(dict_, ["timestamp"]), "birthyear": nested_getitem(dict_, ["person", "birthyear"]), "birthname": nested_getitem(dict_, ["person", "birthname"]), "orderId": nested_getitem(dict_, ["order", "orderId"]), } print(new_dict) Output {'timestamp': '2022-11-18T10: 10: 49.301Z', 'birthyear': '2002', 'birthname': 'Examply', 'orderId': '1234'}
3
2
74,510,820
2022-11-20
https://stackoverflow.com/questions/74510820/add-two-legends-in-the-same-plot
I have an x and a y. Both are flattened 2D arrays. I have two similar arrays, one determining the colour of each datapoint, another determining the detection method ("transit" or "radial"), which is used for determining the marker shape.

a=np.random.uniform(0,100,(10,10)).ravel()  #My x
b=np.random.uniform(0,100,(10,10)).ravel()  #My y
d=np.random.choice([0,1,2],(10,10)).ravel()  #Color map
e=np.random.choice(["radial","transit"],(10,10)).ravel()  #Marker type

Since there can be only one type of marker in a scatterplot and I have two types of markers, I call the scatterplot twice.

a_radial=a[e=="radial"]
b_radial=b[e=="radial"]
d_radial=d[e=="radial"]
a_transit=a[e=="transit"]
b_transit=b[e=="transit"]
d_transit=d[e=='transit']

fig,ax=plt.subplots()

#One plot each for two types of markers.
scatter1=ax.scatter(a_radial,b_radial,c=d_radial,marker='o',label="Radial")
scatter2=ax.scatter(a_transit,b_transit,c=d_transit,marker='^',label="Transit")

ax.legend(*scatter1.legend_elements(),loc=(1.04, 0),title="Legend")
ax.legend(loc=(1.01, 0),title="Detection")
plt.show()

This is giving me the following plot. I want a legend for the color map too, but as soon as I add the code for it, the legend for "Detection" disappears. Why is that and how can I resolve it? I added the commands for both legends but only one shows at a time. I was expecting both of them to show up at the same time.
You could just manually add the first legend to the Axes:

leg1 = ax.legend(*scatter1.legend_elements(), bbox_to_anchor=(1.04, 1), loc="upper left", title="Legend")
ax.add_artist(leg1)

However, this is not very clear, as the color legend uses the marker for Radial and the Detection legend uses just two arbitrary colors out of the three. So a better solution is to make two neutral legends that don't mix marker symbol and color:

import matplotlib.patches as mpatches
import matplotlib.lines as mlines

# ...

handles = [mpatches.Patch(color=line.get_color()) for line in scatter1.legend_elements()[0]]
leg1 = ax.legend(handles, scatter1.legend_elements()[1], bbox_to_anchor=(1.04, 1), loc="upper left", title="Legend")
ax.add_artist(leg1)

handles = [mlines.Line2D([], [], marker=marker, mec='k', mfc='w', ls='') for marker in ['o', '^']]
ax.legend(handles, ['Radial', 'Transit'], loc=(1.01, 0), title="Detection")

As an alternative, you could use seaborn's scatterplot, where you can specify hue and style to get two legend categories (although in one legend), see the 4th example in the seaborn scatterplot docs. In this example, however, the marker used for the different hues (days) is the same as for Lunch, so I think my solution above is a bit clearer.
3
5
74,500,614
2022-11-19
https://stackoverflow.com/questions/74500614/python-decimal-multiplication-by-zero
Why does the following code: from decimal import Decimal result = Decimal('0') * Decimal('0.8881783462119193534061639577') print(result) return 0E-28 ? I've traced it to the following code in the module: if not self or not other: ans = _dec_from_triple(resultsign, '0', resultexp) # Fixing in case the exponent is out of bounds ans = ans._fix(context) return ans The code appears to follow Decimal Arithmetic Specification, which doesn't explicitly suggest what to do when we multiply by zero, referring to 'special numbers' from another standard, which also doesn't specify what we do when we multiply an integer by zero :) So the decimal library does the thing that is explicitly specified: The coefficient of the result, before rounding, is computed by multiplying together the coefficients of the operands. The exponent of the result, before rounding, is the sum of the exponents of the two operands. The sign of the result is the exclusive or of the signs of the operands. Question: what is the need to return the coefficient and exponent (i.e, 0E-28) if one of the operands is a zero? We already know what that coefficient is when calling the multiplication function. Why not just return zero?
Raymond Hettinger has given a comprehensive explanation at cpython github: In Arithmetic Operations, the section on Arithmetic operations rules tells us: Trailing zeros are not removed after operations. There are test cases covering multiplication by zero. Here are some from multiply.decTest: -- zeros, etc. mulx021 multiply 0 0 -> 0 mulx022 multiply 0 -0 -> -0 mulx023 multiply -0 0 -> -0 mulx024 multiply -0 -0 -> 0 mulx025 multiply -0.0 -0.0 -> 0.00 mulx026 multiply -0.0 -0.0 -> 0.00 mulx027 multiply -0.0 -0.0 -> 0.00 mulx028 multiply -0.0 -0.0 -> 0.00 mulx030 multiply 5.00 1E-3 -> 0.00500 mulx031 multiply 00.00 0.000 -> 0.00000 mulx032 multiply 00.00 0E-3 -> 0.00000 -- rhs is 0 mulx033 multiply 0E-3 00.00 -> 0.00000 -- lhs is 0 mulx034 multiply -5.00 1E-3 -> -0.00500 mulx035 multiply -00.00 0.000 -> -0.00000 mulx036 multiply -00.00 0E-3 -> -0.00000 -- rhs is 0 mulx037 multiply -0E-3 00.00 -> -0.00000 -- lhs is 0 mulx038 multiply 5.00 -1E-3 -> -0.00500 mulx039 multiply 00.00 -0.000 -> -0.00000 mulx040 multiply 00.00 -0E-3 -> -0.00000 -- rhs is 0 mulx041 multiply 0E-3 -00.00 -> -0.00000 -- lhs is 0 mulx042 multiply -5.00 -1E-3 -> 0.00500 mulx043 multiply -00.00 -0.000 -> 0.00000 mulx044 multiply -00.00 -0E-3 -> 0.00000 -- rhs is 0 mulx045 multiply -0E-3 -00.00 -> 0.00000 -- lhs is 0 And this from the examples: mulx053 multiply 0.9 -0 -> -0.0 In the Summary of Arithmetic section, the motivation is explained at a high level: The arithmetic was designed as a decimal extended floating-point arithmetic, directly implementing the rules that people are taught at school. Up to a given working precision, exact unrounded results are given when possible (for instance, 0.9 ÷ 10 gives 0.09, not 0.089999996), and trailing zeros are correctly preserved in most operations (1.23 + 1.27 gives 2.50, not 2.5). Where results would exceed the working precision, floating-point rules apply. More detail in given in the FAQ section Why are trailing fractional zeros important?.
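In practice the 0E-28 result is still exactly zero; if the trailing zeros are unwanted for display, normalize() gives the canonical form. A minimal sketch:

from decimal import Decimal

result = Decimal('0') * Decimal('0.8881783462119193534061639577')
print(result)              # 0E-28
print(result == 0)         # True -- the value is still exactly zero
print(result.normalize())  # 0 -- canonical form, trailing zeros stripped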
9
3
74,509,113
2022-11-20
https://stackoverflow.com/questions/74509113/switch-change-the-version-of-python-in-pyscript
I have just started looking at/experimenting with pyscript, for existing Python code which runs on Python 3.6.0. But it looks like pyscript loads the Python version along with Pyodide, and it is returning the latest stable version based on the Pyodide version.

Problem Statement: Is there any way we can change/switch the Python version as needed while working with pyscript?

What I tried so far to verify the Pyodide and Python version: I checked the version of Pyodide by using the code below.

<link rel="stylesheet" href="https://pyscript.net/latest/pyscript.css"/>
<script defer src="https://pyscript.net/latest/pyscript.js"></script>
<py-script>import pyodide_js; print(pyodide_js.version)</py-script>

As per the above code snippet, it is returning 0.21.2. Now to check the Python version:

<link rel="stylesheet" href="https://pyscript.net/latest/pyscript.css"/>
<script defer src="https://pyscript.net/latest/pyscript.js"></script>
<py-script>
import pyodide_js;
import sys;
print('Pyodide version : ' + pyodide_js.version)
print('Python version : ' + sys.version)
</py-script>

It is returning 3.10.2, but I want to change/switch it to 3.6.0.
You cannot, as Python is built into Pyodide. You would need to rebuild Pyodide to change the version of Python. I also do not think that Python 3.6 will work with the current version of PyScript and Pyodide. Your only practical option is to make your application work with the Python version that ships with Pyodide.
3
2
74,510,664
2022-11-20
https://stackoverflow.com/questions/74510664/does-strict-typing-increase-python-program-performance
Based on questions like this What makes C faster than Python? I've learned that dynamic/static typing isn't the main reason that C is faster than Python. It appears to be largely because python programs are interpreted, and c programs are compiled. I'm wondering if strict typing would close the gap in performance for interpreted vs compiled programs enough that strict typing would be a viable strategy for improving interpreted Python program performance post facto? If the answer is yes, is this done in pro-dev contexts?
With current versions of Python, type annotations are mostly hints for the programmer and possibly some validation tools but are ignored by the compiler and not used at runtime by the byte-code interpreter, which is similar to the behavior of Typescript. It might be possible to change the semantics of Python to take advantage of static typing in some circumstances to generate more efficient byte-code and possibly perform just in time executable code generation (JIT). Advanced Javascript engines use complex heuristics to achieve this without type annotations. Both approaches could help make Python programs much faster and in some cases perform better than equivalent C code. Note also that many advanced Python packages use native code, written in C and other languages, taking advantage of optimizing compilers, SIMD instructions and even multi-threading... The Python code in programs using these libraries is not where the time is spent and the performance is comparable to that of compiled languages, while giving the programmer a simpler language to express their problems.
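For example, a quick sketch shows that CPython stores annotations as metadata but neither enforces nor exploits them at runtime:

def add(a: int, b: int) -> int:
    return a + b

# The annotations are only stored as metadata...
print(add.__annotations__)   # {'a': <class 'int'>, 'b': <class 'int'>, 'return': <class 'int'>}

# ...so nothing stops (or speeds up) a call with other types:
print(add("spam", "eggs"))   # 'spameggs' -- no error, no specialization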
5
5
74,508,774
2022-11-20
https://stackoverflow.com/questions/74508774/whats-the-difference-between-fastapi-background-tasks-and-celery-tasks
Recently I read something about this and the point was that celery is more productive. Now, I can't find detailed information about the difference between these two and what should be the best way to use them.
Straight from the documentation: If you need to perform heavy background computation and you don't necessarily need it to be run by the same process (for example, you don't need to share memory, variables, etc), you might benefit from using other bigger tools like Celery. They tend to require more complex configurations, a message/job queue manager, like RabbitMQ or Redis, but they allow you to run background tasks in multiple processes, and especially, in multiple servers. To see an example, check the Project Generators, they all include Celery already configured. But if you need to access variables and objects from the same FastAPI app, or you need to perform small background tasks (like sending an email notification), you can simply just use BackgroundTasks. Have a look at this answer as well.
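As a rough sketch of the small-task BackgroundTasks case (the notification helper below is just a placeholder):

from fastapi import BackgroundTasks, FastAPI

app = FastAPI()

def write_notification(email: str, message: str = ""):
    # placeholder for a small task such as sending an email notification
    with open("log.txt", "a") as f:
        f.write(f"notification for {email}: {message}\n")

@app.post("/send-notification/{email}")
async def send_notification(email: str, background_tasks: BackgroundTasks):
    # runs after the response is returned, in the same process as the app
    background_tasks.add_task(write_notification, email, message="some notification")
    return {"message": "Notification sent in the background"}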
3
8
74,499,590
2022-11-19
https://stackoverflow.com/questions/74499590/valueerror-number-of-labels-34866-does-not-match-number-of-samples-2
I am trying to run a Decision Tree Classifier but I face this problem. Can you please explain how I can fix this error? My English isn't very good but I will try to understand! I'm just starting to learn programming, so please point out anything that isn't good enough. Thank you!

import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
import pandas as pd

sale=pd.read_csv('Online_Sale.csv')
plt.rcParams['font.sans-serif'] = ['Microsoft JhengHei']
sale['回購'] = sale['回購'].apply(lambda x: 1 if x == 'Y' else 0)
sale['單位售價'] = sale['單位售價'].str.replace(',', '').astype(float)
x=sale['年紀'],sale['單位售價']
y=sale['回購']
print(x)
print(y)
clf = DecisionTreeClassifier(random_state=0)
model = clf.fit(x, y)
text_representation = tree.export_text(clf)
print(text_representation)
fig = plt.figure(figsize=(15,12))
tree.plot_tree(clf, filled=True)
fig.savefig("decistion_tree.png")

data:

I looked at a lot of different approaches, but I didn't have a way to fully understand what the problem is...
There is just a small error with: x=sale['年紀'],sale['單位售價'] Rather than selecting the columns you want, this creates a tuple of the columns, hence the end of the error message ... does not match number of samples=2 One way to create a new pd.DataFrame with your selected columns: x=sale[['年紀', '單位售價']]
4
2
74,508,088
2022-11-20
https://stackoverflow.com/questions/74508088/python-pandas-calculate-standard-deviation-excluding-current-group-with-vectori
So I want to calculate the standard deviation excluding the current group using groupby. Here is an example of the data:

import pandas as pd

df = pd.DataFrame ({
    'group' : ['A','A','A','A','A','A','B','B','B','B','B','B'],
    'team' : ['1','1','2','2','3','3','1','1','2','2','3','3'],
    'value' : [1,2,5,7,2,3,7,8,8,9,6,4]
})

For example, for group A team 1, I want to calculate the std dev of teams 2 and 3; for group A team 2, I want to calculate the std dev of teams 1 and 3, and so on. I managed to do it using groupby and apply, but when using it on real data with literally millions of rows, it takes too long. So I am looking for a solution with vectorization.

def std(row, data):
    data = data.loc[data['group']==row['group']]
    return data.groupby(['team']).filter(lambda x:(x['team']!=row['team']).all())['value'].std()

df['std_exclude'] = df.apply(lambda x: std(x, data=df), axis=1)
You can use transform after combining group and team as a list: df['std'] = (df.assign(new=df[['group', 'team']].values.tolist())['new'].transform( lambda x: df[df['group'].eq(x[0]) & df['team'].ne(x[1])]['value'].std())) Output: group team value std 0 A 1 1 2.217356 1 A 1 2 2.217356 2 A 2 5 0.816497 3 A 2 7 0.816497 4 A 3 2 2.753785 5 A 3 3 2.753785 6 B 1 7 2.217356 7 B 1 8 2.217356 8 B 2 8 1.707825 9 B 2 9 1.707825 10 B 3 6 0.816497 11 B 3 4 0.816497 There are some equal std values across different groups but you can verify that their std values are indeed equal.
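If the transform above is still too slow on millions of rows, a different, fully vectorized angle is possible: a sketch of my own (not the method above) that aggregates once per (group, team) and derives each complement's standard deviation from group totals via the sum-of-squares identity:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'group': ['A','A','A','A','A','A','B','B','B','B','B','B'],
    'team':  ['1','1','2','2','3','3','1','1','2','2','3','3'],
    'value': [1,2,5,7,2,3,7,8,8,9,6,4],
})

# one row per (group, team): sum, sum of squares and count
stats = (df.assign(sq=df['value']**2)
           .groupby(['group', 'team'])
           .agg(s1=('value', 'sum'), s2=('sq', 'sum'), n=('value', 'size')))

# group totals broadcast back to the (group, team) level
totals = stats.groupby('group').transform('sum')

# complement = everything in the group except the current team
m  = totals['n'] - stats['n']
s1 = totals['s1'] - stats['s1']
s2 = totals['s2'] - stats['s2']

# sample variance of the complement (ddof=1, matching pandas' default .std())
stats['std_exclude'] = np.sqrt((s2 - s1**2 / m) / (m - 1))

# map back onto the original rows
out = df.merge(stats['std_exclude'].reset_index(), on=['group', 'team'], how='left')
print(out)

On the toy frame this reproduces the 2.217356 / 0.816497 / 2.753785 values shown above, while only touching each row once plus one small aggregation per (group, team).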
3
1
74,502,929
2022-11-19
https://stackoverflow.com/questions/74502929/using-pandas-to-dynamically-replace-values-found-in-other-columns
I have a dataset that looks like this:

Car                          Make         Model        Engine
Toyota Rav 4 8cyl6L          Toyota                    8cyl6L
Mitsubishi Eclipse 2.1T      Mitsubishi                2.1T
Monster Gravedigger 25Lsc    Monster                   25Lsc

The data was clearly concatenated from Make + Model + Engine at some point, but the car Model was not provided to me. I've been trying to use Pandas to say that if we take Car, replace instances of Make with nothing, replace instances of Engine with nothing, then trim the spaces around the result, we will have Model.

Car                          Make         Model        Engine
Toyota Rav 4 8cyl6L          Toyota       Rav 4        8cyl6L
Mitsubishi Eclipse 2.1T      Mitsubishi   Eclipse      2.1T
Monster Gravedigger 25Lsc    Monster      Gravedigger  25Lsc

There's something I'm doing wrong when I try to reference another column in this manner.

df['Model'] = df['Car'].str.replace(df['Make'],'')

gives me an error of "unhashable type: 'Series'". I'm guessing I'm accidentally passing in the entire 'Make' column. At every row I want to make a different substitution using data from other columns in that row. How would I accomplish this?
you can use: df['Model']=df.apply(lambda x: x['Car'].replace(x['Make'],"").replace(x['Engine'],""),axis=1) print(df) ''' Car Make Model Engine 0 Toyota Rav 4 8cyl6L Toyota Rav 4 8cyl6L 1 Mitsubishi Eclipse 2.1T Mitsubishi Eclipse 2.1T 2 Monster Gravedigger 25Lsc Monster Gravedigger 25Lsc '''
3
2
74,495,598
2022-11-18
https://stackoverflow.com/questions/74495598/sqlalchemy-attributeerror-connection-object-has-no-attribute-commit
When using SQLAlchemy (version 1.4.44) to create, drop or otherwise modify tables, the updates don't appear to be committing. Attempting to solve this, I'm following the docs and using the commit() function. Here's a simple example from sqlalchemy import create_engine, text engine = create_engine("postgresql://user:password@connection_string:5432/database_name") with engine.connect() as connection: sql = "create table test as (select count(1) as result from userquery);" result = connection.execute(text(sql)) connection.commit() This produces the error: AttributeError: 'Connection' object has no attribute 'commit' What am I missing?
The comment on the question is correct: you are looking at the 2.0 docs. But all you need to do is set future=True when calling create_engine() to use the "commit as you go" functionality provided in 2.0. See migration-core-connection-transaction:

When using 2.0 style with the create_engine.future flag, "commit as you go" style may also be used, as the Connection features autobegin behavior, which takes place when a statement is first invoked in the absence of an explicit call to Connection.begin():
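Applied to the snippet in the question, that would look roughly like this (only the future=True flag is new):

from sqlalchemy import create_engine, text

# future=True opts the 1.4 engine into the 2.0-style API, including "commit as you go"
engine = create_engine("postgresql://user:password@connection_string:5432/database_name",
                       future=True)

with engine.connect() as connection:
    sql = "create table test as (select count(1) as result from userquery);"
    connection.execute(text(sql))
    connection.commit()   # available because this is now a 2.0-style Connection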
10
20
74,495,636
2022-11-18
https://stackoverflow.com/questions/74495636/converting-from-np-float64-to-np-float32-completely-changes-the-value-of-some-nu
I have a numpy array of dtype=float64, when attempting to convert it the types to float 32, some values change completely. for example, i have the following array: `test_64 = np.array([20110927.00000,20110928.00000,20110929.00000,20110930.00000,20111003.00000,20111004.00000,20111005.00000,20111006.00000,20111007.00000,20111010.00000,20111011.00000,20111012.00000,20111013.00000,20111014.00000,20111017.00000,20111018.00000,20111019.00000,20111020.00000,20111021.00000,20111024.00000,20111025.00000,20111026.00000,20111027.00000,20111028.00000,20111031.00000,20111101.00000,20111102.00000,20111103.00000,20111104.00000,20111107.00000,20111108.00000,20111109.00000,20111110.00000,20111111.00000,20111114.00000,20111115.00000,20111116.00000,20111117.00000,20111118.00000,20111121.00000,20111122.00000,20111123.00000,20111125.00000,20111128.00000,20111129.00000,20111130.00000,20111201.00000,20111202.00000,20111205.00000,20111206.00000,20111207.00000,20111208.00000,20111209.00000,20111212.00000,20111213.00000,20111214.00000,20111215.00000,20111216.00000,20111219.00000,20111220.00000,20111221.00000,20111222.00000,20111223.00000,20111227.00000,20111228.00000,20111229.00000,20111230.00000,20120103.00000,20120104.00000,20120105.00000,20120106.00000,20120109.00000,20120110.00000,20120111.00000,20120112.00000,20120113.00000,20120117.00000,20120118.00000,20120119.00000,20120120.00000,20120123.00000,20120124.00000,20120125.00000,20120126.00000,20120127.00000,20120130.00000,20120131.00000,20120201.00000,20120202.00000,20120203.00000,20120206.00000,20120207.00000,20120208.00000,20120209.00000,20120210.00000,20120213.00000,20120214.00000,20120215.00000,20120216.00000,20120217.00000]) test_32 = np.array(test_64, dtype=np.float32)` this would change the values of 20110927.00000 to 20110928.00000 even attempting: np.float32(test_64[0]) would result to changing the value to of 20110927.00000 to 20110928.00000 same thing happening when using cupy arrays
Well, yes, that is what float32 is. The shortest way to see it: float32 has a 24-bit significand, plus 1 sign bit and 8 exponent bits. That is 33 bits in all, but the 1st significand bit is not stored, because it is assumed to be 1.

np.log2(20110927.) # 24.2614762474699

So, see the problem. You would need 25 bits to have unit precision on this number. Since you don't have them, 20110927 and 20110928 are roughly the same from float32's point of view.

Longer answer: encode 20110927 in FP32, and then encode 20110928.

20110927 is 1.1987046599388123 × 2²⁴

So the exponent is 24. That is, 24+127=151 in the FP32 format. Then, forgetting the 1st bit, which is implicit (since the exponent was chosen so that the significand starts with this 1.), the 23 significand bits are

s=1.1987046599388123 # Implicit 1
s=s%1*2 # 0.3974... →0
s=s%1*2 # 0.7948... →0
s=s%1*2 # 1.5896... →1
s=s%1*2 # 1.1793... →1
s=s%1*2 # 0.3585... →0
s=s%1*2 # 0.7171... →0
s=s%1*2 # 1.4342... →1
s=s%1*2 # 0.8684... →0
s=s%1*2 # 1.7368... →1
s=s%1*2 # 1.4736... →1
s=s%1*2 # 0.9471... →0
s=s%1*2 # 1.8943... →1
s=s%1*2 # 1.7886... →1
s=s%1*2 # 1.5771... →1
s=s%1*2 # 1.1543... →1
s=s%1*2 # 0.3086... →0
s=s%1*2 # 0.6172... →0
s=s%1*2 # 1.2344... →1
s=s%1*2 # 0.4688... →0
s=s%1*2 # 0.9375... →0
s=s%1*2 # 1.8750... →1
s=s%1*2 # 1.7500... →1
s=s%1*2 # 1.5000... →1

(s%1 is the fractional part of a float. 1.51%1 is 0.51)

I compute it that way, starting from 20110927/2²⁴, since that is what is encoded in base 2. But in reality, what that is, is just the 24 most significant bits of the binary encoding of 20110927.

bin(20110927) # 1001100101101111001001111

Note that those are the same bits, except for the last 1, since there are 25 bits and we need only 24, including the implicit 1. And because the next bit is 1, or because the last s of my algorithm on floats is 1.5, it is rounded up to the next value. So in the end, what is encoded is 100110010110111100101000.

(I mention this rounding detail for accuracy, to get an exact result. But it is not the reason for your problem. If it had not been rounded up, all that would have changed is that, instead of having 20110927=20110928, you would have had 20110927=20110926. Either way, 24 bits are not enough to distinguish two consecutive base-10 numbers greater than 16777216. And the rounding direction is not a sure thing: sometimes .5 gets rounded down.)

Ignoring the first bit, and adding the sign (0) and exponent (24+127=151, aka 10010111), the float32 representation of 20110927.0 is

01001011100110010110111100101000

Do the same for 20110928.0... and you get the exact same result. So, in float32, 20110927.0 and 20110928.0 (and 20110927.5, ...) are the same thing.
Another way to check that, without the theory on how float32 is encoded:

import struct
bin(struct.unpack('i', struct.pack('f', 20110927))[0])
# 0b1001011100110010110111100101000
bin(struct.unpack('i', struct.pack('f', 20110928))[0])
# 0b1001011100110010110111100101000

Or, to see a bigger picture:

import struct
for i in range(20110901, 20110931):
    print(i, bin(struct.unpack('i', struct.pack('f', i))[0]))

20110901 0b1001011100110010110111100011010
20110902 0b1001011100110010110111100011011
20110903 0b1001011100110010110111100011100
20110904 0b1001011100110010110111100011100
20110905 0b1001011100110010110111100011100
20110906 0b1001011100110010110111100011101
20110907 0b1001011100110010110111100011110
20110908 0b1001011100110010110111100011110
20110909 0b1001011100110010110111100011110
20110910 0b1001011100110010110111100011111
20110911 0b1001011100110010110111100100000
20110912 0b1001011100110010110111100100000
20110913 0b1001011100110010110111100100000
20110914 0b1001011100110010110111100100001
20110915 0b1001011100110010110111100100010
20110916 0b1001011100110010110111100100010
20110917 0b1001011100110010110111100100010
20110918 0b1001011100110010110111100100011
20110919 0b1001011100110010110111100100100
20110920 0b1001011100110010110111100100100
20110921 0b1001011100110010110111100100100
20110922 0b1001011100110010110111100100101
20110923 0b1001011100110010110111100100110
20110924 0b1001011100110010110111100100110
20110925 0b1001011100110010110111100100110
20110926 0b1001011100110010110111100100111
20110927 0b1001011100110010110111100101000
20110928 0b1001011100110010110111100101000
20110929 0b1001011100110010110111100101000
20110930 0b1001011100110010110111100101001

Note that half of the time .5 is rounded up and half of the time rounded down, leading to this 3/1 pattern: 20110919=20110920=20110921, 20110922 is unique, 20110923=20110924=20110925, 20110926 is unique, 20110927=20110928=20110929, ... But the important point is that there are fewer possible float32 values than there are 8-digit base-10 numbers in this range.
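A shorter NumPy-only check of the same fact, using np.spacing (which reports the gap from a float to the next representable one):

import numpy as np

x = np.float32(20110927.0)
print(np.spacing(x))                                      # 2.0 -- float32 steps by 2 at this magnitude
print(np.float32(20110927.0) == np.float32(20110928.0))   # True
print(np.float64(20110927.0) == np.float64(20110928.0))   # False -- float64 still tells them apart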
3
3
74,495,814
2022-11-18
https://stackoverflow.com/questions/74495814/best-method-to-measure-execution-time-of-a-python-snippet
I want to compare execution time of two snippets and see which one is faster. So, I want an accurate method to measure execution time of my python snippets. I already tried using time.time(), time.process_time(), time.perf_counter_ns() as well as timeit.timeit(), but I am facing the same issues with all of the them. That is: when I use any of the above methods to measure execution time of THE SAME snippet, it returns a different value each time I run it. And this variation is somewhat significant, to the extent that I cannot reliably use them to compare difference in execution time of two snippets. As an example, I am running following code in my google colab: import time t1 = time.perf_counter() sample_list = [] for i in range(1000000): sample_list.append(i) t2 = time.perf_counter() print(t2 - t1) I ran above code 10 times and the variation in my results is about 50% (min value = 0.14, max value = 0.28). Any alternatives?
The execution time of a given code snippet will almost always be different every time you run it. Most tools that are available for profiling a single function/snippet of code take this into account, and run the code multiple times to be able to provide an average execution time. The reason for this is that there are other processes running on your computer, and resources are not always allocated the same way, so it is impossible to control every variable so that you get the same execution time for every run. One of the easiest ways to profile a given function or short snippet of code is using the %timeit "magic" command in ipython. Example: >>> %timeit 1 + 1 8.41 ns ± 0.0181 ns per loop (mean ± std. dev. of 7 runs, 100,000,000 loops each) It also allows you to enter a multi-line block of code to time if you use %%timeit instead of %timeit. The timeit library can be used independently, but it is often easier to use in an interactive ipython session. Additional resources: How to profile my code? What is %timeit in Python? How can I time a code segment for testing performance with Pythons timeit?
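If you want the same repeat-and-summarize behaviour outside of IPython, the timeit module mentioned above can do it directly; a small sketch using the snippet from the question:

import timeit

stmt = """
sample_list = []
for i in range(1000000):
    sample_list.append(i)
"""

# run the snippet 5 times, 10 executions each
times = timeit.repeat(stmt, number=10, repeat=5)
print([t / 10 for t in times])  # per-execution time for each run
print(min(times) / 10)          # the minimum is usually the most stable summary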
7
5
74,469,039
2022-11-17
https://stackoverflow.com/questions/74469039/is-it-possible-to-change-the-seed-of-a-random-generator-in-numpy
Say I instantiated a random generator with import numpy as np rng = np.random.default_rng(seed=42) and I want to change its seed. Is it possible to update the seed of the generator instead of instantiating a new generator with the new seed? I managed to find that you can see the state of the generator with rng.__getstate__(), for example in this case it is {'bit_generator': 'PCG64', 'state': {'state': 274674114334540486603088602300644985544, 'inc': 332724090758049132448979897138935081983}, 'has_uint32': 0, 'uinteger': 0} and you can change it with rng.__setstate__ with arguments as printed above, but it is not clear to me how to set those arguments so that to have the initial state of the rng given a different seed. Is it possible to do so or instantiating a new generator is the only way?
N.B. The other answer (https://stackoverflow.com/a/74474377/2954547) is better. Use that one, not this one. This is maybe a silly hack, but one solution is to create a new RNG instance using the desired new seed, then replace the state of the existing RNG instance with the state of the new instance: import numpy as np seed = 12345 rng = np.random.default_rng(seed) x1 = rng.normal(size=10) rng.__setstate__(np.random.default_rng(seed).__getstate__()) x2 = rng.normal(size=10) np.testing.assert_array_equal(x1, x2) However this isn't much different from just replacing the RNG instance. Edit: To answer the question more directly, I don't think it's possible to replace the seed without constructing a new Generator or BitGenerator, unless you know how to correctly construct the state data for the particular BitGenerator inside your Generator. Creating a new RNG is fairly cheap, and while I understand the conceptual appeal of not instantiating a new one, the only alternative here is to post a feature request on the Numpy issue tracker or mailing list.
4
1
74,463,116
2022-11-16
https://stackoverflow.com/questions/74463116/how-to-create-multi-part-paths-with-fastapi
I'm working on a FastAPI application, and I want to create multi-part paths. What I mean by this is I know how to create a path like this for all the REST methods: /api/people/{person_id} but what's a good way to create this: /api/people/{person_id}/accounts/{account_id} I could just keep adding routes in the "people" routes module to create the additional accounts paths, but I feel like there should be a separate "accounts" routes module that could be included in the "people" routes module, and I'm just missing something. Am I over-thinking this?
In addition to what I have mentioned in the comments, would something like this be of use? from fastapi import FastAPI, APIRouter app = FastAPI() people_router = APIRouter(prefix='/people') account_router = APIRouter(prefix='/{person_id}/accounts') @people_router.get('/{person_id}') def get_person_id(person_id: int) -> dict[str, int]: return {'person_id': person_id} @account_router.get('/{account_id}') def get_account_id(person_id: int, account_id: int) -> dict[str, int]: return {'person_id': person_id, 'account_id': account_id} people_router.include_router(account_router) app.include_router(people_router, prefix='/api')
4
5
74,482,742
2022-11-17
https://stackoverflow.com/questions/74482742/how-to-use-pytest-to-confirm-proper-exception-is-raised
I have the following code to create an Object account. I raise an error if the account meets certain conditions, e.g. is too long. I want to use pytest to test that that functionality works. class Account: def __init__(self, acct): self.tagged = {} self.untagged = {} self.acct_stats = {} try: if len(str(acct)) < 12: prepend_digit_range = 12 - len(str(acct)) for i in range(prepend_digit_range): acct = "0" + str(acct) if len(str(acct)) > 12: raise ValueError except ValueError: logging.error("Account ID " + str(acct) + " invalid") raise self.account_id = str(acct) I'm trying to use pytest to test it. First I tried this, which didn't work: # Per https://www.authentise.com/post/pytest-and-parametrization @pytest.mark.skip("WIP") @pytest.mark.parametrize( "account_id, expected", [ pytest.mark.raises( (31415926535897932384626, "Account ID 31415926535897932384626 invalid"), exception=ValueError ) ] ) def test_account_setup_raise_error1(account_id, expected): account = Account(account_id) with expected: assert account.account_id == expected Then I tried this: # Per https://docs.pytest.org/en/stable/example/parametrize.html#parametrizing-conditional-raising @pytest.mark.parametrize( "account_id, expected", [ ( (31415926535897932384626, "Account ID 31415926535897932384626 invalid"), ValueError, ) ], ) def test_account_setup_raise_error(account_id, expected): account = Account(account_id) with expected: assert account.account_id == expected which doesn't work either; I get this when I run pytest: /usr/local/lib/python3.10/site-packages/pluggy/_hooks.py:265: in __call__ return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) /usr/local/lib/python3.10/site-packages/pluggy/_manager.py:80: in _hookexec return self._inner_hookexec(hook_name, methods, kwargs, firstresult) /usr/local/lib/python3.10/site-packages/_pytest/python.py:272: in pytest_pycollect_makeitem return list(collector._genfunctions(name, obj)) /usr/local/lib/python3.10/site-packages/_pytest/python.py:499: in _genfunctions self.ihook.pytest_generate_tests.call_extra(methods, dict(metafunc=metafunc)) /usr/local/lib/python3.10/site-packages/pluggy/_hooks.py:292: in call_extra return self(**kwargs) /usr/local/lib/python3.10/site-packages/pluggy/_hooks.py:265: in __call__ return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) /usr/local/lib/python3.10/site-packages/pluggy/_manager.py:80: in _hookexec return self._inner_hookexec(hook_name, methods, kwargs, firstresult) /usr/local/lib/python3.10/site-packages/_pytest/python.py:151: in pytest_generate_tests metafunc.parametrize(*marker.args, **marker.kwargs, _param_mark=marker) /usr/local/lib/python3.10/site-packages/_pytest/python.py:1300: in parametrize argnames, parametersets = ParameterSet._for_parametrize( /usr/local/lib/python3.10/site-packages/_pytest/mark/structures.py:166: in _for_parametrize if len(param.values) != len(argnames): E TypeError: object of type 'MarkDecorator' has no len() Any thoughts on why? How can I get a test to work that tests that the proper exception is raised? UPDATE: I've implemented @pl3b's response and got the test code working. It is below; I've updated the parameters to match the changes. 
@pytest.mark.parametrize( "account_id", [ ( (31415926535897932384626), ("blah"), ValueError, ) ], ) def test_account_setup_raise_error(account_id): with pytest.raises(ValueError, match="invalid"): account = Account(account_id) print(account) Note that I also added another test case, for which I updated the code itself: if len(str(acct)) > 12 or re.match("^[0-9]+$", str(acct)) is None: raise ValueError("Account ID " + str(acct) + " invalid")
Have you tried with pytest.raises()? with pytest.raises(ValueError, match='invalid'): account = Account(account_id) Source
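If you want to keep the parametrization as well, the pattern from the pytest docs on parametrizing conditional raising combines both; a sketch, assuming the Account class from the question is importable in the test module:

from contextlib import nullcontext as does_not_raise

import pytest

@pytest.mark.parametrize(
    "account_id, expectation",
    [
        (31415926535897932384626, pytest.raises(ValueError, match="invalid")),
        (123456789012, does_not_raise()),
    ],
)
def test_account_setup(account_id, expectation):
    with expectation:
        Account(account_id)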
3
2
74,489,594
2022-11-18
https://stackoverflow.com/questions/74489594/torchvision-using-pretrained-weights-for-entire-model-vs-backbone
TorchVision Detection models have a weights and a weights_backbone parameter. Does using pretrained weights imply that the model uses pretrained weights_backbone under the hood? I am training a RetinaNet model and I'm not sure which of the two options I should use and what the differences are.
The difference is pretty simple: you can either choose to do transfer learning on the backbone only or on the whole network. RetinaNet from Torchvision has a ResNet50 backbone. You should be able to do both of:

retinanet_resnet50_fpn(weights=RetinaNet_ResNet50_FPN_Weights.COCO_V1)
retinanet_resnet50_fpn(weights_backbone=ResNet50_Weights.IMAGENET1K_V1)

As implied by their names, the backbone weights are different. The former were trained on COCO (object detection) while the latter were trained on ImageNet (classification). To answer your question, using pretrained weights implies that the whole network, including the backbone weights, is initialized. However, I don't think that it uses weights_backbone under the hood.
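Spelled out with imports, roughly (a sketch against torchvision's multi-weight API; double-check the enum names for your version):

from torchvision.models import ResNet50_Weights
from torchvision.models.detection import (RetinaNet_ResNet50_FPN_Weights,
                                           retinanet_resnet50_fpn)

# whole detector pretrained on COCO (backbone included)
full_pretrained = retinanet_resnet50_fpn(
    weights=RetinaNet_ResNet50_FPN_Weights.COCO_V1)

# only the ResNet50 backbone pretrained on ImageNet; the detection head starts from scratch
backbone_only = retinanet_resnet50_fpn(
    weights=None,
    weights_backbone=ResNet50_Weights.IMAGENET1K_V1)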
4
2
74,467,875
2022-11-16
https://stackoverflow.com/questions/74467875/vs-code-the-isort-server-crashed-5-times-in-the-last-3-minutes
I may have messed up some environment path variables. I was tinkering around in VS Code while learning about Django and virtual environments, and changing the directory path of my Python install. While figuring out how to point VS Code at the default Python path, I deleted some User path variables. Then isort began to refuse to run. I've tried uninstalling the extension(s), deleting the ms-python.* extension folders, uninstalling VS Code itself, clearing the Python Workspace Interpreter Settings, and restarting my computer. Even if it's not my path variables, does anyone know the defaults that should be in the "user" path variables?
I ended up refreshing my Windows install. Was for the best because I'm repurposing an older machine anyway.
17
-3
74,488,759
2022-11-18
https://stackoverflow.com/questions/74488759/tkinter-tclerror-cant-delete-tcl-command-customtkinter-custom-prompt
What do I need I am trying to implement a custom Yes / No prompt box with help of tkinter. However I don't want to use the default messagebox, because I require the following two functionalites: a default value a countdown after which the widget destroys itself and takes the default value as answer What are the unpredictable errors I've managed to implement these requirements with the code below, however I get some really unpredictable behaviour when using the widgets in the following sense: Sometimes everything works as expected. When I press the buttons, the correct answer is stored, or if I let the countdown time out, the default answer is stored, or if I click the close-window it correctly applies the default value as answer. But then, at times when I click the buttons, I get some wierd errors _tkinter.TclError: invalid command name ".!ctkframe2.!ctkcanvas" (see execution log below for whole stacktrace) I suspect it has something to do with the timer, since the errors do not always apper when the buttons are pressed. It is really driving me crazy... example code # util_gui_classes.py # -*- coding: utf-8 -*- """ Classes which serve for gui applications. """ from typing import Any import tkinter import tkinter.messagebox import customtkinter # ____________________________________________________________________________________________ customtkinter.set_appearance_mode('System') # Modes: 'System' (standard), 'Dark', 'Light' customtkinter.set_default_color_theme('blue') # Themes: 'blue' (standard), 'green', 'dark-blue' # ____________________________________________________________________________________________ class GuiPromptYesNo(customtkinter.CTk): """ Creates a yes / no gui based prompt with default value and countdown functionality. The user input will be stored in: > instance.answer """ WIDTH = 500 HEIGHT = 200 def __init__(self, question: str, default_value: str = 'no', countdown_seconds: int = 0): super().__init__() self.title('input required') self.geometry(f'{self.__class__.WIDTH}x{self.__class__.HEIGHT}') self.protocol('WM_DELETE_WINDOW', self.on_closing) # call .on_closing() when app gets closed self.resizable(False, False) self.question = question self.answer = None self.default_value = default_value self.countdown_seconds = countdown_seconds self.remaining_seconds = countdown_seconds # ============ create top-level-frames ============ # configure grid layout (4x1) self.equal_weighted_grid(self, 4, 1) self.grid_rowconfigure(0, minsize=10) self.grid_rowconfigure(3, minsize=10) self.frame_label = customtkinter.CTkFrame(master=self, corner_radius=10) self.frame_label.grid(row=1, column=0) self.frame_buttons = customtkinter.CTkFrame(master=self, corner_radius=0, fg_color=None) self.frame_buttons.grid(row=2, column=0) # ============ design frame_label ============ # configure grid layout (5x4) self.equal_weighted_grid(self.frame_label, 5, 4) self.frame_label.grid_rowconfigure(0, minsize=10) self.frame_label.grid_rowconfigure(2, minsize=10) self.frame_label.grid_rowconfigure(5, minsize=10) self.label_question = customtkinter.CTkLabel( master=self.frame_label, text=self.question, text_font=('Consolas',), ) self.label_question.grid(row=1, column=0, columnspan=4, pady=5, padx=10) self.label_default_value = customtkinter.CTkLabel( master=self.frame_label, text='default value: ', text_font=('Consolas',), ) self.label_default_value.grid(row=3, column=0, pady=5, padx=10) self.entry_default_value = customtkinter.CTkEntry( master=self.frame_label, width=40, justify='center', 
placeholder_text=self.default_value, state='disabled', textvariable=tkinter.StringVar(value=self.default_value), text_font=('Consolas',), ) self.entry_default_value.grid(row=3, column=1, pady=5, padx=10) if countdown_seconds > 0: self.label_timer = customtkinter.CTkLabel( master=self.frame_label, text='timer [s]: ', text_font=('Consolas',), ) self.label_timer.grid(row=3, column=2, pady=5, padx=10) self.entry_timer = customtkinter.CTkEntry( master=self.frame_label, width=40, justify='center', state='disabled', textvariable=tkinter.StringVar(value=str(self.remaining_seconds)), placeholder_text=str(self.remaining_seconds), text_font=('Consolas',), ) self.entry_timer.grid(row=3, column=3, pady=5, padx=10) # ============ design frame_buttons ============ # configure grid layout (3x2) self.equal_weighted_grid(self.frame_buttons, 3, 2) self.frame_buttons.grid_rowconfigure(0, minsize=10) self.frame_buttons.grid_rowconfigure(2, minsize=10) self.button_yes = customtkinter.CTkButton( master=self.frame_buttons, text='yes', text_font=('Consolas',), command=lambda: self.button_event('yes'), ) self.button_yes.grid(row=1, column=0, pady=5, padx=20) self.button_no = customtkinter.CTkButton( master=self.frame_buttons, text='no', text_font=('Consolas',), command=lambda: self.button_event('no'), ) self.button_no.grid(row=1, column=1, pady=5, padx=20) if self.countdown_seconds > 0: self.countdown() self.attributes('-topmost', True) self.mainloop() # __________________________________________________________ # methods @staticmethod def equal_weighted_grid(obj: Any, rows: int, cols: int): """configures the grid to be of equal cell sizes for rows and columns.""" for row in range(rows): obj.grid_rowconfigure(row, weight=1) for col in range(cols): obj.grid_columnconfigure(col, weight=1) def button_event(self, answer): """Stores the user input as instance attribute `answer`.""" self.answer = answer self.terminate() def countdown(self): """Sets the timer for the question.""" if self.answer is not None: self.terminate() elif self.remaining_seconds < 0: self.answer = self.default_value self.terminate() else: self.entry_timer.configure(textvariable=tkinter.StringVar(value=str(self.remaining_seconds))) self.remaining_seconds -= 1 self.after(1000, self.countdown) def stop_after_callbacks(self): """Stops all after callbacks on the root.""" for after_id in self.tk.eval('after info').split(): self.after_cancel(after_id) def on_closing(self, event=0): """If the user presses the window x button without providing input""" if self.answer is None and self.default_value is not None: self.answer = self.default_value self.terminate() def terminate(self): """Properly terminates the gui.""" # stop all .after callbacks to avoid error message "Invalid command ..." after destruction self.stop_after_callbacks() self.destroy() # ____________________________________________________________________________________________ if __name__ == '__main__': print('\n', 'do some python stuff before', '\n', sep='') q1 = GuiPromptYesNo(question='1. do you want to proceed?', countdown_seconds=5) print(f'>>>{q1.answer=}') print('\n', 'do some python stuff in between', '\n', sep='') q2 = GuiPromptYesNo(question='2. 
do you want to proceed?', countdown_seconds=5) print(f'>>>{q2.answer=}') print('\n', 'do some python stuff at the end', '\n', sep='') # ____________________________________________________________________________________________ execution log with errors The first three tests where successful (clicking buttons included), after that the errors appeared. (py311) C:\Users\user\PycharmProjects\Sandbox\gui_tools>python util_guis.py do some python stuff before q1.answer='yes' do some python stuff in between q2.answer='no' do some python stuff at the end (py311) C:\Users\user\PycharmProjects\Sandbox\gui_tools>python util_guis.py do some python stuff before q1.answer='yes' do some python stuff in between q2.answer='yes' do some python stuff at the end (py311) C:\Users\user\PycharmProjects\Sandbox\gui_tools>python util_guis.py do some python stuff before q1.answer='no' do some python stuff in between q2.answer='no' do some python stuff at the end (py311) C:\Users\user\PycharmProjects\Sandbox\gui_tools>python util_guis.py do some python stuff before >>>q1.answer='yes' do some python stuff in between Exception in Tkinter callback Traceback (most recent call last): File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 1948, in __call__ return self.func(*args) ^^^^^^^^^^^^^^^^ File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 861, in callit func(*args) File "C:\Users\user\PycharmProjects\Sandbox\gui_tools\util_guis.py", line 197, in countdown self.terminate() File "C:\Users\user\PycharmProjects\Sandbox\gui_tools\util_guis.py", line 224, in terminate child.destroy() File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\widget_base_class.py", line 85, in destroy super().destroy() File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 2635, in destroy for c in list(self.children.values()): c.destroy() ^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\widget_base_class.py", line 85, in destroy super().destroy() File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 2639, in destroy Misc.destroy(self) File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 687, in destroy self.tk.deletecommand(name) _tkinter.TclError: can't delete Tcl command >>>q2.answer='no' do some python stuff at the end (py311) C:\Users\user\PycharmProjects\Sandbox\gui_tools>python util_guis.py do some python stuff before Exception in Tkinter callback Traceback (most recent call last): File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 1948, in __call__ return self.func(*args) ^^^^^^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\ctk_button.py", line 377, in clicked self.command() File "C:\Users\user\PycharmProjects\Sandbox\gui_tools\util_guis.py", line 124, in <lambda> command=lambda: self.button_event('yes'), ^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\PycharmProjects\Sandbox\gui_tools\util_guis.py", line 156, in button_event self.terminate() File "C:\Users\user\PycharmProjects\Sandbox\gui_tools\util_guis.py", line 186, in terminate self.destroy() File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\windows\ctk_tk.py", line 108, in destroy super().destroy() File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 2367, in destroy for c in list(self.children.values()): c.destroy() ^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\widget_base_class.py", line 85, in 
destroy super().destroy() File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 2635, in destroy for c in list(self.children.values()): c.destroy() ^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\widget_base_class.py", line 85, in destroy super().destroy() File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 2639, in destroy Misc.destroy(self) File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 687, in destroy self.tk.deletecommand(name) _tkinter.TclError: can't delete Tcl command Exception in Tkinter callback Traceback (most recent call last): File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 1948, in __call__ return self.func(*args) ^^^^^^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\widget_base_class.py", line 142, in update_dimensions_event self.draw(no_color_updates=True) # faster drawing without color changes ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\ctk_frame.py", line 80, in draw requires_recoloring = self.draw_engine.draw_rounded_rect_with_border(self.apply_widget_scaling(self._current_width), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\draw_engine.py", line 88, in draw_rounded_rect_with_border return self.__draw_rounded_rect_with_border_font_shapes(width, height, corner_radius, border_width, inner_corner_radius, ()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\draw_engine.py", line 207, in __draw_rounded_rect_with_border_font_shapes self._canvas.delete("border_parts") File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 2879, in delete self.tk.call((self._w, 'delete') + args) _tkinter.TclError: invalid command name ".!ctkframe2.!ctkcanvas" >>>q1.answer='yes' do some python stuff in between Exception in Tkinter callback Traceback (most recent call last): File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 1948, in __call__ return self.func(*args) ^^^^^^^^^^^^^^^^ File "C:\Users\user\python\shared_venvs\py311\Lib\site-packages\customtkinter\widgets\ctk_button.py", line 377, in clicked self.command() super().destroy() File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 2639, in destroy Misc.destroy(self) File "C:\Program Files\Python311\Lib\tkinter\__init__.py", line 687, in destroy self.tk.deletecommand(name) _tkinter.TclError: can't delete Tcl command >>>q2.answer='no' do some python stuff at the end (py311) C:\Users\user\PycharmProjects\Sandbox\gui_tools> requirements I am using Windows 11 as os and have a virtual python 3.11 environment with customtkinter installed. EDIT: With the help of @Thingamabobs answer I managed to achive the expected behaviour without getting the errors. Here is the final code: # util_gui_classes.py # -*- coding: utf-8 -*- """ Classes which serve for gui applications. 
""" from typing import Any import tkinter import tkinter.messagebox import customtkinter from _tkinter import TclError # _______________________________________________________________________ customtkinter.set_appearance_mode('System') # Modes: 'System' (standard), 'Dark', 'Light' customtkinter.set_default_color_theme('blue') # Themes: 'blue' (standard), 'green', 'dark-blue' # _______________________________________________________________________ class GuiPromptYesNo(customtkinter.CTk): """ Creates a yes / no gui based prompt with default value and countdown functionality. The user input will be stored in: >>> instance.answer """ WIDTH = 500 HEIGHT = 200 def __init__(self, question: str, default_value: str = 'no', countdown_seconds: int = 0): super().__init__() self.terminated = False self.title('input required') self.geometry(f'{self.__class__.WIDTH}x{self.__class__.HEIGHT}') self.protocol('WM_DELETE_WINDOW', self.on_closing) # call .on_closing() when app gets closed self.resizable(False, False) self.question = question self.answer = None self.default_value = default_value self.countdown_seconds = countdown_seconds self.remaining_seconds = countdown_seconds # ============ create top-level-frames ============ # configure grid layout (4x1) self.equal_weighted_grid(self, 4, 1) self.grid_rowconfigure(0, minsize=10) self.grid_rowconfigure(3, minsize=10) self.frame_label = customtkinter.CTkFrame(master=self, corner_radius=10) self.frame_label.grid(row=1, column=0) self.frame_buttons = customtkinter.CTkFrame(master=self, corner_radius=0, fg_color=None) self.frame_buttons.grid(row=2, column=0) # ============ design frame_label ============ # configure grid layout (5x4) self.equal_weighted_grid(self.frame_label, 5, 4) self.frame_label.grid_rowconfigure(0, minsize=10) self.frame_label.grid_rowconfigure(2, minsize=10) self.frame_label.grid_rowconfigure(5, minsize=10) self.label_question = customtkinter.CTkLabel( master=self.frame_label, text=self.question, text_font=('Consolas',), ) self.label_question.grid(row=1, column=0, columnspan=4, pady=5, padx=10) self.label_default_value = customtkinter.CTkLabel( master=self.frame_label, text='default value: ', text_font=('Consolas',), ) self.label_default_value.grid(row=3, column=0, pady=5, padx=10) self.entry_default_value = customtkinter.CTkEntry( master=self.frame_label, width=40, justify='center', placeholder_text=self.default_value, state='disabled', textvariable=tkinter.StringVar(value=self.default_value), text_font=('Consolas',), ) self.entry_default_value.grid(row=3, column=1, pady=5, padx=10) if countdown_seconds > 0: self.label_timer = customtkinter.CTkLabel( master=self.frame_label, text='timer [s]: ', text_font=('Consolas',), ) self.label_timer.grid(row=3, column=2, pady=5, padx=10) self.entry_timer = customtkinter.CTkEntry( master=self.frame_label, width=40, justify='center', state='disabled', textvariable=tkinter.StringVar(value=str(self.remaining_seconds)), placeholder_text=str(self.remaining_seconds), text_font=('Consolas',), ) self.entry_timer.grid(row=3, column=3, pady=5, padx=10) # ============ design frame_buttons ============ # configure grid layout (3x2) self.equal_weighted_grid(self.frame_buttons, 3, 2) self.frame_buttons.grid_rowconfigure(0, minsize=10) self.frame_buttons.grid_rowconfigure(2, minsize=10) self.button_yes = customtkinter.CTkButton( master=self.frame_buttons, text='yes', text_font=('Consolas',), command=lambda: self.button_event('yes'), ) self.button_yes.grid(row=1, column=0, pady=5, padx=20) self.button_no = 
customtkinter.CTkButton( master=self.frame_buttons, text='no', text_font=('Consolas',), command=lambda: self.button_event('no'), ) self.button_no.grid(row=1, column=1, pady=5, padx=20) if self.countdown_seconds > 0: self.countdown() self.attributes('-topmost', True) self.mainloop() # __________________________________________________________ # methods @staticmethod def equal_weighted_grid(obj: Any, rows: int, cols: int): """configures the grid to be of equal cell sizes for rows and columns.""" for row in range(rows): obj.grid_rowconfigure(row, weight=1) for col in range(cols): obj.grid_columnconfigure(col, weight=1) def button_event(self, answer): """Stores the user input as instance attribute `answer`.""" self.answer = answer self.terminate() def countdown(self): """Sets the timer for the question.""" if self.answer is not None: self.terminate() elif self.remaining_seconds < 0: self.answer = self.default_value self.terminate() else: self.entry_timer.configure(textvariable=tkinter.StringVar(value=str(self.remaining_seconds))) self.remaining_seconds -= 1 self.after(1000, self.countdown) def stop_after_callbacks(self): """Stops all after callbacks on the root.""" for after_id in self.tk.eval('after info').split(): self.after_cancel(after_id) def on_closing(self, event=0): """If the user presses the window x button without providing input""" if self.answer is None and self.default_value is not None: self.answer = self.default_value self.terminate() def terminate(self): """Properly terminates the gui.""" # stop all .after callbacks to avoid error message "Invalid command ..." after destruction self.stop_after_callbacks() if not self.terminated: self.terminated = True try: self.destroy() except TclError: self.destroy() # _______________________________________________________________________ if __name__ == '__main__': print('before') q1 = GuiPromptYesNo(question='1. do you want to proceed?', countdown_seconds=5) print(f'>>>{q1.answer=}') print('between') q2 = GuiPromptYesNo(question='2. do you want to proceed?', countdown_seconds=5) print(f'>>>{q2.answer=}') print('after') # _______________________________________________________________________ BTW: the class can also be found in my package utils_nm inside the module util_gui_classes.
While I don't have CTk installed to give you the exact code, I can tell you exactly what is wrong and how you need to solve it. You have a self-repeating function via after here: def countdown(self): """Sets the timer for the question.""" if self.answer is not None: self.terminate() elif self.remaining_seconds < 0: self.answer = self.default_value self.terminate() else: self.entry_timer.configure(textvariable=tkinter.StringVar(value=str(self.remaining_seconds))) self.remaining_seconds -= 1 self.after(1000, self.countdown) The problem you are facing is that you try to delete a window that has already been destroyed in an after call. So depending on which event is quicker, you either run into an error or not. Why does this happen? Whenever you give an instruction, regardless of what it is (with a few exceptions, e.g. update), it is placed in an event queue and scheduled in some sort of FIFO (first in, first out). So the oldest event gets processed. That means you can try to cancel an alarm but end up running the alarm before you actually cancel it. How to solve it? There are different approaches available. The easiest and cleanest solution, in my opinion, is to set a flag and avoid trying to destroy an already destroyed window. Something like: def __init__(self, question: str, default_value: str = 'no', countdown_seconds: int = 0): super().__init__() self.terminated = False and set it like: def terminate(self): """Properly terminates the gui.""" # stop all .after callbacks to avoid error message "Invalid command ..." after destruction #self.stop_after_callbacks() shouldn't be needed if not self.terminated: self.terminated = True self.destroy() In addition to the above proposal, I suggest you set up a protocol handler for WM_DELETE_WINDOW and set the flag there, since the problem probably also occurs when the window is destroyed via the window manager. A different approach is a try/except block in terminate where you catch except _tkinter.TclError:. But note the underscore: the module is not intended to be used directly and it can change in the future, which might break your app again, even if this seems unlikely.
5
4
74,484,933
2022-11-18
https://stackoverflow.com/questions/74484933/how-can-i-break-out-of-telegram-bot-loop-application-run-polling
def bot_start(): application = ApplicationBuilder().token("api_key").build() async def stop(update, context): await context.bot.send_message(chat_id=update.message.chat_id, text='Terminating Bot...') await application.stop() await Updater.shutdown(application.bot) await application.shutdown() async def error(update, context): err = f"Update: {update}\nError: {context.error}" logging.error(err, exc_info=context.error) await context.bot.send_message(chat_id=user_id, text=err) application.add_handler(CommandHandler('stop', stop)) application.add_error_handler(error) application.run_polling() I tried everything I could to stop it and I couldn't, as it's not letting other lines of code that come after calling bot_start() run. It basically never reaches them.
Application.run_polling is a convenience methods that starts everything and keeps the bot running until you signal the process to shut down. It's mainly intended to be used if the Application is the only long-running thing in your python process. If you want to run other things alongside your bot, you can instead manually call the methods listed in the docs of run_polling. You may also want to have a look at this example, where this is showcased for a setup for a custom webhook server is used instead of PTBs built-in one. Disclaimer: I'm currently the maintainer of python-telegram-bot.
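For illustration, a rough sketch of that manual setup, based on the method names in the PTB v20 docs; the stop handler and run_other_tasks are placeholders standing in for the asker's own code:
import asyncio
from telegram.ext import ApplicationBuilder, CommandHandler

async def main():
    application = ApplicationBuilder().token("api_key").build()
    application.add_handler(CommandHandler("stop", stop))  # handler defined elsewhere, as in the question
    async with application:  # takes care of initialize()/shutdown()
        await application.start()
        await application.updater.start_polling()
        await run_other_tasks()  # hypothetical: whatever else the process needs to do alongside the bot
        await application.updater.stop()
        await application.stop()

asyncio.run(main())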
5
5
74,486,877
2022-11-18
https://stackoverflow.com/questions/74486877/is-there-any-way-to-make-an-anti-aliased-circle-in-opencv
I'm trying to draw a circle in a picture using OpenCV with Python. Here is the picture I wish I could make : Here is the code I wrote : import cv2 import numpy as np import imutils from PIL import Image, ImageDraw, ImageFont text1 = "10x" text2 = "20gr" # Load image in OpenCV image = cv2.imread('Sasa.jfif') resized = imutils.resize(image, width=500) cv2.circle(resized,(350,150),65,(102,51,17),thickness=-1) # Convert the image to RGB (OpenCV uses BGR) cv2_im_rgb = cv2.cvtColor(resized,cv2.COLOR_BGR2RGB) # Pass the image to PIL pil_im = Image.fromarray(cv2_im_rgb) draw = ImageDraw.Draw(pil_im) # use a truetype font font1 = ImageFont.truetype("arial.ttf", 50) font2 = ImageFont.truetype("arial.ttf", 25) # Draw the text draw.text((310,110), text1, font=font1) draw.text((325,170), text2, font=font2) # Get back the image to OpenCV cv2_im_processed = cv2.cvtColor(np.array(pil_im), cv2.COLOR_RGB2BGR) cv2.imshow('Fonts', cv2_im_processed) cv2.waitKey(1) But this is what my code generates : The circle outline is not precise. Is there anything I can do to make the line more precise, or is there any other library that generates circles with precise lines? Any suggestion will be very much appreciated!
You can use anti aliasing to make the circle look better as described here: cv2.circle(resized,(350,150),65,(102,51,17),thickness=-1,lineType=cv2.LINE_AA)
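A minimal, self-contained version of that call for quick verification (drawn on a plain canvas, since the original photo isn't available here):
import cv2
import numpy as np

# white canvas standing in for the photo from the question
img = np.full((300, 500, 3), 255, dtype=np.uint8)
cv2.circle(img, (350, 150), 65, (102, 51, 17), thickness=-1, lineType=cv2.LINE_AA)
cv2.imwrite("circle_aa.png", img)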
3
6
74,486,315
2022-11-18
https://stackoverflow.com/questions/74486315/compare-2-list-columns-in-a-pandas-dataframe-remove-value-from-one-list-if-pres
Say I have 2 list columns like below: group1 = [['John', 'Mark'], ['Ben', 'Johnny'], ['Sarah', 'Daniel']] group2 = [['Aya', 'Boa'], ['Mab', 'Johnny'], ['Sarah', 'Peter']] df = pd.DataFrame({'group1':group1, 'group2':group2}) I want to compare the two list columns and remove the list elements from group1 if they are present in group2. So expected results for above: group1 group2 ['John', 'Mark'] ['Aya', 'Boa'] ['Ben'] ['Mab', 'Johnny'] ['Daniel'] ['Sarah', 'Peter'] How can I do this? I have tried this: df['group1'] = [[name for name in df['group1'] if name not in df['group2']]] But got errror: TypeError: unhashable type: 'list' Please help.
You need to zip the two Series. I'm using a set here for efficiency (this is not critical if you have only a few items per list): df['group1'] = [[x for x in a if x not in S] for a, S in zip(df['group1'], df['group2'].apply(set))] Output: group1 group2 0 [John, Mark] [Aya, Boa] 1 [Ben] [Mab, Johnny] 2 [Daniel] [Sarah, Peter]
3
3
74,483,119
2022-11-17
https://stackoverflow.com/questions/74483119/simpleimputer-object-has-no-attribute-fit-dtype
I have a trained scikit-learn model pipeline (including a SimpleImputer) that I'm trying to put into production. However, I get the following error when running it in the production environment. SimpleImputer object has no attribute _fit_dtype How do I solve this?
This is a result of using different versions of scikit-learn in the development and production environments. The model has been trained using one version and then it's used with a different version. This can be solved by storing the current library versions in the development environment in a requirements.txt file using: pip list --format=freeze > requirements.txt In the production environment, install the same library versions with: pip install -r requirements.txt
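As a quick sanity check before repinning, you can compare the library version in each environment; this sketch assumes the pipeline was persisted with joblib and uses a hypothetical filename:
import sklearn
import joblib

print(sklearn.__version__)  # compare this value between development and production
pipeline = joblib.load("pipeline.joblib")  # hypothetical artifact name
# recent scikit-learn releases also warn on load if the pickle came from a different version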
3
3
74,454,587
2022-11-16
https://stackoverflow.com/questions/74454587/sentry-sdk-custom-performance-integration-for-python-app
Sentry can track performance for Celery tasks and API endpoints: https://docs.sentry.io/product/performance/ I have a custom script that is launched by cron and performs a set of similar tasks. I want to incorporate sentry_sdk into my script to get performance tracing of my tasks. Any advice on how to do it with https://getsentry.github.io/sentry-python/api.html#sentry_sdk.capture_event
You don't need to use capture_event; I would suggest using sentry_sdk.start_transaction instead. It also lets you track your function's performance. Look at my example: from time import sleep from sentry_sdk import Hub, init, start_transaction init( dsn="dsn", traces_sample_rate=1.0, ) def sentry_trace(func): def wrapper(*args, **kwargs): transaction = Hub.current.scope.transaction if transaction: with transaction.start_child(op=func.__name__): return func(*args, **kwargs) else: with start_transaction(op=func.__name__, name=func.__name__): return func(*args, **kwargs) return wrapper @sentry_trace def b(): for i in range(1000): print(i) @sentry_trace def c(): sleep(2) print(1) @sentry_trace def a(): sleep(1) b() c() if __name__ == '__main__': a() After running this code you can see the basic info for transaction a with children b and c
7
6
74,479,770
2022-11-17
https://stackoverflow.com/questions/74479770/replace-nested-for-loops-combined-with-conditions-to-boost-performance
In order to speed up my code I want to exchange my for loops by vectorization or other recommended tools. I found plenty of examples with replacing simple for loops but nothing for replacing nested for loops in combination with conditions, which I was able to comprehend / would have helped me... With my code I want to check if points (X, Y coordinates) can be connected by lineaments (linear structures). I started pretty simple but over time the code outgrew itself and is now exhausting slow... Here is an working example of the part taking the most time: import numpy as np import matplotlib.pyplot as plt from shapely.geometry import MultiLineString, LineString, Point from shapely.affinity import rotate from math import sqrt from tqdm import tqdm import random as rng # creating random array of points xys = rng.sample(range(201 * 201), 100) points = [list(divmod(xy, 201)) for xy in xys] # plot points plt.scatter(*zip(*points)) # calculate length for rotating lines -> diagonal of bounds so all points able to be reached length = sqrt(2)*200 # calculate angles to rotate lines angles = [] for a in range(0, 360, 1): angle = np.deg2rad(a) angles.append(angle) # copy points array to helper array (points_list) so original array is not manipulated points_list = points.copy() # array to save final lines lines = [] # iterate over every point in points array to search for connecting lines for point in tqdm(points): # delete point from helper array to speed up iteration -> so points do not get # double, triple, ... checked if len(points_list) > 0: points_list.remove(point) else: break # create line from original point to point at end of line (x+length) - this line # gets rotated at calculated angles start = Point(point) end = Point(start.x+length, start.y) line = LineString([start,end]) # iterate over angle Array to rotate line by each angle for angle in angles: rot_line = rotate(line, angle, origin=start, use_radians=True) lst = list(rot_line.coords) # save starting point (a) and ending point(b) of rotated line for np.cross() # (cross product to check if points on/near rotated line) a = np.asarray(lst[0]) b = np.asarray(lst[1]) # counter to count number of points on/near line count = 0 line_list = [] # iterate manipulated points_list array (only points left for which there has # not been a line rotated yet) for poi in points_list: # check whether point (pio) is on/near rotated line by calculating cross # product (np.corss()) p = np.asarray(poi) cross = np.cross(p-a,b-a) # check if poi is inside accepted deviation from cross product if cross > -750 and cross < 750: # check if more than 5 points (poi) are on/near the rotated line if count < 5: line_list.append(poi) count += 1 # if 5 points are connected by the rotated line sort the coordinates # of the points and check if the length of the line meets the criteria else: line_list = sorted(line_list , key=lambda k: [k[1], k[0]]) line_length = LineString(line_list) if line_length.length >= 10 and line_length.length <= 150: lines.append(line_list) break # use shapeplys' MultiLineString to create lines from coordinates and plot them # afterwards multiLines = MultiLineString(lines) fig, ax = plt.subplots() ax.set_title("Lines") for multiLine in MultiLineString(multiLines).geoms: # print(multiLine) plt.plot(*multiLine.xy) As mentioned above it was thinking about using pandas or numpy vectorization and therefore build a pandas df for the points and lines (gdf) and one with the different angles (angles) to rotate the lines: Name Type Size Value gdf DataFrame 
(122689, 6) Column name: x, y, value, start, end, line angles DataFrame (360, 1) Column name: angle But I ran out of ideas for replacing these nested for loops with conditions using pandas vectorization. I found this article on Medium, and halfway through the article there are conditions for vectorization mentioned; I was wondering if my code maybe is not suitable for vectorization because of the dependencies within the loops... If that is right, it does not necessarily need to be vectorization: anything boosting the performance is welcome!
You can quite easily vectorize the most computationally intensive part: the innermost loop. The idea is to compute the points_list all at once. np.cross can be applied to each line, and np.where can be used to filter the result (and get the IDs). Here is the (barely tested) modified main loop: for point in tqdm(points): if len(points_list) > 0: points_list.remove(point) else: break start = Point(point) end = Point(start.x+length, start.y) line = LineString([start,end]) # CHANGED PART if len(points_list) == 0: continue p = np.asarray(points_list) for angle in angles: rot_line = rotate(line, angle, origin=start, use_radians=True) a, b = np.asarray(rot_line.coords) cross = np.cross(p-a,b-a) foundIds = np.where((cross > -750) & (cross < 750))[0] if foundIds.size > 5: # Similar to the initial part, not efficient, but rarely executed line_list = p[foundIds][:5].tolist() line_list = sorted(line_list, key=lambda k: [k[1], k[0]]) line_length = LineString(line_list) if line_length.length >= 10 and line_length.length <= 150: lines.append(line_list) This is about 15 times faster on my machine. Most of the time is spent in the shapely module, which is very inefficient (especially rotate and even np.asarray(rot_line.coords)). Indeed, each call to rotate takes about 50 microseconds, which is simply insane: it should take no more than 50 nanoseconds, that is, 1000 times faster (actually, optimized native code should be able to do that in less than 20 ns on my machine). If you want faster code, then please consider not using this package (or improving its performance).
3
2
74,476,392
2022-11-17
https://stackoverflow.com/questions/74476392/python-plotly-display-other-information-on-hover
Here is the code that I have tried: # import pandas as pd import numpy as np import plotly.graph_objects as go from plotly.subplots import make_subplots df = pd.read_csv("resultant_data.txt", index_col = 0, sep = ",") display=df[["Velocity", "WinLoss"]] pos = lambda col : col[col > 0].sum() neg = lambda col : col[col < 0].sum() Related_Display_Info = df.groupby("RacerCount").agg(Counts=("Velocity","count"), WinLoss=("WinLoss","sum"), Positives=("WinLoss", pos), Negatives=("WinLoss", neg), ) # Create figure with secondary y-axis fig = make_subplots(specs=[[{"secondary_y": True}]]) # Add traces fig.add_trace( go.Scatter(x=display.index, y=display["Velocity"], name="Velocity", mode="markers"), secondary_y=False ) fig.add_trace( go.Scatter(x=Related_Display_Info.index, y=Related_Display_Info["WinLoss"], name="Win/Loss", mode="markers", marker=dict( color=( (Related_Display_Info["WinLoss"] < 0) ).astype('int'), colorscale=[[0, 'green'], [1, 'red']] ) ), secondary_y=True, ) # Add figure title fig.update_layout( title_text="Race Analysis" ) # Set x-axis title fig.update_xaxes(title_text="<b>Racer Counts</b>") # Set y-axes titles fig.update_yaxes(title_text="<b>Velocity</b>", secondary_y=False) fig.update_yaxes(title_text="<b>Win/Loss/b>", secondary_y=True) fig.update_layout(hovermode="x unified") fig.show() The output is: But I was willing to display the following information when I hover on the point: RaceCount = From Display dataframe value Number of the race corresponding to the dot I hover on. Velocity = From Display Dataframe value Velocity at that point Counts = From Related_Display_Info Column WinLoss = From Related_Display_Info Column Positives = From Related_Display_Info Column Negatives = From Related_Display_Info Column Please can anyone tell me what to do to get this information on my chart? I have checked this but was not helpful since I got many errors: Python/Plotly: How to customize hover-template on with what information to show? Data: RacerCount,Velocity,WinLoss 111,0.36,1 141,0.31,1 156,0.3,1 141,0.23,1 147,0.23,1 156,0.22,1 165,0.2,1 174,0.18,1 177,0.18,1 183,0.18,1 114,0.32,1 117,0.3,1 120,0.29,1 123,0.29,1 126,0.28,1 129,0.27,1 120,0.32,1 144,0.3,1 147,0.3,1 159,0.27,1 165,0.26,1 168,0.25,1 156,0.29,1 165,0.26,1 168,0.26,1 165,0.28,1 213,0.17,1 243,0.15,1 249,0.14,1 228,0.54,1 177,0.67,1 180,0.66,1 183,0.65,1 192,0.66,1 195,0.62,1 198,0.6,1 180,0.66,1 222,0.56,1 114,0.41,1 81,0.82,1 102,0.56,1 111,0.55,1 90,1.02,1 93,1.0,1 90,1.18,1 90,1.18,1 93,1.1,1 96,1.07,1 99,1.04,1 102,0.99,1 105,0.94,1 108,0.92,1 111,0.9,1 162,0.66,1 159,0.63,1 162,0.65,-1 162,0.66,-1 168,0.64,-1 159,0.68,-1 162,0.67,-1 174,0.62,-1 168,0.65,-1 171,0.64,-1 198,0.55,-1 300,0.47,-1 201,0.56,-1 174,0.63,-1 180,0.61,-1 171,0.64,-1 174,0.62,-1 303,0.47,-1 312,0.48,-1 258,0.51,-1 261,0.51,-1 264,0.5,-1 279,0.47,-1 288,0.48,-1 294,0.47,-1 258,0.52,-1 261,0.51,-1 267,0.5,-1 222,0.53,-1 171,0.64,-1 177,0.63,-1 177,0.63,-1
Essentially, this code ungroups the data frame before plotting to create the hovertemplate you're looking for. As stated in the comments, the data has to have the same number of rows to be shown in the hovertemplate. At the end of my answer, I added the code all in one chunk. Since you have hovermode as x unified, you probably only want one of these traces to have hover content. I slightly modified the creation of Related_Display_Info. Instead of WinLoss, which is already in the parent data frame, I modified it to WinLoss_sum, so there wouldn't be a naming conflict when I ungrouped. Related_Display_Info = df.groupby("RacerCount").agg( Counts=("Velocity","count"), WinLoss_sum=("WinLoss","sum"), Positives=("WinLoss", pos), Negatives=("WinLoss", neg)) Now it's time to ungroup the data you grouped. I created dui (stands for display info ungrouped). dui = pd.merge(df, Related_Display_Info, how = "outer", on="RacerCount", suffixes=(False, False)) I created the hovertemplate for both traces. I passed the entire ungrouped data frame to customdata. It looks like the only column that isn't in the template is the original WinLoss. # create hover template for all traces ht="<br>".join(["<br>RacerCount: %{customdata[0]}", "Velocity: %{customdata[1]:.2f}", "Counts: %{customdata[3]}", "Winloss: %{customdata[4]}", "Positives: %{customdata[5]}", "Negatives: %{customdata[6]}<br>"]) The creation of fig is unchanged. However, the traces are both based on dui. Additionally, the index isn't RacerCount, so I used the literal field instead. # Create figure with secondary y-axis fig = make_subplots(specs=[[{"secondary_y": True}]]) # Add traces fig.add_trace(go.Scatter(x=dui["RacerCount"], y=dui["Velocity"], name="Velocity", mode="markers", customdata=dui, hovertemplate=ht), secondary_y=False) fig.add_trace( go.Scatter(x = dui["RacerCount"], y=dui["WinLoss_sum"], customdata=dui, name="Win/Loss", mode="markers", marker=dict(color=((dui["WinLoss_sum"] < 0)).astype('int'), colorscale=[[0, 'green'], [1, 'red']]), hovertemplate=ht), secondary_y=True) All the code altogether (for easier copy + paste) import pandas as pd import numpy as np import plotly.graph_objects as go from plotly.subplots import make_subplots df = pd.read_clipboard(sep = ',') display=df[["Velocity", "WinLoss"]] pos = lambda col : col[col > 0].sum() neg = lambda col : col[col < 0].sum() Related_Display_Info = df.groupby("RacerCount").agg( Counts=("Velocity","count"), WinLoss_sum=("WinLoss","sum"), Positives=("WinLoss", pos), Negatives=("WinLoss", neg)) # ungroup the data for the hovertemplate dui = pd.merge(df, Related_Display_Info, how = "outer", on="RacerCount", suffixes=(False, False)) # create hover template for all traces ht="<br>".join(["<br>RacerCount: %{customdata[0]}", "Velocity: %{customdata[1]:.2f}", "Counts: %{customdata[3]}", "Winloss: %{customdata[4]}", "Positives: %{customdata[5]}", "Negatives: %{customdata[6]}<br>"]) # Create figure with secondary y-axis fig = make_subplots(specs=[[{"secondary_y": True}]]) # Add traces fig.add_trace(go.Scatter(x=dui["RacerCount"], y=dui["Velocity"], name="Velocity", mode="markers", customdata=dui, hovertemplate=ht), secondary_y=False) fig.add_trace( go.Scatter(x = dui["RacerCount"], y=dui["WinLoss_sum"], customdata=dui, name="Win/Loss", mode="markers", marker=dict(color=((dui["WinLoss_sum"] < 0)).astype('int'), colorscale=[[0, 'green'], [1, 'red']]), hovertemplate=ht), secondary_y=True) # Add figure title fig.update_layout( title_text="Race Analysis" ) # Set x-axis title 
fig.update_xaxes(title_text="<b>Racer Counts</b>") # Set y-axes titles fig.update_yaxes(title_text="<b>Velocity</b>", secondary_y=False) fig.update_yaxes(title_text="<b>Win/Loss</b>", secondary_y=True) fig.update_layout(hovermode="x unified") fig.show()
3
4
74,468,285
2022-11-16
https://stackoverflow.com/questions/74468285/how-to-fix-runtimewarning-running-interpreter-doesnt-sufficiently-support-cod
Every time I run any pipenv command I'm getting this: C:\Users\user_name\AppData\Local\Programs\Python\Python311\Lib\site-packages\pipenv\vendor\attr_make.py:876: RuntimeWarning: Running interpreter doesn't sufficiently support code object introspection. Some features like bare super() or accessing class will not work with slotted classes. set_closure_cell(cell, cls) The command runs after it, but I would like to disable this message. I'm using Windows 10 19044.2194 and pipenv 2022.10.25.
I was fighting the same issue on MacOS. The problem seems to be when pipenv is installed with brew. I fixed it by uninstalling the brew version of pipenv, then installing pipenv using pip. Here are the commands: brew uninstall pipenv pip install pipenv Worked like a charm for me. Hope it helps you.
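On Windows (the asker's setup) there is no brew involved, so the rough equivalent would be reinstalling or upgrading pipenv with pip; untested on that exact machine, but the commands are standard:
pip uninstall pipenv
pip install --upgrade pipenv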
3
8
74,476,226
2022-11-17
https://stackoverflow.com/questions/74476226/add-values-to-new-column-from-a-dict-with-keys-matching-the-index-of-a-dataframe
I have a dictionary that, for example's sake, looks like {'a': 1, 'b': 4, 'c': 7} I have a dataframe that has the same index values as the keys in this dict. I want to add each value from the dict to the dataframe. I feel like iterating over every row of the DataFrame, checking its index value, matching it to the corresponding key in the dict, and then adding the value is going to be a very slow approach, right?
You can use map and assign back to a new column: d = {'a': 1, 'b': 4, 'c': 7} df = pd.DataFrame({'c':[1,2,3]},index=['a','b','c']) df['new_col'] = df.index.map(d) prints: c new_col a 1 1 b 2 4 c 3 7
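An equivalent one-liner that relies on pandas index alignment (since the dict keys match the DataFrame index) would be:
df['new_col'] = pd.Series(d)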
3
4
74,470,382
2022-11-17
https://stackoverflow.com/questions/74470382/plotly-3d-surface-plot-not-appearing
Hi, I am trying to plot a Plotly 3D surface plot, but unfortunately it doesn't appear. When I try with Scatter3D it works, though not with Surface3D. Any ideas why? # Scatter 3D p = go.Figure() p.add_trace(go.Scatter3d( x = df.X1, y = df.X2, z = df.Y3, mode = "markers", marker = dict(size = 3), name = "actual" )) from plotly.offline import plot trace = go.Surface(x = df.X1,y = df.X2,z = df.Y3) data = [trace] p2 = dict(data = data) plot(p2)
A surface is not just a bunch of points. To draw a surface, Plotly needs to know how to split it into elementary triangles. Sure, you may think that, seeing your scatter plot, it seems obvious how to do so. But, well, it would be way less obvious if your points were not that planar. Plus, even in obvious cases, that would imply doing things like a Delaunay triangulation of all your (x,y) points. That would be costly. So, well, in plotly at least, a surface is not just a bunch of points. It is a matrix of z values, matching a regular, implicit mesh of (x,y) values. Just see how you would draw a 3×3 plane surface with both methods (scatter and surface). import plotly.graph_objects as go p = go.Figure() p.add_trace(go.Scatter3d( x = [0,1,2,0,1,2,0,1,2], y = [0,0,0,1,1,1,2,2,2], z = [1,1,1,1,1,1,1,1,1], mode = "markers", marker = dict(size = 3), name = "actual" )) p.show() import plotly.graph_objects as go p = go.Figure() p.add_trace(go.Surface( x = [0,1,2], y = [0,1,2], z = [[1,1,1],[1,1,1],[1,1,1]] )) p.show() In the second case, the surface is described as a 2D array of z-values (the x and y values just set the scale; you can omit them unless you need irregular spacing). In the first case, x, y, z together describe a bunch of points (that, in this example, happen to be regularly spaced, but plotly can't guess that, since nothing prevents me from giving different values for x and y).
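Applied to the question's DataFrame, one way to build that z-matrix is a pivot, assuming the X1/X2 values actually form a regular grid (column names taken from the question; df is the question's DataFrame):
import plotly.graph_objects as go

z = df.pivot(index="X2", columns="X1", values="Y3")  # rows become y, columns become x
fig = go.Figure(go.Surface(x=z.columns, y=z.index, z=z.values))
fig.show()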
3
4
74,466,414
2022-11-16
https://stackoverflow.com/questions/74466414/how-to-stop-browser-closing-in-python-selenium-without-calling-quit-or-close
Description of the problem: The problem I'm stuck on is that when I run my code, it first opens the Chrome browser on google.com and then closes it for no reason. This is my code: from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service opts = Options() opts.add_argument("user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36") PATH = "C:\Program Files (x86)\chromedriver.exe" driver_service = Service(executable_path=PATH) driver = webdriver.Chrome(service=driver_service,options=opts) driver.get("https://www.google.com/") print("connected to: " + driver.title) What I expect is that my bot will stay on google.com and not close the window! For example, in this video https://www.youtube.com/watch?v=Xjv1sY630Uc at minute 9:37 he runs his bot and the window stays open, whereas my bot opens the window and immediately closes it (I don't understand why). Here we see that my program finished without errors, but also that it stopped working for no reason.
Add this code and try again: options = Options() options.add_experimental_option("detach", True) driver = webdriver.Chrome(service=driver_service, options=options) Don't forget to use double backslashes '\\' in the chromedriver.exe path.
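Put together with the question's setup, the whole script might look like this (same chromedriver path as in the question, with the backslashes escaped):
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service

opts = Options()
opts.add_experimental_option("detach", True)  # keep the window open after the script exits
driver_service = Service(executable_path="C:\\Program Files (x86)\\chromedriver.exe")
driver = webdriver.Chrome(service=driver_service, options=opts)
driver.get("https://www.google.com/")
print("connected to: " + driver.title)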
5
10
74,461,394
2022-11-16
https://stackoverflow.com/questions/74461394/issue-with-new-isort-extension-installed-as-from-vs-code-update-october-2022-ve
I'm using VS-Code version 1.73.1, with MS Python extension v2022.18.2, on Windows 10 Pro, Build 10.0.19045. After installing the October 2022 update of VS Code, when writing Python code I noticed nagging error diagnostics being issued by the isort extension about the import order of modules. Previously, I had never encountered such diagnostics. I traced this behaviour back to the VS Code release notes for the Update October 2022. These announce the migration of VS Code to a new stand-alone isort extension, instead of the isort support built into the Python extension, by automatically installing it alongside the Python extension. When opening a file in which the imports do not follow isort standards, the extension is intended to issue an error diagnostic and display a Code Action to fix the import order. Whilst the extension seems to work as intended, I found the issues described below: 1. Even after having executed the Code Action to fix the import order, a 'light-bulb' with the same error diagnostic and Code Action again pops up on moving the cursor to a new line of code. 2. The error diagnostic and Code Action 'light-bulb' are also displayed when moving the cursor to any new line of code, even when all lines of code in the file have been commented out; that is, effectively, there are no longer any import statements in the code, and therefore also nothing to be sorted. I'd appreciate comments on whether this is a recognised issue in VS Code, and if so, whether any workarounds are available. It defeats the purpose of having an 'error lightbulb' pop up on every line of code, just to find a code action recommending to fix the import order, even when this requires no fixing. I have opened this question on this forum as recommended on the GitHub 'Contributing to VS Code' page.
Upgrade the isort extension to the latest version (v2022.8.0).
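If the diagnostics still nag after upgrading, the extension also exposes a setting to turn the import-order check off; the exact key should be verified against the extension's documentation, but it is reportedly along these lines in settings.json:
{
    "isort.check": false
}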
3
3
74,460,663
2022-11-16
https://stackoverflow.com/questions/74460663/context-manager-error-handling-inside-init-method
A bit of context I am working with a package that allows you to calculate several things about planets (such as their speed, or position), using information stored in files. The package includes methods to load, and unload files, so its basic usage would look like this: load(["File_1", "File_2"]) try: function() finally: unload(["File_1", "File_2"]) As this is a textbook example of the utility of a context manager, and the package lacks one, I am writing my own. class file_manager: def __init__(self, file_list) -> None: self.file_list = file_list load(self.file_list) return None def __enter__(self) -> None: return None def __exit__(self, exc_type, exc_value, traceback) -> None: unload(self.file_list) return None With the new context manager, the previous example can be rewritten as follows: with file_manager(["File_1", "File_2"]): function() and the __exit__ method guarantees that files will still be unloaded if function raises an error. My problem The load function loads files one by one, without first checking if all of them are available. As a result, if File_1 exists, but File_2 doesn't, File_1 will be loaded, and an exception will be raised while loading File_2. According to python documentation: The with statement guarantees that if the __enter__() method returns without an error, then __exit__() will always be called. Therefore, in the previous case, the execution of the program will end without File_2 being unloaded. What am I looking for I can obviously fix this by using a try...except clause inside the __init__() method: def __init__(self, file_list) -> None: self.file_list = file_list try: load(self.file_list) except FileDoesNotExistError: self.__exit__(FileDoesNotExistError, False, None) but I want to know if this is the proper way to solve this problem. For example, in Cython, classes have a __dealloc__() method, which is guaranteed to run, no matter what type of exception occurs.
You can wrap your original code using contextlib.contextmanager. from contextlib import contextmanager @contextmanager def file_manager(file_list): try: load(file_list) yield None # after this the code inside the with block is executed finally: # this is called when the with block has finished # or when load raises an exception unload(file_list) and use it like with file_manager(["File_1", "File_2"]): function()
5
4
74,460,495
2022-11-16
https://stackoverflow.com/questions/74460495/how-to-use-two-variable-types-in-a-pydantic-basemodel-with-typing-union
I need my model to accept either a bytes type variable or a string type variable and to raise an exception if any other type was passed. from typing import Union from pydantic import BaseModel class MyModel(BaseModel): a: Union[bytes, str] m1 = MyModel(a='123') m2 = MyModel(a=b'123') print(type(m1.a)) print(type(m2.a)) In my case the model interprets both bytes and string as bytes. Output: <class 'bytes'> <class 'bytes'> Desired output: <class 'str'> <class 'bytes'> The desired output above can be achieved if I re-assign member a: m1 = MyModel(a='123') m1.a = '123' Is it possible to get it in one go?
The problem you are facing is that the str type does some automatic conversions (here in the docs): strings are accepted as-is, int float and Decimal are coerced using str(v), bytes and bytearray are converted using v.decode(), enums inheriting from str are converted using v.value, and all other types cause an error bytes are accepted as-is, bytearray is converted using bytes(v), str are converted using v.encode(), and int, float, and Decimal are coerced using str(v).encode() You can use StrictTypes to avoid automatic conversion between compatible types (e.g.: str and bytes): from typing import Union from pydantic import BaseModel, StrictStr, StrictBytes class MyModel(BaseModel): a: Union[StrictStr, StrictBytes] m1 = MyModel(a='123') m2 = MyModel(a=b'123') print(type(m1.a)) print(type(m2.a)) Output will be as expected: <class 'str'> <class 'bytes'>
5
7
74,460,294
2022-11-16
https://stackoverflow.com/questions/74460294/creating-sum-of-date-ranges-in-pandas
I have the following DataFrame, with over 3 million rows: VALID_FROM VALID_TO VALUE 0 2022-01-01 2022-01-02 5 1 2022-01-01 2022-01-03 2 2 2022-01-02 2022-01-04 7 3 2022-01-03 2022-01-06 3 I want to create one large date_range with a sum of the values for each timestamp. For the DataFrame above that would come out to: dates val 0 2022-01-01 7 1 2022-01-02 14 2 2022-01-03 12 3 2022-01-04 10 4 2022-01-05 3 5 2022-01-06 3 However, as the DataFrame has a little over 3 Million rows I don't want to iterate over each row and I'm not sure how to do this without iterating. Any suggestions? Currently my code looks like this: new_df = pd.DataFrame() for idx, row in dummy_df.iterrows(): dr = pd.date_range(row["VALID_FROM"], end = row["VALID_TO"], freq = "D") tmp_df = pd.DataFrame({"dates": dr, "val": row["VALUE"]}) new_df = pd.concat(objs=[new_df, tmp_df], ignore_index=True) new_df.groupby("dates", as_index=False, group_keys=False).sum() The result of the groupby would be my desired output.
If performance is important, use Index.repeat with DataFrame.loc to create the new rows, build the date column from a per-group counter via GroupBy.cumcount, and finally aggregate with sum: df['VALID_FROM'] = pd.to_datetime(df['VALID_FROM']) df['VALID_TO'] = pd.to_datetime(df['VALID_TO']) df1 = df.loc[df.index.repeat(df['VALID_TO'].sub(df['VALID_FROM']).dt.days + 1)] df1['dates'] = df1['VALID_FROM'] + pd.to_timedelta(df1.groupby(level=0).cumcount(),unit='d') df1 = df1.groupby('dates', as_index=False)['VALUE'].sum() print (df1) dates VALUE 0 2022-01-01 7 1 2022-01-02 14 2 2022-01-03 12 3 2022-01-04 10 4 2022-01-05 3 5 2022-01-06 3
3
3
74,456,529
2022-11-16
https://stackoverflow.com/questions/74456529/python-typing-nested-dictionary-of-unknown-depth
I am using Python 3.11. A type hint for a dict of dicts of strs looks like this: dict[str, dict[str, str]] But what if I want to write a hint for a dict of unknown depth? For example, I want to write a function which constructs a tree in dict form from a list of (parent, offspring) tuples: source = [('a', 'b'), ('b', 'c'), ('d', 'e')] target = {'a': {'b': {'c': {}}}, 'd': {'e': {}}} def tree_form(source: list[tuple[str, str]]) -> ???: """code""" pass What should I write instead of '???'?
You can use a type alias with a forward reference to itself: from typing import TypeAlias NestedDict: TypeAlias = dict[str, str | 'NestedDict'] def tree_form(source: list[tuple[str, str]]) -> NestedDict: return {'a': {'b': {'c': {}}}, 'd': {'e': {}}} print(tree_form([('a', 'b'), ('b', 'c'), ('d', 'e')])) Demo of this code passing mypy: https://mypy-play.net/?mypy=latest&python=3.10&gist=6d359c16ab3f5e82b5cd2cdf9e142a6d
4
6