question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
77,187,827 | 2023-9-27 | https://stackoverflow.com/questions/77187827/xcode-project-originally-python-kivy-not-building-to-iphone | Short summary: The app created by Python Kivy, which was converted to an Xcode project using kivy-ios/toolchain, runs on the simulator and in some cases can be built to my iPhone. But I don’t understand what I am doing differently on the occasions where it does not run on my iPhone. -Mac M1 arm64 -Ventura 13.5.2 -Xcode15 -iOS17 Here are two of the errors I get when I fail to build. The second one is always the same; the first one seems to change the file name on occasion. Error 1: Building for 'iOS', but linking in dylib (/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator.sdk/System/Library/Frameworks/CoreGraphics.framework/CoreGraphics.tbd) built for 'iOS-simulator' *Sometimes Error 1 will be for the AudioToolbox.framework/AudioToolbox.tbd file. Error 2: Linker command failed with exit code 1 (use -v to see invocation) I have created a text file that lists the settings of each Xcode project; there are a few differences. Below is a screen shot of a comparison of these files. The one on the left builds to the iPhone, oddly enough. I do not know where to find the ARCHS = arm64 setting in Xcode. To make this file I used: xcodebuild -project openmindset.xcodeproj -target openmindset -configuration openmindset -showBuildSettings > openmindset_settings_02.txt If there is something else better, please share. Here is an abbreviated version of my log with my most recent attempts to try to make sense of what is going on. Full version: https://1drv.ms/x/s!AmCs1-5fbd9gncEVVGmWSY8B3QyGSA?e=n4fla2 I have seen posts on this topic that have suggested I need to remove arm64 from Excluded Architectures, since I am using an arm64 Mac, but that doesn’t seem to matter either. Or maybe I’ve not set the right argument for that parameter? Honestly, I don’t think I’m searching in the right ball park. This is mainly to show avenues I’ve searched in. So if there are any ideas even in a different ball park, I’d be grateful. Thanks in advance. | OK, this is a little hacky but it works… change the Xcode filename. Background: all toolchain commands need Xcode to have the same name as targeted when the toolchain is built. Or at least there are locations where the toolchain expects Xcode to be, based on the setup of the toolchain environment. But after the Xcode project is built, if I change the file name from “Xcode” to, say, “Xcode15”, at that point the project builds to my iPhone. I made no adjustments to the Xcode project other than signing in to my Team. My hope is that someone who knows more about this will run across this and be able to diagnose my problem and suggest a better solution. But for the time being, this works for me. | 3 | 3 |
77,194,533 | 2023-9-28 | https://stackoverflow.com/questions/77194533/why-is-true-or-false-and-false-true | Why is this printing 'yes'? I don't understand why. I understand that x or y returns true, but x and z should return false, or is the order it compares different? x=True y=False z=False if(x or y and z): print ("yes") else: print("no") | Because and has a higher precedence than or, so it is equivalent to: True or (False and False) Which is: True or False which is: True | 2 | 5 |
77,194,151 | 2023-9-28 | https://stackoverflow.com/questions/77194151/ignore-path-in-pylint-config-raises-regex-error | I have the pylint config: [MAIN] load-plugins=pylint_django django-settings-module=kernel.settings ignore-paths=^kernel/**$ , ^migrations/*$ But whenever I want to run it, I get this error for the kernel regex: re.error: multiple repeat at position 10 How can I make it ignore all subdirs and files inside kernel? | It's an re.error, which means your regular expression syntax is incorrect. So, to ignore ALL subdirs and files inside kernel, simply assign the kernel/.* regex value to ignore-paths, so it will look like this: [MAIN] load-plugins=pylint_django django-settings-module=kernel.settings ignore-paths=^kernel/.* Hope this solves the problem. | 2 | 2 |
77,186,124 | 2023-9-27 | https://stackoverflow.com/questions/77186124/run-a-function-every-time-a-method-in-a-class-is-called | I am trying to make a method that gets called every time a function is called, for every single function in the class. The class is a subclass of another class (in my specific case, it is a subclass of list). class Foo(another_class): def __init__(self, bar): self.bar = bar super().__init__(self.bar) def function_to_be_called(self): self.bar += 1 def function(self): # do stuff foo = Foo(0) foo.function() foo.inherited_function() print(foo.bar) # should be 2, since function and inherited_function were called I found an answer that suggests overwriting the __getattribute__ magic method, but doing so results in infinite recursion (since the function that is called needs to be found using __getattribute__ first), or creates very weird behaviour. My first attempt (causes RecursionError): class Foo(another_class): def __init__(self, bar): self.bar = bar super().__init__(self.bar) def function_to_be_called(self): self.bar += 1 def function(self): # do stuff def __getattribute__(self, attr): method = another_class.__getattribute__(self, attr) self.function_to_be_called() # this required __getattribute__ to be called to find function_to_be_called, causing an infinite loop # even if self.function_to_be_called() is replaced with the self.bar += 1, it still causes an infinite loop as __getattribute__ is called to find bar return method My second attempt also includes the if callable(method) that was in the answer, and moving the contents of function_to_be_called into the if statement instead of calling the function (p.s. I am changing bar to be a list as that was my original intention, and is how I found the weird behaviour): # Assume that another_class has an append method class Foo(another_class): def __init__(self, bar): self.bar = bar super().__init__(self.bar) def function(self): print(self.bar) def __getattribute__(self, attr): method = another_class.__getattribute__(self, attr) if callable(method): self.bar.sort() # doing something to bar, for example sorting it return method foo = Foo([3, 1, 2]) print(foo) # [3, 1, 2], although the expected result is [1, 2, 3]. it does not actually seem to be sorting it foo.append(1) print(foo) # [3, 1, 2, 1], although the expected result is [1, 1, 2, 3] foo.function() # [1, 2, 3], the other 1 that was appended dissappears randomly (edit: I tried this again, with the actual class that I am using this in, and it seems to work for some reason), and it is sorted for some reason this time | You can call the super().__getattribute__ method (which in this case resolves to the object.__getattribute__ method) instead to avoid recursion: class Foo(list): def __init__(self, seq): super().__init__(seq) self.bar = 0 def function_to_be_called(self): self.bar += 1 def function(self): pass def __getattribute__(self, name): attr = super().__getattribute__(name) if callable(attr): super().__getattribute__('function_to_be_called')() return attr foo = Foo([3, 1, 2]) foo.append(1) foo.function() print(foo.bar) This outputs: 2 Demo: Try it online! | 2 | 4 |
77,185,386 | 2023-9-27 | https://stackoverflow.com/questions/77185386/python-rich-live-not-working-in-intellij-ide | I have the following example of Rich Live from the official examples of Rich. (layout.py) Code from datetime import datetime from time import sleep from rich.align import Align from rich.console import Console from rich.layout import Layout from rich.live import Live from rich.text import Text console = Console() layout = Layout() layout.split( Layout(name="header", size=1), Layout(ratio=1, name="main"), Layout(size=10, name="footer"), ) layout["main"].split_row(Layout(name="side"), Layout(name="body", ratio=2)) layout["side"].split(Layout(), Layout()) layout["body"].update( Align.center( Text( """This is a demonstration of rich.Layout\n\nHit Ctrl+C to exit""", justify="center", ), vertical="middle", ) ) class Clock: """Renders the time in the center of the screen.""" def __rich__(self) -> Text: return Text(datetime.now().ctime(), style="bold magenta", justify="center") layout["header"].update(Clock()) with Live(layout, screen=True, redirect_stderr=False) as live: try: while True: sleep(1) except KeyboardInterrupt: pass Problem This works as expected when I do python layout.py in Powershell. But if I click on run via the IntelliJ IDE it is not working. The following is my configuration | Toggling the "Emulate terminal in output console" option in the Run configuration should fix the problem. If this option is unavailable in the run configuration, remove it from Run | Edit Configuration and create a new one by right-clicking the file in the editor, More Run/Debug | Modify Configuration. Also, the version of IDEA is a bit outdated; please consider updating to the latest. | 2 | 7 |
77,191,221 | 2023-9-27 | https://stackoverflow.com/questions/77191221/undetected-chromedriver-attributeerror-chromeoptions-object-has-no-attribute | I'm getting an error when I run a Python Selenium script to open a webpage. I have tried uninstalling and reinstalling selenium, chromeautoinstaller, and undetected chromedriver. I also tried adding option.add_argument('--headless'). None of these were successful and the error remained the same. Here is my code: def driverInit(): option = uc.ChromeOptions() option.add_argument("--log-level=3") prefs = {"credentials_enable_service": False, "profile.password_manager_enabled": False, "profile.default_content_setting_values.notifications": 2 } option.add_experimental_option("prefs", prefs) driverr = uc.Chrome(options=option) return driverr def driverInitBuffMarket(): option = uc.ChromeOptions() option.add_argument( rf'--user-data-dir=C:\Users\{os.getlogin()}\AppData\Local\Google\ChromeBuff\User Data') # e.g. C:\Users\You\AppData\Local\Google\Chrome\User Data option.add_argument(r'--profile-directory=Default') driverr = uc.Chrome(options=option) return driver The error occurs in the second-to-last line, driverr = uc.Chrome(options=option) Here is the error: Traceback (most recent call last): File "C:\Users\kumpd\OneDrive\Desktop\All Market Bots\BuffMarket Purchase Bot Testing\main.py", line 266, in <module> start_buy_monitoring() File "C:\Users\kumpd\OneDrive\Desktop\All Market Bots\BuffMarket Purchase Bot Testing\main.py", line 207, in start_buy_monitoring driverBuffMarket = driverInitBuffMarket() ^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\kumpd\OneDrive\Desktop\All Market Bots\BuffMarket Purchase Bot Testing\main.py", line 42, in driverInitBuffMarket driverr = uc.Chrome(options=option) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\kumpd\AppData\Local\Programs\Python\Python311\Lib\site-packages\undetected_chromedriver\__init__.py", line 398, in __init__ if headless or options.headless: ^^^^^^^^^^^^^^^^ AttributeError: 'ChromeOptions' object has no attribute 'headless' Any help is greatly appreciated! | As of today, undetected-chromedriver is still using options.headless in their code. That's a problem due to this change in selenium 4.13.0 that was just released: * remove deprecated headless methods Here are some alternatives: Downgrade to an earlier selenium version until fixed. Use SeleniumBase's UC Mode (a modified fork of undetected-chromedriver): from seleniumbase import Driver import time driver = Driver(uc=True) driver.get("https://nowsecure.nl/#relax") time.sleep(6) driver.quit() Here's another, more advanced example with retries and clicks (using the SB() manager): from seleniumbase import SB with SB(uc=True) as sb: sb.driver.get("https://nowsecure.nl/#relax") sb.sleep(2) if not sb.is_text_visible("OH YEAH, you passed!", "h1"): sb.get_new_driver(undetectable=True) sb.driver.get("https://nowsecure.nl/#relax") sb.sleep(2) if not sb.is_text_visible("OH YEAH, you passed!", "h1"): if sb.is_element_visible('iframe[src*="challenge"]'): with sb.frame_switch('iframe[src*="challenge"]'): sb.click("span.mark") sb.sleep(4) sb.activate_demo_mode() sb.assert_text("OH YEAH, you passed!", "h1", timeout=3) (This is mainly what I wrote here: https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1584#issuecomment-1737963363) | 3 | 7 |
77,190,372 | 2023-9-27 | https://stackoverflow.com/questions/77190372/cannot-access-flask-app-running-in-docker-container-in-my-browser | I'm trying to run a Flask app in a docker container and connect to it using my browser. I am not able to see the app and get an error This site can’t be reached when trying to go to http://127.0.0.1:5000. I've already followed the advice in these two questions (1) (2). This is my Dockerfile: FROM python:3.11.5-bookworm WORKDIR /app COPY requirements.txt . RUN pip install --upgrade pip RUN pip install -r requirements.txt COPY . . EXPOSE 5000 CMD ["flask", "run", "--host", "0.0.0.0"] and this is my app: from flask import Flask app = Flask(__name__) @app.route("/") def home(): return 'hello' if __name__ == "__main__": app.run(host="0.0.0.0") When I use docker desktop, I can see that the app is running correctly inside the docker container: 2023-09-27 14:14:44 * Debug mode: off 2023-09-27 14:14:44 WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. 2023-09-27 14:14:44 * Running on all addresses (0.0.0.0) 2023-09-27 14:14:44 * Running on http://127.0.0.1:5000 2023-09-27 14:14:44 * Running on http://172.17.0.5:5000 2023-09-27 14:14:44 Press CTRL+C to quit 2023-09-27 14:15:14 127.0.0.1 - - [27/Sep/2023 19:15:14] "GET / HTTP/1.1" 200 - from the command line in the docker terminal, the output is also as expected: # curl http://127.0.0.1:5000 hello# However, when I use my browser to go to localhost (http://127.0.0.1:5000), I get an error: This site can’t be reached In the tutorial I was watching, it worked, so I'm not sure what I'm doing wrong here... | Make sure to run the container specifying the mapping port: docker run -p 5000:5000 <docker_image_id> The first port stands for the port on the host machine, and the second one is for which port you want to map it in the container. Make sure there aren't any background processes running. Check for any ( linux ): netstat -tuln | grep 5000 If any found, kill: kill -9 <PID> | 3 | 3 |
77,190,347 | 2023-9-27 | https://stackoverflow.com/questions/77190347/how-do-i-forward-fill-on-specific-rows-only-in-pandas | I have the following df: col1 col2 col3 0 a 1.0 1.0 1 b NaN NaN 2 c 5.0 3.0 3 d 5.0 NaN I want to do forward fill the rows only where col2 = NaN so the result would be like this col1 col2 col3 0 a 1.0 1.0 1 b 1.0 1.0 2 c 5.0 3.0 3 d 5.0 NaN I was able to get to here but this isn't a smart solution because it ffills row 3 even though it doesn't meet the requirements of col2 = NaN df['col2'] = df['col2'].fillna(method="ffill") df['col3'] = df['col3'].fillna(method="ffill") | Code df.fillna(df.ffill().where(df['col2'].isna())) output: col1 col2 col3 0 a 1.0 1.0 1 b 1.0 1.0 2 c 5.0 3.0 3 d 5.0 NaN Example import pandas as pd data = {'col1': ['a', 'b', 'c', 'd'], 'col2': [1, None, 5, 5], 'col3': [1, None, 3, None]} df = pd.DataFrame(data) | 2 | 4 |
77,186,034 | 2023-9-27 | https://stackoverflow.com/questions/77186034/how-to-replace-string-keeping-associated-info-in-python | Given an ordered list of words with associated info (or a list of tuples). I want to replace some of the strings with others but keep track of the associated info. Let's say that we have a simple case where our input data is two list: words = ["hello", "I", "am", "I", "am", "Jone", "101"] info = ["1", "3", "23", "4", "6", "5", "12"] input could also be just a list of tuples: list_tuples = list(zip(words, info))) Each item of "list_words" has an associated item (with the same index) from "list_info". e.g. "hello" corresponds to "1" and the second "I" corresponds to "4". I want to apply some normalization rules to transform them into: words = ["hello", "I'm", "I'm", "Jone", "one hundred and one"] info = ["1", ["3", "23"], ["4", "6"], "5", "12"] or to another possible solution: words = ["hello", "I'm", "I'm", "Jone", "one", "hundred", "and", "one"] info = ["1", ["3", "23"], ["4", "6"], "5", "12", "12", "12", "12"] Note this is a simple case, and the idea is to apply multiple normalization rules (numbers to words, substitutions, other contractions, etc.). I know how to transform my string into another using regex, but in that case, I am losing the associated information: def normalize_texts_loosing_info(text): # Normalization rules text = re.sub(r"I am", "I\'m", text) text = re.sub(r"101", "one hundred and one", text) # other normalization rules. e.g. # text = re.sub(r"we\'ll", "we will", text) # text = re.sub(r"you are", "you\'re", text) # .... return text.split() words = ["hello", "I", "am", "I", "am", "Jone", "101"] print(words) print(" ".join(words)) output = normalize_texts(" ".join(words)) print(output) Question is: How can I apply some transformations to an ordered string/list of words but keep the associated info of those words? PD: Thank you for all the useful comments | Using partial matching capability of the the regex library, one can keep track of which patterns might still be applicable. def apply_transformations(words_infos: list[tuple[str, Any]], transformations: dict[regex.Pattern, tuple[str, ...]]) \ -> list[tuple[str, Any]]: # Create the list which we modify to create the result result = words_infos.copy() # Keep track of the difference between the lengths of input and output offset = 0 # Apply a transformation of tokens [start, end) replacing them with the tokens in new # During that we collect together the infos of all input tokens into one tuple, # then we copy this info to all new tokens we create # Afterwards we update `offset` def apply(start: int, end: int, new: tuple[str, ...]): nonlocal offset print(start, end, new, offset) new_info = tuple(words_infos[i][1] for i in range(start, end)) if len(new_info) == 1: new_info, = new_info result[start + offset: end + offset] = [(w, new_info) for w in new] offset -= (end - start) - len(new) # Keep track of partial matches that might still be applied partials = [] for i, (word, info) in enumerate(words_infos): new_partials = [] # Try all patterns starting at this token for pattern, res in transformations.items(): if m := pattern.fullmatch(word, partial=True): if m.partial: # We have a partial match, add it to the backlog new_partials.append((pattern, i, word)) else: # Apply this transformation immediately, replacing only this token. 
apply(i, i + 1, res) partials = [] # After applying something, we completely aboard everything else going on new_partials = [] break # Look thought the backlog, add the current token to them and check if the pattern is now fully applied. for pattern, first, prefix in partials: if m := pattern.fullmatch(prefix + " " + word, partial=True): if m.partial: new_partials.append((pattern, first, prefix + " " + word)) else: apply(first, i + 1, transformations[pattern]) new_partials = [] break partials = new_partials return result This function makes use of the more sane list[tuple] representation of the data instead of keep two lists in sync. The transformations are defined as a mapping from regex.Pattern instances (i..e not regexes themself) to sequences of strings. If you want more complex transformations, i.e. more similar to the input of re.sub you can fiddle around with that by replacing the values in the transformations dict and modifying the local apply function. def main(): words = ["hello", "I", "am", "I", "am", "Jone", "101"] info = ["1", "3", "23", "4", "6", "5", "12"] word_infos = list(zip(words, info, strict=True)) transformations = { regex.compile(r"I am"): ("I\'m",), regex.compile(r"101"): ("one hundred and one",) } result = apply_transformations(word_infos, transformations) print(result) | 2 | 1 |
77,184,491 | 2023-9-27 | https://stackoverflow.com/questions/77184491/psycopg2-how-to-deal-with-special-characters-in-password | I am trying to connect to a db instance, but my password has the following special characters: backslash, plus, dot, asterisk/star and at symbol. For example, 12@34\56.78*90 (regex nightmare lol) How do I safely pass it to the connection string? My code looks like this: connection_string = f'user={user} password={pass} host={host} dbname={dbname} port={port}' connection = psg2.connect(connection_string) It gives me a wrong pass/username error. However, I tried this combination directly on the db and it works, and I tried another combination in the Python code and it worked as well. So it looks like the problem is the password being passed weirdly to the connection. I tried urllib escape, I tried double quotes on the password, nothing works so far :( | Based on a reddit thread, I found out that passing variable by variable directly instead of a connection string did the trick: con = psycopg2.connect( dbname=dn, user=du, password=dp, host=dh, port=dbp, ) | 2 | 2 |
77,187,280 | 2023-9-27 | https://stackoverflow.com/questions/77187280/is-there-any-way-to-filter-a-multi-indexed-pandas-dataframe-using-a-dict | Please consider the following DataFrame: mi = pd.MultiIndex( levels = [[1, 2, 3], ['red', 'green', 'blue'], ['a', 'b', 'c']], codes = [[1,0,1,0], [0,1,1,2], [1,0,0,1]], names = ["Key1", "Key2", "Key3"]) df = pd.DataFrame({ "values": [1, 2, 3, 4] }, index = mi) ... which looks like this: Now I know how to filter this by values of the index levels, eg: df[ df.index.get_level_values("Key1").isin([1]) & df.index.get_level_values("Key2").isin(["green"]) ] I'm trying to write a function which makes this operation less verbose, so I'd like to pass in a dict like: {"Key1":1, "Key2":"green"} to do the same thing. The solution shouldn't hardcode the number of levels we are filtering on, so that later, I might want to only filter by one of the conditions, and would pass in {"Key1":1} or ``{"Key2":"green"}`. I don't know the syntax for constructing the predicate inside the df[ ... ] on-the-fly from a dict. Is this possible? | You can convert the MultiIndex to_frame, then slice the columns with dictionary keys and check if all are equal within a row to perform boolean indexing: d = {"Key1":1, "Key2":"green"} out = df[df.index.to_frame()[list(d)].eq(d).all(axis=1)] Alternatively, using np.logical_and.reduce and your original approach: out = df[np.logical_and.reduce([df.index.get_level_values(k)==v for k,v in d.items()])] Output: values Key1 Key2 Key3 1 green a 2 Intermediates: # df.index.to_frame()[list(d)] Key1 Key2 Key1 Key2 Key3 2 red b 2 red 1 green a 1 green 2 green a 2 green 1 blue b 1 blue # df.index.to_frame()[list(d)].eq(d) Key1 Key2 Key1 Key2 Key3 2 red b False False 1 green a True True 2 green a False True 1 blue b True False # df.index.to_frame()[list(d)].eq(d).all(axis=1) Key1 Key2 Key3 2 red b False 1 green a True 2 green a False 1 blue b False dtype: bool | 2 | 2 |
77,185,621 | 2023-9-27 | https://stackoverflow.com/questions/77185621/setting-an-item-of-incompatible-dtype-is-deprecated-and-will-raise-in-a-future-e | I have the below code which for instance works as expected but won't work in the future: total.name = 'New_Row' total_df = total.to_frame().T total_df.at['New_Row', 'CURRENCY'] = '' total_df.at['New_Row', 'MANDATE'] = Portfolio total_df.at['New_Row', 'COMPOSITE'] = 'GRAND TOTAL' total_df.set_index('COMPOSITE',inplace=True) since this FutureWarning is thrown: Setting an item of incompatible dtype is deprecated and will raise in a future error of pandas. Value 'GRAND TOTAL' has dtype incompatible with float64, please explicitly cast to a compatible dtype first. total_df.at['New_Row', 'COMPOSITE'] = 'GRAND TOTAL' How to fix this? Variable total is: CURRENCY MANDATE Mandate_Test USD AMOUNT 123 LOCAL AMOUNT 12 Beg. Mkt 123 End. Mkt 456 Name: New_Row, dtype: object | I think it is a bug - BUG: incompatible dtype when creating string column with loc #55025. It should be solved in the next version of pandas. | 12 | 9 |
77,184,058 | 2023-9-27 | https://stackoverflow.com/questions/77184058/save-a-pandas-dataframe-to-a-csv-file-without-adding-extra-double-quotes | I want to save a Pandas dataframe to a CSV file in such a way that no additional double quotes or any other characters are added to these formulas. Here is my attempt: import pandas as pd data = { "Column1": [1, 2, 3], "Column2": ["A", "B", "C"], "Formula": ['"=HYPERLINK(""https://www.yahoo.com"",""See Yahoo"")"', '"=HYPERLINK(""https://www.google.com"",""See Google"")"', '"=HYPERLINK(""https://www.bing.com"",""See Bing"")"'] } df = pd.DataFrame(data) # Save the DataFrame to a CSV file without adding extra double quotes df.to_csv("output.csv", index=False, doublequote=False) But this throws this error: File "pandas/_libs/writers.pyx", line 75, in pandas._libs.writers.write_csv_rows _csv.Error: need to escape, but no escapechar set How can I bypass this? I need it so that the hyperlink shows in Excel as a clickable link. | Remove the double quotes "" from the hyperlinks: import pandas as pd data = { "Column1": [1, 2, 3], "Column2": ["A", "B", "C"], "Formula": [ '=HYPERLINK("https://www.yahoo.com", "See Yahoo")', '=HYPERLINK("https://www.google.com", "See Google")', '=HYPERLINK("https://www.bing.com", "See Bing")', ], } df = pd.DataFrame(data) df.to_csv("out.csv", index=False, quotechar='"') Creates out.csv: Column1,Column2,Formula 1,A,"=HYPERLINK(""https://www.yahoo.com"", ""See Yahoo"")" 2,B,"=HYPERLINK(""https://www.google.com"", ""See Google"")" 3,C,"=HYPERLINK(""https://www.bing.com"", ""See Bing"")" Opening the CSV in LibreOffice (Ctrl+Click on the formula opens a webpage): | 3 | 4 |
77,170,542 | 2023-9-25 | https://stackoverflow.com/questions/77170542/how-to-make-pydantic-discriminate-a-nested-object-based-on-a-field | I have this Pydantic model: import typing import pydantic class TypeAData(pydantic.BaseModel): aStr: str class TypeBData(pydantic.BaseModel): bNumber: int class TypeCData(pydantic.BaseModel): cBoolean: bool class MyData(pydantic.BaseModel): type: typing.Literal['A', 'B', 'C'] name: str data: TypeAData | TypeBData | TypeCData However, if type is equal to "A" and data contains TypeBData, it'll validate correctly when it shouldn't. This could be an alternative: class MyData(pydantic.BaseModel): type: typing.Literal['A', 'B', 'C'] name: str data: TypeAData | TypeBData | TypeCData @pydantic.validator('data', pre=True, always=True) def validate_data(cls, data): if isinstance(data, dict): data_type = data.get('type') if data_type == 'A': return TypeAData(**data) elif data_type == 'B': return TypeBData(**data) elif data_type == 'C': return TypeCData(**data) raise ValueError('Invalid data or type') It works; however, is there a better way to do it without repeating the enum keys ('A', 'B' and 'C') and values (TypeAData, TypeBData and TypeCData) twice? I've tried using discriminated unions, but since the type field and the discriminated fields are in different levels within the model (the latter is inside a nested object), I could not make further progress on this. | Kinda late to the party, but here it goes. Consider the input I1: { "type": "A", "name": "Model A", "data": {"string": "Some string"}, } Using import typing import pydantic class TypeAData(pydantic.BaseModel): string: str class TypeBData(pydantic.BaseModel): integer: int class TypeCData(pydantic.BaseModel): boolean: bool class MyData(pydantic.BaseModel): type: typing.Literal['A', 'B', 'C'] name: str data: TypeAData | TypeBData | TypeCData = pydantic.Field(discriminator='type') model = MyData.model_validate({ 'type': 'A', 'name': 'Model A', 'data': {'text': 'Some string'}, }) print(model) will, as noted, fail with pydantic.errors.PydanticUserError: Model 'TypeAData' needs a discriminator field for key 'type' The first fix is to add a type field into each union, as follows: class TypeAData(pydantic.BaseModel): type: typing.Literal['A'] = pydantic.Field(exclude=True, repr=False) text: str class TypeBData(pydantic.BaseModel): type: typing.Literal['B'] = pydantic.Field(exclude=True, repr=False) integer: int class TypeCData(pydantic.BaseModel): type: typing.Literal['C'] = pydantic.Field(exclude=True, repr=False) boolean: int We've added the exclude and repr parameters since this field is accessible only in the Python code while importing the data, and we don't want to export it messing with the schema. However, now the error will be pydantic_core._pydantic_core.ValidationError: 1 validation error for MyData data Unable to extract tag using discriminator 'type' [type=union_tag_not_found, input_value={'text': 'Some string'}, input_type=dict] For further information visit https://errors.pydantic.dev/2.10/v/union_tag_not_found The second fix is transform I1 into the following input I2: { "type": "A", "name": "Model A", "data": {"string": "Some string", "type": "A"}, } This will be done by adding a model validator in MyData as follows: class MyData(pydantic.BaseModel): # ... 
@pydantic.model_validator(mode='wrap') @classmethod def discriminate_nested( cls, data: typing.Any, handler: pydantic.ValidatorFunctionWrapHandler, ) -> typing.Self: if isinstance(data, dict): updated_data = {**data} updated_data['data']['type'] = data['type'] return handler(updated_data) return data Running the code, now updated, import typing import pydantic class TypeAData(pydantic.BaseModel): type: typing.Literal['A'] = pydantic.Field(exclude=True, repr=False) text: str class TypeBData(pydantic.BaseModel): type: typing.Literal['B'] = pydantic.Field(exclude=True, repr=False) integer: int class TypeCData(pydantic.BaseModel): type: typing.Literal['C'] = pydantic.Field(exclude=True, repr=False) boolean: int class MyData(pydantic.BaseModel): type: typing.Literal['A', 'B', 'C'] name: str data: TypeAData | TypeBData | TypeCData = pydantic.Field(discriminator='type') @pydantic.model_validator(mode='wrap') @classmethod def discriminate_nested( cls, data: typing.Any, handler: pydantic.ValidatorFunctionWrapHandler, ) -> typing.Self: if isinstance(data, dict): updated_data = {**data} updated_data['data']['type'] = data['type'] return handler(updated_data) return data model = MyData.model_validate({ 'type': 'A', 'name': 'Model A', 'data': {'text': 'Some string'}, }) print(model) will give us the expected output: type='A' name='Model A' data=TypeAData(text='Some string') But we can do better Note the duplications, highlighted in red, green, yellow, blue and orange: To fix that, I've create a PydanticUnionHandler, which you can use as follows: import pydantic TypeData = PydanticUnionHandler(nested_field='type') @TypeData.add('A') class TypeAData(pydantic.BaseModel): text: str @TypeData.add('B') class TypeBData(pydantic.BaseModel): integer: int @TypeData.add('C') class TypeCData(pydantic.BaseModel): boolean: int @TypeData.register( data_spec=PydanticUnionHandler.RegisterSpec( field_name='data', field=pydantic.Field(), # Optional, if you want to define an alias ), type_spec=PydanticUnionHandler.RegisterSpec( field_name='type', field=pydantic.Field(), # Optional, if you want to define an alias ), ) class MyData(pydantic.BaseModel): name: str model = MyData.model_validate({ 'type': 'A', 'name': 'Model A', 'data': {'text': 'Some string'}, }) print(model) This also outputs type='A' name='Model A' data=TypeAData(text='Some string') Note now how the duplications were reduced: Its implementation is as follows (Python 3.12 tested): import dataclasses import typing import pydantic import pydantic.fields import pydantic._internal._decorators class PydanticUnionHandler: @dataclasses.dataclass(frozen=True, kw_only=True) class RegisterSpec: field_name: str field: pydantic.Field _types: dict[type, typing.LiteralString] def __init__(self, *, nested_field: str): self.nested_field = nested_field self._types = {} def add[R: pydantic.BaseModel]( self, name: typing.LiteralString, ) -> typing.Callable[[typing.Type[R]], typing.Type[R]]: """ Register a nested union model to this handler. :param name: which name should the `type` field of the root's model have. :return: the same decorated model, now with a new field (named by `self.nested_field`). 
""" def _(cls: typing.Type[R]) -> typing.Type[R]: # Dynamic add the `pydantic.Field(exclude=True, repr=False)` to the decorated `cls` cls.model_fields.update({ self.nested_field: pydantic.fields.FieldInfo.merge_field_infos( pydantic.fields.FieldInfo.from_annotation(typing.Literal[name]), pydantic.Field(exclude=True, repr=False), ), }) cls.model_rebuild(force=True) # Register the decorated `cls` into this handler self._types[cls] = name return cls return _ def register[R: pydantic.BaseModel]( self, *, data_spec: RegisterSpec, type_spec: RegisterSpec, ) -> typing.Callable[[typing.Type[R]], typing.Type[R]]: """ Modify the root model, which will contain the nested unions. :param data_spec: how the field containing the nested unions should be declared. :param type_spec: how the field containing the discriminator label should be declared. :return: the same decorated model, now with two news fields (named by `data_spec.field_name` and `type_spec.field_name), and a model validator. """ def _(cls: typing.Type[R]) -> typing.Type[R]: # The model validator to be added dynamically to the model def discriminated_nested( _, data: typing.Any, handler: pydantic.ValidatorFunctionWrapHandler, ) -> typing.Self: if isinstance(data, dict): updated_data = {**data} data_key: str = data_spec.field.alias or data_spec.field_name type_key: str = type_spec.field.alias or type_spec.field_name updated_data[data_key][self.nested_field] = updated_data[type_key] return handler(updated_data) return data # Source: https://github.com/pydantic/pydantic/issues/1937#issuecomment-1853320238 cls.model_fields.update({ # Adds the nested union field data_spec.field_name: pydantic.fields.FieldInfo.merge_field_infos( pydantic.fields.FieldInfo.from_annotation( typing.Annotated[ typing.Union[*[ typing.Annotated[type_, pydantic.Tag(name)] for type_, name in self._types.items() ]], pydantic.Discriminator(self.nested_field), ], ), data_spec.field, ), # Adds the discriminator label field type_spec.field_name: pydantic.fields.FieldInfo.merge_field_infos( pydantic.fields.FieldInfo.from_annotation( typing.Union[*[typing.Literal[name] for name in self._types.values()]], ), type_spec.field, ), }) # Adds the model field cls.discriminated_nested = classmethod(discriminated_nested) cls.__pydantic_decorators__.model_validators.update({ discriminated_nested.__name__: pydantic._internal._decorators.Decorator.build( cls, cls_var_name=discriminated_nested.__name__, shim=None, info=pydantic._internal._decorators.ModelValidatorDecoratorInfo(mode='wrap'), ), }) cls.model_rebuild(force=True) return cls return _ | 3 | 0 |
77,164,318 | 2023-9-23 | https://stackoverflow.com/questions/77164318/error-with-langchain-chatprompttemplate-from-messages | As shown in LangChain Quickstart, I am trying the following Python code: from langchain.prompts.chat import ChatPromptTemplate template = "You are a helpful assistant that translates {input_language} to {output_language}." human_template = "{text}" chat_prompt = ChatPromptTemplate.from_messages([ ("system", template), ("human", human_template), ]) chat_prompt.format_messages(input_language="English", output_language="French", text="I love programming.") But when I run the above code, I get the following error: Traceback (most recent call last): File "/home/yser364/Projets/SinappsIrdOpenaiQA/promptWorkout.py", line 6, in <module> chat_prompt = ChatPromptTemplate.from_messages([ File "/home/yser364/.local/lib/python3.10/site-packages/langchain/prompts/chat.py", line 220, in from_messages return cls(input_variables=list(input_vars), messages=messages) File "/home/yser364/.local/lib/python3.10/site-packages/langchain/load/serializable.py", line 64, in __init__ super().__init__(**kwargs) File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 4 validation errors for ChatPromptTemplate messages -> 0 value is not a valid dict (type=type_error.dict) messages -> 0 value is not a valid dict (type=type_error.dict) messages -> 1 value is not a valid dict (type=type_error.dict) messages -> 1 value is not a valid dict (type=type_error.dict) I use Python 3.10.12. | Your example is from the Prompt templates section of the LangChain Quickstart tutorial. I did not spot any differences, so it should work as given. I tried out the example myself, with an additional loop to output the messages created by chat_prompt.format_messages: from langchain.prompts.chat import ChatPromptTemplate template = "You are a helpful assistant that translates {input_language} to {output_language}." human_template = "{text}" chat_prompt = ChatPromptTemplate.from_messages([ ("system", template), ("human", human_template), ]) messages = chat_prompt.format_messages(input_language="English", output_language="French", text="I love programming.") for message in messages: print(message.__repr__()) The example works without any errors. The result is very similar to what is shown in the tutorial, although not identical: SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}) HumanMessage(content='I love programming.', additional_kwargs={}, example=False) I ran the test with Python 3.9.5 and LangChain 0.0.300, which is the lastest version on PyPI. According to PyPI, it supports Python >=3.8.1 and <4.0. Maybe your version of LangChain or one of its dependencies is outdated? Try to run it in a new venv with a fresh install of LangChain. | 3 | 10 |
77,158,196 | 2023-9-22 | https://stackoverflow.com/questions/77158196/get-list-of-column-names-with-values-0-for-every-row-in-polars | I want to add a column result to a polars DataFrame that contains a list of the column names with a value greater than zero at that position. So given this: import polars as pl df = pl.DataFrame({"apple": [1, 0, 2, 0], "banana": [1, 0, 0, 1]}) cols = ["apple", "banana"] How do I get: shape: (4, 3) ┌───────┬────────┬─────────────────────┐ │ apple ┆ banana ┆ result │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ list[str] │ ╞═══════╪════════╪═════════════════════╡ │ 1 ┆ 1 ┆ ["apple", "banana"] │ │ 0 ┆ 0 ┆ [] │ │ 2 ┆ 0 ┆ ["apple"] │ │ 0 ┆ 1 ┆ ["banana"] │ └───────┴────────┴─────────────────────┘ All I have so far is the truth values: df.with_columns(pl.concat_list(pl.col(cols).gt(0)).alias("result")) shape: (4, 3) ┌───────┬────────┬────────────────┐ │ apple ┆ banana ┆ result │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ list[bool] │ ╞═══════╪════════╪════════════════╡ │ 1 ┆ 1 ┆ [true, true] │ │ 0 ┆ 0 ┆ [false, false] │ │ 2 ┆ 0 ┆ [true, false] │ │ 0 ┆ 1 ┆ [false, true] │ └───────┴────────┴────────────────┘ | Here's one way: you can use pl.when with pl.lit in the concat_list to get either the literal column names or nulls, then do a list.drop_nulls: df.with_columns( result=pl.concat_list( pl.when(pl.col(col) > 0).then(pl.lit(col)) for col in df.columns ).list.drop_nulls() ) shape: (4, 3) ┌───────┬────────┬─────────────────────┐ │ apple ┆ banana ┆ result │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ list[str] │ ╞═══════╪════════╪═════════════════════╡ │ 1 ┆ 1 ┆ ["apple", "banana"] │ │ 0 ┆ 0 ┆ [] │ │ 2 ┆ 0 ┆ ["apple"] │ │ 0 ┆ 1 ┆ ["banana"] │ └───────┴────────┴─────────────────────┘ | 4 | 4 |
77,160,103 | 2023-9-22 | https://stackoverflow.com/questions/77160103/exponential-moving-average-ema-calculations-in-polars-dataframe | I have the following list of 20 values: values = [143.15,143.1,143.06,143.01,143.03,143.09,143.14,143.18,143.2,143.2,143.2,143.31,143.38,143.35,143.34,143.25,143.33,143.3,143.33,143.36] In order to find the Exponential Moving Average, across a span of 9 values, I can do the following in Python: def calculate_ema(values, periods, smoothing=2): ema = [sum(values[:periods]) / periods] for price in values[periods:]: ema.append((price * (smoothing / (1 + periods))) + ema[-1] * (1 - (smoothing / (1 + periods)))) return ema ema_9 = calculate_ema(values, periods=9) [143.10666666666668, 143.12533333333334, 143.14026666666666, 143.17421333333334, 143.21537066666667, 143.24229653333333, 143.26183722666667, 143.25946978133334, 143.27357582506667, 143.27886066005334, 143.28908852804267, 143.30327082243414] The resulting list of EMA values is 12 items long, the first value [0] corresponding to the 9th [8] value from values. Using Pandas and TA-Lib, I can perform the following: import pandas as pd import talib as ta df_pan = pd.DataFrame( { 'value': values } ) df_pan['ema_9'] = ta.EMA(df_pan['value'], timeperiod=9) df_pan value ema_9 0 143.15 NaN 1 143.10 NaN 2 143.06 NaN 3 143.01 NaN 4 143.03 NaN 5 143.09 NaN 6 143.14 NaN 7 143.18 NaN 8 143.20 143.106667 9 143.20 143.125333 10 143.20 143.140267 11 143.31 143.174213 12 143.38 143.215371 13 143.35 143.242297 14 143.34 143.261837 15 143.25 143.259470 16 143.33 143.273576 17 143.30 143.278861 18 143.33 143.289089 19 143.36 143.303271 The Pandas / TA-Lib output corresponds with that of my Python function. However, when I try to replicate this using funtionality purely in Polars: import polars as pl df = ( pl.DataFrame( { 'value': values } ) .with_columns( pl.col('value').ewm_mean(span=9, min_periods=9,).alias('ema_9') ) ) df I get different values: value ema_9 f64 f64 143.15 null 143.1 null 143.06 null 143.01 null 143.03 null 143.09 null 143.14 null 143.18 null 143.2 143.128695 143.2 143.144672 143.2 143.156777 143.31 143.189683 143.38 143.229961 143.35 143.255073 143.34 143.272678 143.25 143.268011 143.33 143.280694 143.3 143.284626 143.33 143.293834 143.36 143.307221 Can anyone please explain what adjustments I need to make to my Polars code in order get the expected results? | Two things here: Reading the ewm_mean docs closely, you want adjust=False (default is True). min_periods is still doing the calculations as if you didn't skip any values, it just replaces those calculated values with null up to the min_periodsth row, so to speak. Try removing min_periods and see how the tail values don't change at all. To actually change the calculation (starting with the mean of the first min_periods values), we can do a a pl.when with cum_count (a handy way to get the row index of a value). The calculations will all still be done under the hood, but the ewm_mean will stay at this constant value, of course, until row 9, and min_periods=9 will null them out in the end. 
All together: df.with_columns( pl.when(pl.col('value').cum_count() <= 9) # NOTE: Polars cum_count starts at 1 .then(pl.col('value').head(9).mean()) .otherwise(pl.col('value')) .ewm_mean(span=9, min_periods=9, adjust=False) .alias('ema_9') ) shape: (20, 2) ┌────────┬────────────┐ │ value ┆ ema_9 │ │ --- ┆ --- │ │ f64 ┆ f64 │ ╞════════╪════════════╡ │ 143.15 ┆ null │ │ 143.1 ┆ null │ │ 143.06 ┆ null │ │ 143.01 ┆ null │ │ 143.03 ┆ null │ │ 143.09 ┆ null │ │ 143.14 ┆ null │ │ 143.18 ┆ null │ │ 143.2 ┆ 143.106667 │ │ 143.2 ┆ 143.125333 │ │ 143.2 ┆ 143.140267 │ │ 143.31 ┆ 143.174213 │ │ 143.38 ┆ 143.215371 │ │ 143.35 ┆ 143.242297 │ │ 143.34 ┆ 143.261837 │ │ 143.25 ┆ 143.25947 │ │ 143.33 ┆ 143.273576 │ │ 143.3 ┆ 143.278861 │ │ 143.33 ┆ 143.289089 │ │ 143.36 ┆ 143.303271 │ └────────┴────────────┘ | 6 | 7 |
77,141,633 | 2023-9-20 | https://stackoverflow.com/questions/77141633/prefect-importerror-cannot-import-name-secretfield-from-pydantic | I'm currently using Prefect to orchestrate some simple tasks in Python. It's working fine until I get this error: Traceback (most recent call last): File "test.py", line 2, in <module> from prefect import flow File "/Users/.../.venv/lib/python3.8/site-packages/prefect/__init__.py", line 25, in <module> from prefect.states import State ... ImportError: cannot import name 'SecretField' from 'pydantic' (/Users/.../.venv/lib/python3.8/site-packages/pydantic/__init__.py) It seems I have the module installed in my venv: (.venv) User@user % pip show pydantic Name: pydantic Version: 2.3.0 Summary: Data validation using Python type hints Home-page: None Author: None Author-email: Samuel Colvin <[email protected]>, Eric Jolibois <[email protected]>, Hasan Ramezani <[email protected]>, Adrian Garcia Badaracco <[email protected]>, Terrence Dorsey <[email protected]>, David Montague <[email protected]> License: None Location: /Users.../.venv/lib/python3.8/site-packages Requires: annotated-types, pydantic-core, typing-extensions Required-by: prefect, fastapi Where could this come from? | That's because Prefect isn't compatible with pydantic>2 yet. So, they've pinned the reqs to versions less than 2 (check PR10144 for more details). Try to install the latest version of Prefect or downgrade your pydantic to 1.10.11: pip install -U prefect # or pip install pydantic==1.10.11 | 5 | 14 |
77,173,196 | 2023-9-25 | https://stackoverflow.com/questions/77173196/how-to-rename-a-conda-environment-using-micromamba | I now use micromamba instead of conda or mamba. I would like to rename/move an environment. Using conda, I can rename via: conda rename -n CURRENT_ENV_NAME NEW_ENV_NAME But this doesn't work with neither mamba nor micromamba: $ /opt/homebrew/Caskroom/miniforge/base/condabin/mamba rename -n flu_frequencies_test flu_frequencies Currently, only install, create, list, search, run, info, clean, remove, update, repoquery, activate and deactivate are supported through mamba. $ micromamba rename -n flu_frequencies_test flu_frequencies The following arguments were not expected: flu_frequencies rename Run with --help for more information. | Micromamba As stated in the answer to Cloning environment with micromamba, --clone is not an option in micromamba. Instead, export the environment yaml and create a new environment using the --file flag. Then remove the old environment. export the env dependencies: micromamba env export oldenv > oldenv.yaml create a new env with a new name: micromamba create -n newenv --file oldenv.yaml remove the old env: micromamba env remove -n oldenv delete the yaml file: rm oldenv.yaml | 4 | 4 |
77,159,136 | 2023-9-22 | https://stackoverflow.com/questions/77159136/efficiently-using-hugging-face-transformers-pipelines-on-gpu-with-large-datasets | I'm relatively new to Python and facing some performance issues while using Hugging Face Transformers for sentiment analysis on a relatively large dataset. I've created a DataFrame with 6000 rows of text data in Spanish, and I'm applying a sentiment analysis pipeline to each row of text. Here's a simplified version of my code: import pandas as pd import torch from tqdm import tqdm from transformers import pipeline data = { 'TD': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 'text': [ # ... (your text data here) ] } df_model = pd.DataFrame(data) device = 0 if torch.cuda.is_available() else -1 py_sentimiento = pipeline("sentiment-analysis", model="finiteautomata/beto-sentiment-analysis", tokenizer="finiteautomata/beto-sentiment-analysis", device=device, truncation=True) tqdm.pandas() df_model['py_sentimiento'] = df_model['text'].progress_apply(py_sentimiento) df_model['py_sentimiento'] = df_model['py_sentimiento'].apply(lambda x: x[0]['label']) However, I've encountered a warning message that suggests I should use a dataset for more efficient processing. The warning message is as follows: "You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset." I have a two questions: What does this warning mean, and why should I use a dataset for efficiency? How can I modify my code to batch my data and use parallel computing to make better use of my GPU resources, what code or function or library should be used with hugging face transformers? I'm eager to learn and optimize my code. | I think you can ignore this message. I found it being reported on different websites this year, but if I get it correctly, this Github issue on the Huggingface transformers (https://github.com/huggingface/transformers/issues/22387) shows that the warning can be safely ignored. In addition, batching or using datasets might not remove the warning or automatically use the resources in the best way. You can do call_count = 0 in here (https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/base.py#L1100) to ignore the warning, as explained by Martin Weyssow above. How can I modify my code to batch my data and use parallel computing to make better use of my GPU resources: You can add batching like this: py_sentimiento = pipeline("sentiment-analysis", model="finiteautomata/beto-sentiment-analysis", tokenizer="finiteautomata/beto-sentiment-analysis", batch_size=8, device=device, truncation=True) and most importantly, you can experiment with the batch size that will result to the highest GPU usage possible on your device and particular task. Huggingface provides here some rules to help users figure out how to batch: https://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-batching. Making the best resource/GPU usage possible might take some experimentation and it depends on the use case you work on every time. What does this warning mean, and why should I use a dataset for efficiency? This means the GPU utilization is not optimal, because the data is not grouped together and it is thus not processed efficiently. Using a dataset from the Huggingface library datasets will utilize your resources more efficiently. 
However, it is not so easy to tell what exactly is going on, especially considering that we don’t know exactly how the data looks like, what the device is and how the model deals with the data internally. The warning might go away by using the datasets library, but that does not necessarily mean that the resources are optimally used. What code or function or library should be used with hugging face transformers? Here is a code example with pipelines and the datasets library: https://huggingface.co/docs/transformers/v4.27.1/pipeline_tutorial#using-pipelines-on-a-dataset. It mentions that using iterables will fill your GPU as fast as possible and batching might also help with computational time improvements. In your case it seems you are doing a relatively small POC (doing inference for under 10,000 documents with a medium size model), so I don’t think you need to use pipelines. I assume the sentiment analysis model is a classifier and you want to keep using Pandas as shown in the post, so here is how you can combine both. This is usually fast enough for my experiments and prints no warnings about the resources. from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch as t import pandas as pd model = AutoModelForSequenceClassification.from_pretrained("finiteautomata/beto-sentiment-analysis") tokenizer = AutoTokenizer.from_pretrained("finiteautomata/beto-sentiment-analysis") def classify_dataframe_row( example: pd.Series, ): output = model(**tokenizer(example["text"], return_tensors="pt")) prediction = t.argmax(output[0]).detach().numpy() return prediction dataset = pd.read_csv("file") dataset = dataset.assign( prediction=dataset.progress_apply(classify_dataframe_row, axis=1) ) As soon as your inference starts, either with this snippet or with the datasets library code, you can run nvidia-smi in a terminal and check what the GPU usage is and play around with the parameters to optimize it. Beware that running the code on your local machine with a GPU vs running it on a larger machine, e.g., a Linux server with perhaps a more powerful GPU might lead to different performance and might need different tuning. If you wish to run the code for larger document collections, you can split the data in order to avoid GPU memory errors locally, or in order to speed up the inference with concurrent runs in a server. | 17 | 17 |
77,181,602 | 2023-9-26 | https://stackoverflow.com/questions/77181602/number-of-loop-tokens-used-in-a-function-of-python | def my_func(): t = 10 while(t > 0): t = t - 1 for item in range(10): pass Is there some easy way in Python to know that two loop constructs are getting used in the above code? I am trying to build a coding platform where I want to restrict users to using just one loop to solve a question, so this information can be precious for checking and restricting the users. | As others mentioned, you could use ast. code = """ def my_func(): t = 10 while(t > 0): t = t - 1 for item in range(10): pass """ import ast from typing import List, Union class LoopVisitor(ast.NodeVisitor): def __init__(self): self._loops: List[Union[ast.While, ast.For]] = [] @property def count(self) -> int: return len(self._loops) @property def loops(self) -> List[Union[ast.While, ast.For]]: return self._loops def visit_While(self, node: ast.While) -> None: self._loops.append(node) self.generic_visit(node) def visit_For(self, node: ast.For) -> None: self._loops.append(node) self.generic_visit(node) tree = ast.parse(code) visitor = LoopVisitor() visitor.visit(tree) print(visitor.count) More information about ast.NodeVisitor here and here. Alternatively you could also use tokenize. However, it will only work with syntactically valid Python code. import io import tokenize def count_loops(code: str) -> int: count = 0 for token in tokenize.tokenize(io.BytesIO(code.encode("utf-8")).readline): if token.type == tokenize.NAME and token.string in ("for", "while"): count += 1 return count print(count_loops(code)) More information about tokenize here and here If you want to use a third-party library, you could also investigate parso using parso.python.tokenize.tokenize: from parso.python import tokenize def count_loops(code: str) -> int: count = 0 for token in tokenize.tokenize(code, version_info=(3, 9, 0)): if token.type == tokenize.NAME and token.string in ("for", "while"): count += 1 return count print(count_loops(code)) Or, by using parso.parse, however, please note that I only tested the example against the code example in your question. It may not work universally. from parso import parse from parso.tree import BaseNode from parso.python.tree import ForStmt, WhileStmt node: BaseNode = parse(code, version="3.9") def count_loops(node: BaseNode) -> int: count = 0 for child in node.children: if isinstance(child, (ForStmt, WhileStmt)): count += 1 elif isinstance(child, BaseNode): count += count_loops(child) return count print(count_loops(node)) Here is also a solution using asttokens: import asttokens def count_loops(code: str) -> int: count = 0 for token in asttokens.ASTTokens(code).tokens: if token.string in ("for", "while"): count += 1 return count print(count_loops(code)) | 3 | 5 |
77,177,457 | 2023-9-26 | https://stackoverflow.com/questions/77177457/how-to-statically-protect-against-str-enum-comparisons | More often than not, I make the following mistake: import enum class StatusValues(enum.Enum): one = "one" two = "two" def status_is_one(status: str): return status == StatusValues.one String will never be an enum class. The problem is that it should be StatusValues.one.value. Is there a strcmp(status, StatusValues.one)-ish function so that my pyright will error on the line that I am comparing string with a class? Is there a good way to protect against such mistakes? | Yes, this can be a gotcha: by default we cannot directly compare strings with enum members: >>> status = "two" >>> status == StatusValues.two False One has to remember to compare a string to the enum member's .value, which is also kind of verbose: >>> "two" == StatusValues.two.value True Fortunately, there is a solution, which is even mentioned in the docs, but it used to be (pre- Python 3.11) in a section that was somewhat easy to miss (IMHO). We can mix in a type of enum members: class IntEnum(int, Enum): pass This demonstrates how similar derived enumerations can be defined; for example a StrEnum that mixes in str instead of int. The solution is therefore to define an enum with a str base class: class StatusValues(str, Enum): one = "one" two = "two" >>> status = "two" >>> status == StatusValues.two # no .value True In Python 3.11+ StrEnum is already included in the enum module, but you mentioned that you still need to support Python 3.7. | 4 | 5 |
77,165,100 | 2023-9-23 | https://stackoverflow.com/questions/77165100/only-the-first-row-of-annotations-displayed-on-seaborn-heatmap | As it's usually advised, I have managed to reduce my problem to a minimal reproducible example: import numpy as np import seaborn as sns import matplotlib.pyplot as plt matrix = np.array([[0.1234, 1.4567, 0.7890, 0.1234], [0.9876, 0, 0.5432, 0.6789], [0.1111, 0.2222, 0, 0.3333], [0.4444, 0.5555, 0.6666, 0]]) sns.heatmap(matrix, annot=True) plt.show() Vaguely based on Seaborn official documentation. Unfortunately, unlike what would be expected (all numbers visible), I get only the numbers in the top row visible: As there is not really much room for error in this one, I'm out of ideas and google/SO doesn't seem to have this question asked before. Is this a bug? I am running: Seaborn 0.12.2 Matplotlib 3.8.0 PyCharm 2023.1.4 Windows 10 | Just ran into the issue myself, I was on Seaborn 0.12.2. Ran pip install seaborn --upgrade and now have 0.13.0 Restarted vscode and annotations appeared. | 46 | 42 |
77,178,370 | 2023-9-26 | https://stackoverflow.com/questions/77178370/how-to-retrieve-source-documents-via-langchains-get-relevant-documents-method-o | I am making a chatbot which accesses an external knowledge base docs. I want to get the relevant documents the bot accessed for its answer, but this shouldn't be the case when the user input is something like "hello", "how are you", "what's 2+2", or any answer that is not retrieved from the external knowledge base docs. In this case, I want retriever.get_relevant_documents(query) or any other line to return an empty list or something similar. import os from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import FAISS from langchain.chains import ConversationalRetrievalChain from langchain.memory import ConversationBufferMemory from langchain.chat_models import ChatOpenAI from langchain.prompts import PromptTemplate os.environ['OPENAI_API_KEY'] = '' custom_template = """ This is conversation with a human. Answer the questions you get based on the knowledge you have. If you don't know the answer, just say that you don't, don't try to make up an answer. Chat History: {chat_history} Follow Up Input: {question} """ CUSTOM_QUESTION_PROMPT = PromptTemplate.from_template(custom_template) llm = ChatOpenAI( model_name="gpt-3.5-turbo", # Name of the language model temperature=0 # Parameter that controls the randomness of the generated responses ) embeddings = OpenAIEmbeddings() docs = [ "Buildings are made out of brick", "Buildings are made out of wood", "Buildings are made out of stone", "Buildings are made out of atoms", "Buildings are made out of building materials", "Cars are made out of metal", "Cars are made out of plastic", ] vectorstore = FAISS.from_texts(docs, embeddings) retriever = vectorstore.as_retriever() memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True) qa = ConversationalRetrievalChain.from_llm( llm, retriever, condense_question_prompt=CUSTOM_QUESTION_PROMPT, memory=memory ) query = "what are cars made of?" result = qa({"question": query}) print(result) print(retriever.get_relevant_documents(query)) I tried setting a threshold for the retriever but I still get relevant documents with high similarity scores. And in other user prompts where there is a relevant document, I do not get back any relevant documents. retriever = vectorstore.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": .9}) | To solve this problem, I had to change the chain type to RetrievalQA and introduce agents and tools. import os from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import FAISS from langchain.chains import RetrievalQA from langchain.memory import ConversationBufferMemory from langchain.chat_models import ChatOpenAI from langchain.prompts import PromptTemplate from langchain.agents import AgentExecutor, Tool,initialize_agent from langchain.agents.types import AgentType os.environ['OPENAI_API_KEY'] = '' system_message = """ "You are the XYZ bot." "This is conversation with a human. Answer the questions you get based on the knowledge you have." "If you don't know the answer, just say that you don't, don't try to make up an answer." 
""" llm = ChatOpenAI( model_name="gpt-3.5-turbo", # Name of the language model temperature=0 # Parameter that controls the randomness of the generated responses ) embeddings = OpenAIEmbeddings() docs = [ "Buildings are made out of brick", "Buildings are made out of wood", "Buildings are made out of stone", "Buildings are made out of atoms", "Buildings are made out of building materials", "Cars are made out of metal", "Cars are made out of plastic", ] vectorstore = FAISS.from_texts(docs, embeddings) retriever = vectorstore.as_retriever() memory = ConversationBufferMemory(memory_key="chat_history", input_key='input', return_messages=True, output_key='output') qa = RetrievalQA.from_chain_type( llm=llm, chain_type="stuff", retriever=vectorstore.as_retriever(), verbose=True, return_source_documents=True ) tools = [ Tool( name="doc_search_tool", func=qa, description=( "This tool is used to retrieve information from the knowledge base" ) ) ] agent = initialize_agent( agent = AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, tools=tools, llm=llm, memory=memory, return_source_documents=True, return_intermediate_steps=True, agent_kwargs={"system_message": system_message} ) query1 = "what are buildings made of?" result1 = agent(query1) query2 = "who are you?" result2 = agent(query2) if result accessed sources, it will have values for the key "intermediate_steps" then source documents can be accessed through result1["intermediate_steps"][0][1]["source_documents"] otherwise, when the query didn't need sources, result2["intermediate_steps"] will be empty. | 3 | 1 |
77,146,194 | 2023-9-20 | https://stackoverflow.com/questions/77146194/pyarrow-breaks-pyodbc-mysql | I have a Docker container with MySQL ODBC driver, unixODBC, and a bunch of Python stuff installed. My MySQL driver works through isql, and it works when connecting from Python with pyodbc, if I do so in a fresh Python process: sh-4.4# python Python 3.8.16 (default, May 31 2023, 12:44:21) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import pyodbc >>> pyodbc.connect("DRIVER=MySQL ODBC 8.1 ANSI Driver;SERVER=host.docker.internal;PORT=3306;UID=root;PWD=shh") <pyodbc.Connection object at 0x7f6fd94dac70> But, if I import pyarrow before establishing the connection, I get this: sh-4.4# python Python 3.8.16 (default, May 31 2023, 12:44:21) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import pyarrow >>> import pyodbc >>> pyodbc.connect("DRIVER=MySQL ODBC 8.1 ANSI Driver;SERVER=host.docker.internal;PORT=3306;UID=root;PWD=shh") Traceback (most recent call last): File "<stdin>", line 1, in <module> pyodbc.Error: ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib '/usr/lib64/libmyodbc8a.so' : file not found (0) (SQLDriverConnect)") I get the same if I specify the path to the driver directly: sh-4.4# python Python 3.8.16 (default, May 31 2023, 12:44:21) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import pyarrow >>> import pyodbc >>> pyodbc.connect("DRIVER=/usr/lib64/libmyodbc8a.so;SERVER=host.docker.internal;PORT=3306;UID=root;PWD=shh") Traceback (most recent call last): File "<stdin>", line 1, in <module> pyodbc.Error: ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib '/usr/lib64/libmyodbc8a.so' : file not found (0) (SQLDriverConnect)") Having run into the same/similar error message from unixODBC in the past if a transitive dependency of the library was missing, I tried this in an attempt to see if something messed up the loader search path. Not sure if it's a valid test, but nothing seems amiss: >>> import os >>> os.system('./lddtree.sh /usr/lib64/libmyodbc8a.so') libmyodbc8a.so => /usr/lib64/libmyodbc8a.soreadelf: /usr/lib64/libmyodbc8a.so: Warning: Section '.interp' was not dumped because it does not exist! (interpreter => none) readelf: /usr/lib64/libmyodbc8a.so: Warning: Section '.interp' was not dumped because it does not exist! libpthread.so.0 => /lib64/libpthread.so.0 libdl.so.2 => /lib64/libdl.so.2 libssl.so.1.1 => /lib64/libssl.so.1.1 libz.so.1 => /lib64/libz.so.1 libcrypto.so.1.1 => /lib64/libcrypto.so.1.1 libresolv.so.2 => /lib64/libresolv.so.2 librt.so.1 => /lib64/librt.so.1 libm.so.6 => /lib64/libm.so.6 libodbcinst.so.2 => /lib64/libodbcinst.so.2 libltdl.so.7 => /lib64/libltdl.so.7 libstdc++.so.6 => /lib64/libstdc++.so.6 libgcc_s.so.1 => /lib64/libgcc_s.so.1 libc.so.6 => /lib64/libc.so.6 ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 0 I tried upgrading pyodbc and pyarrow to latest, and behavior is the same: pyarrow 13.0.0 pyodbc 4.0.39 I'm not sure if the issue is around pyarrow specifically, but based on this bug reporting similar behavior when importing protobuf, I searched my libraries for anything referencing 'protobuf' and pyarrow popped up with a header file including that in its name. Probably a coincidence, as that was in an older version of pyarrow and the latest version no longer even has that file. 
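One more check on my list (nothing conclusive from it yet) is diffing the process environment and the set of loaded modules around the pyarrow import; this is only a rough sketch of the idea, not something I've confirmed catches the problem:
import os
import sys

# Snapshot the environment and loaded modules before importing pyarrow...
env_before = dict(os.environ)
mods_before = set(sys.modules)

import pyarrow  # noqa: F401

# ...then report anything that changed after the import.
env_changed = {k: (env_before.get(k), v) for k, v in os.environ.items() if env_before.get(k) != v}
new_modules = sorted(set(sys.modules) - mods_before)
print("changed env vars:", env_changed)
print("newly loaded modules:", len(new_modules))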
FWIW, the container also has other ODBC drivers that don't experience this issue. I assume pyarrow init is changing something in the environment, but I'm not enough of a Pythonista to know how to identify what; any tips to debug further? | In the end, this turned out to be exhaustion of the 2048 bytes allocated to TLS (Thread-Local Storage) for dynamically-loaded libraries. The libarrow.so associated with pyarrow is a pig when it comes to this block of memory, and loading it prior to loading the MySQL driver via pyodbc caused libmyodbc8a.so to push usage over that limit. Statically preloading libarrow.so by adding it to the LD_PRELOAD environment variable resolved the issue for me. (I first tried preloading libmyodbc8a.so, but that led to some other issues I didn't bother to track down - might as well focus on pyarrow since it's the memory hog anyway.) (libtool is really unhelpful with diagnostics. I ended up compiling a version locally with the macro LT_DEBUG_LOADERS set and running with that to get the root-cause error printed to STDERR: "cannot allocate memory in static TLS block".) | 3 | 2 |
77,173,549 | 2023-9-25 | https://stackoverflow.com/questions/77173549/replacing-a-simplestring-inside-a-libcst-function-definition-dataclasses-froze | Context While trying to use the libcst module, I am experiencing some difficulties updating a documentation of a function. MWE To reproduce the error, the following minimal working example (MWE) is included: from libcst import ( # type: ignore[import] Expr, FunctionDef, IndentedBlock, MaybeSentinel, SimpleStatementLine, SimpleString, parse_module, ) original_content: str = """ \"\"\"Example python file with a function.\"\"\" from typeguard import typechecked @typechecked def add_three(*, x: int) -> int: \"\"\"ORIGINAL This is a new docstring core. that consists of multiple lines. It also has an empty line inbetween. Here is the emtpy line.\"\"\" return x + 2 """ new_docstring_core: str = """\"\"\"This is a new docstring core. that consists of multiple lines. It also has an empty line inbetween. Here is the emtpy line.\"\"\"""" def replace_docstring( original_content: str, func_name: str, new_docstring: str ) -> str: """Replaces the docstring in a Python function.""" module = parse_module(original_content) for node in module.body: if isinstance(node, FunctionDef) and node.name.value == func_name: print("Got function node.") # print(f'node.body={node.body}') if isinstance(node.body, IndentedBlock): if isinstance(node.body.body[0], SimpleStatementLine): simplestatementline: SimpleStatementLine = node.body.body[ 0 ] print("Got SimpleStatementLine") print(f"simplestatementline={simplestatementline}") if isinstance(simplestatementline.body[0], Expr): print( f"simplestatementline.body={simplestatementline.body}" ) simplestatementline.body = ( Expr( value=SimpleString( value=new_docstring, lpar=[], rpar=[], ), semicolon=MaybeSentinel.DEFAULT, ), ) replace_docstring( original_content=original_content, func_name="add_three", new_docstring=new_docstring_core, ) print("done") Error: Running python mwe.py yields: Traceback (most recent call last): File "/home/name/git/Hiveminds/jsonmodipy/mwe0.py", line 68, in <module> replace_docstring( File "/home/name/git/Hiveminds/jsonmodipy/mwe0.py", line 56, in replace_docstring simplestatementline.body = ( ^^^^^^^^^^^^^^^^^^^^^^^^ File "<string>", line 4, in __setattr__ dataclasses.FrozenInstanceError: cannot assign to field 'body' Question How can one replace the docstring of a function named: add_three in some Python code file_content using the libcst module? Partial Solution I found the following solution for a basic example, however, I did not test it on different functions inside classes, with typed arguments, typed returns etc. from pprint import pprint import libcst as cst import libcst.matchers as m src = """\ import foo from a.b import foo_method class C: def do_something(self, x): \"\"\"Some first line documentation Some second line documentation Args:something. \"\"\" return foo_method(x) """ new_docstring:str = """\"\"\"THIS IS A NEW DOCSTRING Some first line documentation Some second line documentation Args:somethingSTILLCHANGED. 
\"\"\"""" class ImportFixer(cst.CSTTransformer): def leave_SimpleStatementLine(self, orignal_node, updated_node): """Replace imports that match our criteria.""" if m.matches(updated_node.body[0], m.Expr()): expr=updated_node.body[0] if m.matches(expr.value, m.SimpleString()): simplestring=expr.value print(f'GOTT={simplestring}') return updated_node.with_changes(body=[ cst.Expr(value=cst.SimpleString(value=new_docstring)) ]) return updated_node source_tree = cst.parse_module(src) transformer = ImportFixer() modified_tree = source_tree.visit(transformer) print("Original:") print(src) print("\n\n\n\nModified:") print(modified_tree.code) For example, this partial solution fails on: src = """\ import foo from a.b import foo_method class C: def do_something(self, x): \"\"\"Some first line documentation Some second line documentation Args:something. \"\"\" return foo_method(x) def do_another_thing(y:List[str]) -> int: \"\"\"Bike\"\"\" return 1 """ because the solution does not verify the name of the function in which the SimpleString occurs. | Why were you getting the "FrozenInstanceError" ? As you saw, the CST produced by libcst is a graph made of immutable nodes (each one representing a part of the Python language). If you want to change a node, you actually need to make a new copy of it. This is done using node.with_changes() method. So you could do this in your first code snippet. However, there are more "elegant" ways to achieve this, partly documented in libcst's tutorial, as you just started doing in your partial solution. How can one replace the docstring of a function named: add_three in some Python code file_content using the libcst module? Use the libcst.CSTTransformer to navigate your way through: You need to find, in the CST, the node representing your function (libcst.FunctionDef) You then need to find the node representing the documentation of your function (licst.SimpleString) Update this documentation node import libcst class DocUpdater(libcst.CSTTransformer): """Upodate the docstring of the function `add_three`""" def __init__(self) -> None: super().__init__() self._docstring: str | None = None def visit_FunctionDef(self, node: libcst.FunctionDef) -> Optional[bool]: """Trying to find the node defining function `add_three`, and get its docstring""" if node.name.value == 'add_three': self._docstring = f'"""{node.get_docstring(clean=False)}"""' """Unfortunatly, get_docstring doesn't return the exact docstring node value: you need to add the docstring's triple quotes""" return True return False def leave_SimpleString( self, original_node: libcst.SimpleString, updated_node: libcst.SimpleString ) -> libcst.BaseExpression: """Trying to find the node defining the docstring of your function, and update the docstring""" if original_node.value == self._docstring: return updated_node.with_changes(value='"""My new docstring"""') return updated_node And finally: test = r''' import foo from a.b import foo_method class C: def add_three(self, x): """Some first line documentation Some second line documentation Args:something. """ return foo_method(x) def do_another_thing(y: list[str]) -> int: """Bike""" return 1 ''' cst = libcst.parse_module(test) updated_cst = cst.visit(DocUpdater()) print(updated_cst.code) Output: import foo from a.b import foo_method class C: def add_three(self, x): """My new docstring""" return foo_method(x) def do_another_thing(y: list[str]) -> int: """Bike""" return 1 | 4 | 5 |
77,181,503 | 2023-9-26 | https://stackoverflow.com/questions/77181503/how-to-understand-the-output-of-scipys-quadratic-assignment-function | I'm trying to use scipy's quadratic_assignment function but I can't understand how the output can describe an optimal solution. Here is a minimal example where I compare a small matrix to itself: import numpy as np from scipy.optimize import quadratic_assignment # Parameters n = 5 p = np.log(n)/n # Approx. 32% # Definitions A = np.random.rand(n,n)<p # Quadratic assignment res = quadratic_assignment(A, A) print(res.col_ind) and the results seem to be random assignments: [3 0 1 4 2] [3 2 4 1 0] [3 2 1 0 4] [4 3 1 0 2] [2 3 0 1 4] ... However, according to the docs col_ind is supposed to be the Column indices corresponding to the best permutation found of the nodes of B. Since the input matrices are equal (B==A), I would thus expect the identity assignment [0 1 2 3 4] to pop out. Changing n to larger values does not help. Is there something I am getting wrong? | The quadratic_assignment function (approximately) solves two types of problem: quadratic assignment and graph matching. These are mathematically very similar: the difference is that quadratic assignment involves minimising an objective function, whereas graph matching involves maximising that same function. (Reference: scipy docs). To distinguish between these two problems, the function accepts an option maximize (passed as a key:value pair in the options argument of the function). If not supplied, this defaults to False (i.e. the solution corresponds to the minimum of the function, i.e. the quadratic assignment problem rather than the graph matching problem). quadratic_assignment(A, B) # solves quadratic assignment quadratic_assignment(A, B, options={'maximize': False}) # solves quadratic assignment quadratic_assignment(A, B, options={'maximize': True}) # solves graph matching Now, when you say "Since the input matrices are equal (B==A), I would thus expect the identity assignment [0 1 2 3 4] to pop out" – that's what would be expected for graph matching, i.e. the identity assignment is indeed the permutation of B that matches A better than any other (or strictly speaking: at least as well as any other). So you would get your expected result by specifying options={'maximize': True} so that you get the solution to the graph matching problem rather than the default quadratic assignment: import numpy as np from scipy.optimize import quadratic_assignment # Parameters n = 5 p = np.log(n)/n # Approx. 32% # Definitions A = np.random.rand(n,n)<p # Graph matching res = quadratic_assignment(A, A, options={'maximize': True}) print(res.col_ind) When I run this, I get the expected [0 1 2 3 4] around 97% of the time. It's not 100% because (a) the algorithm is approximate and "not guaranteed to be optimal" (per the docs) and (b) the way that you construct A could create degenerate cases where there are multiple equally good solutions. | 2 | 6 |
77,181,517 | 2023-9-26 | https://stackoverflow.com/questions/77181517/python-open-cv-find-lines-in-noisy-image | I want to find the dark gray diagonal line in this the background is quite noisy and has a gradient in brightness. (The line is barely visible when opening the .png but if I read it as grayscale the line becomes more pronounced.) I tried different combinations of bluring, thresholding and canny edge detection. The best I could come up with was: img_blur = cv.bilateralFilter(img, 3, 120, 120) thresh = cv.adaptiveThreshold(img_blur, 255, cv.ADAPTIVE_THRESH_GAUSSIAN_C, cv.THRESH_BINARY, 501, 3) which results in this . Here there is still quite alot of noise in the background and the line is interrupted. I tried some morphological operations (dilate, erode, open, close) but with no real improvement. Applying something like lines = cv.HoughLines(thresh, rho=1, theta=np.pi / 180, threshold=130) left me with this . Which is what I wanted, but the threshold of 130 doesn't work for similar images and will find no or too many lines. | You can try to first detect ridges in the image using a Hessian, and then threshold. This seems to work okay on your example image. import cv2 import numpy as np import matplotlib.pyplot as plt from skimage.feature import hessian_matrix, hessian_matrix_eigvals def detect_ridges(img: np.ndarray, sigma: int = 3) -> np.ndarray: img = cv2.equalizeHist(img.astype(np.uint8)) elements = hessian_matrix(img, sigma, use_gaussian_derivatives=False) eigvals = hessian_matrix_eigvals(elements) cv2.normalize(eigvals[0], eigvals[0], 0, 255, cv2.NORM_MINMAX).astype(np.uint8) return eigvals[0] original_image = cv2.imread("zkN7m.png", cv2.IMREAD_GRAYSCALE) ridges = detect_ridges(original_image) thresholded = cv2.adaptiveThreshold((255-ridges).astype(np.uint8), 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 3, 2) plt.imshow(thresholded, cmap="bone") | 3 | 3 |
77,182,937 | 2023-9-26 | https://stackoverflow.com/questions/77182937/typeerror-get-loc-got-an-unexpected-keyword-argument-method | In pandas 2 with the following code: for time in tfo_dates: dt=pd.to_datetime(time) indx_time.append(df.index.get_loc(dt,method='nearest')) I get this error: TypeError: get_loc() got an unexpected keyword argument 'method' This worked in version 1.5 but if we look at the version 2 documentation there is no method argument anymore. What method I can use now to get nearest index of timestamp inside time index list? | IIUC, no need for a loop : matches = df.index.get_indexer(pd.to_datetime(tfo_dates), method="nearest") out = df.iloc[matches] Output : print(out) B A 2023-09-01 17:05:30 0.548814 2023-09-03 13:05:30 0.461479 2023-09-04 23:05:30 0.681820 Used inputs : B A 2023-09-01 17:05:30 0.548814 2023-09-01 19:05:30 0.715189 2023-09-01 21:05:30 0.602763 2023-09-01 23:05:30 0.544883 2023-09-02 01:05:30 0.423655 ... ... 2023-09-04 15:05:30 0.617635 2023-09-04 17:05:30 0.612096 2023-09-04 19:05:30 0.616934 2023-09-04 21:05:30 0.943748 2023-09-04 23:05:30 0.681820 [40 rows x 1 columns] np.random.seed(0) dtr = pd.date_range("20230901 170530", "20230905 002000", freq="2H") df = pd.DataFrame({"B": np.random.rand(len(dtr))}, index=pd.Index(dtr, name="A")) tfo_dates = ["2023-09-01 12:00:00", "2023-09-03 13:00:00", "2023-09-05 14:00:00"] | 4 | 2 |
77,154,846 | 2023-9-22 | https://stackoverflow.com/questions/77154846/when-using-a-conda-environment-vs-code-terminal-and-the-python-extension-v202 | I'm trying to use conda, python with VS Code. The shell I'm using in the integrated terminal is PowerShell. Everything works well in windows terminal, but after I relaunch vscode terminal, every conda commands doesn't work on vscode terminal (except activate and deactivate). conda command works at first activation Fist activtion Error After Relaunch (dl) C:\Users\{USERNAME}\Documents\VScode Workspace\pytorch>conda env list # >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<< Traceback (most recent call last): File "C:\Users\{USERNAME}\anaconda3\Lib\site-packages\conda\exception_handler.py", line 17, in __call__ return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\{USERNAME}\anaconda3\Lib\site-packages\conda\cli\main.py", line 54, in main_subshell parser = generate_parser(add_help=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\{USERNAME}\anaconda3\Lib\site-packages\conda\cli\conda_argparse.py", line 127, in generate_parser configure_parser_plugins(sub_parsers) File "C:\Users\{USERNAME}\anaconda3\Lib\site-packages\conda\cli\conda_argparse.py", line 354, in configure_parser_plugins else set(find_commands()).difference(plugin_subcommands) ^^^^^^^^^^^^^^^ File "C:\Users\{USERNAME}\anaconda3\Lib\site-packages\conda\cli\find_commands.py", line 71, in find_commands for entry in os.scandir(dir_path): ^^^^^^^^^^^^^^^^^^^^ OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: '.C:\\WINDOWS\\system32' `$ C:\Users\{USERNAME}\anaconda3\Scripts\conda-script.py env list` environment variables: CIO_TEST=<not set> CONDA_DEFAULT_ENV=dl CONDA_EXE=C:\Users\{USERNAME}\anaconda3\condabin\..\Scripts\conda.exe CONDA_EXES="C:\Users\{USERNAME}\anaconda3\condabin\..\Scripts\conda.exe" CONDA_PREFIX=C:\Users\{USERNAME}\anaconda3\envs\dl CONDA_PROMPT_MODIFIER=(dl) CONDA_PYTHON_EXE=C:\Users\{USERNAME}\anaconda3\python.exe CONDA_ROOT=C:\Users\{USERNAME}\anaconda3 CONDA_SHLVL=1 CURL_CA_BUNDLE=<not set> HOMEPATH=\Users\{USERNAME} LD_PRELOAD=<not set> PATH=C:\Users\{USERNAME}\anaconda3\envs\dl;C:\Users\LAPTOP- PNE\anaconda3\envs\dl\Library\mingw-w64\bin;C:\Users\LAPTOP- PNE\anaconda3\envs\dl\Library\usr\bin;C:\Users\LAPTOP- PNE\anaconda3\envs\dl\Library\bin;C:\Users\LAPTOP- PNE\anaconda3\envs\dl\Scripts;C:\Users\LAPTOP- PNE\anaconda3\envs\dl\bin;C:\Users\{USERNAME}\anaconda3\condabin;C:\WI NDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32 \WindowsPowerShell\v1.0;C:\WINDOWS\System32\OpenSSH;C:\Program Files\Bandizip;C:\Program Files\Microsoft SQL Server\150\Tools\Binn;C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\170\Tools\Binn;C:\Program Files\dotnet;C:\Users\LAPTOP- PNE\AppData\Local\Microsoft\WindowsApps;C:\Users\LAPTOP- PNE\.dotnet\tools;C:\Users\{USERNAME}\AppData\Local\Programs\Microsoft VS Code\bin;C:\Users\{USERNAME}\anaconda3;C:\Users\LAPTOP- PNE\anaconda3\Library;C:\Users\{USERNAME}\anaconda3\Scripts;.C:\WINDOW S\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\Win dowsPowerShell\v1.0\;C:\WINDOWS\System32\OpenSSH\;C:\Program Files\Bandizip\;C:\Program Files\Microsoft SQL Server\150\Tools\Binn\;C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\170\Tools\Binn\;C:\Program Files\dotnet\;C:\Users\LAPTOP- PNE\AppData\Local\Microsoft\WindowsApps;C:\Users\LAPTOP- PNE\.dotnet\tools;C:\Users\{USERNAME}\AppData\Local\Programs\Microsoft VS 
Code\bin;C:\Users\{USERNAME}\anaconda3;C:\Users\LAPTOP- PNE\anaconda3\Library;C:\Users\{USERNAME}\anaconda3\Scripts; PSMODULEPATH=C:\Program Files\WindowsPowerShell\Modules;C:\WINDOWS\system32\Windows PowerShell\v1.0\Modules PYTHONIOENCODING=utf-8 PYTHONUNBUFFERED=1 PYTHONUTF8=1 REQUESTS_CA_BUNDLE=<not set> SSL_CERT_FILE=C:\Users\{USERNAME}\anaconda3\envs\dl\Library\ssl\cacert.pem active environment : dl active env location : C:\Users\{USERNAME}\anaconda3\envs\dl shell level : 1 user config file : C:\Users\{USERNAME}\.condarc populated config files : C:\Users\{USERNAME}\.condarc conda version : 23.7.4 conda-build version : 3.26.1 python version : 3.11.5.final.0 virtual packages : __archspec=1=x86_64 __cuda=11.2=0 __win=0=0 base environment : C:\Users\{USERNAME}\anaconda3 (writable) conda av data dir : C:\Users\{USERNAME}\anaconda3\etc\conda conda av metadata url : None channel URLs : https://repo.anaconda.com/pkgs/main/win-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/r/win-64 https://repo.anaconda.com/pkgs/r/noarch https://repo.anaconda.com/pkgs/msys2/win-64 https://repo.anaconda.com/pkgs/msys2/noarch package cache : C:\Users\{USERNAME}\anaconda3\pkgs C:\Users\{USERNAME}\.conda\pkgs C:\Users\{USERNAME}\AppData\Local\conda\conda\pkgs envs directories : C:\Users\{USERNAME}\anaconda3\envs C:\Users\{USERNAME}\.conda\envs C:\Users\{USERNAME}\AppData\Local\conda\conda\envs platform : win-64 user-agent : conda/23.7.4 requests/2.31.0 CPython/3.11.5 Windows/10 Windows/10.0.22621 administrator : False netrc file : None offline mode : False I think it's conda env or python extension problem. When conda env activated, system paths are modified appending single dot at the last of path. And Python Extension also modify system path by appending {envPath} to it. Concatanation of dot and envpath create an invalid path (i.e.,".C:\System32") Disabling Python Extension's Activate Environment option could prevent error, but it also disable conda env auto activation. Plus, I found that dot '.' at the last of system path doesn't removed after deactivate conda >echo %PATH% C:\Users\{USERNAME}\anaconda3\condabin; (...) C:\Users\{USERNAME}\anaconda3\Scripts; >conda activate (base)>echo %PATH% C:\Users\{USERNAME}\anaconda3; (...) C:\Users\{USERNAME}\anaconda3\Scripts;. (base)>conda deactivate >echo %PATH% C:\Users\{USERNAME}\anaconda3\condabin; (...) C:\Users\{USERNAME}\anaconda3\Scripts;. | This is a bug: A drive with the name '.C' does not exist #22047. The fix has been made in Make sure PATH ends with a separator before prepending #22046 in the pre-release channel of the Python extension. If you switch to the pre-release channel and reload VS Code and still get this problem, the maintainers have provided some more instructions to help them work out the problem. Another (worse) solution (workaround): One user, kwikwag, has found a temporary workaround to put $env:Path = $env:Path.replace('.c','.;c') in their profile.ps1 file. The bug was related to changes to environment variable modification code contributed by the Python extension using VS Code's shell-integration-related features and APIs like EnvironmentVariableMutator. There were recent changes to this, which you can read about at https://code.visualstudio.com/updates/v1_82#_terminal-activation-using-environment-variables. Note that you can out of that experiment by putting "python.experiments.optOutFrom": ["pythonTerminalEnvVarActivation"] in your settings.json file. 
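If you want to confirm from inside an affected terminal that the malformed entry is present, a quick check along these lines works (just an illustrative sketch, not part of the official fix):
import os

# Flag any PATH entry where a leading '.' got glued onto a drive letter, e.g. '.C:\WINDOWS\system32'
suspects = [
    p for p in os.environ.get("PATH", "").split(os.pathsep)
    if len(p) > 2 and p[0] == "." and p[1].isalpha() and p[2] == ":"
]
print(suspects if suspects else "PATH looks clean")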
See also the related python.terminal.activateEnvInCurrentTerminal and python.terminal.activateEnvironment settings. | 4 | 4 |
77,182,455 | 2023-9-26 | https://stackoverflow.com/questions/77182455/is-there-a-numpy-function-similar-to-np-isin-that-allows-for-a-tolerance-instead | I have two arrays of different sizes and want to determine where elements of one array can be found in the other array. I would like to be able to allow for a tolerance between elements. The goal would be something like this: array1 = [1, 2, 3, 4, 5, 6] array2 = [2, 8, 1.00001, 1.1] places = [mystery function](array1, array2, tolerance = 0.001) returning indices places = [0,1] in array1. The closest I can get is with np.isin, which allows for arrays of different sizes and orders, but not the tolerance. (I have also tried np.allclose, but the shape and order mismatch is an issue there.) Of course this can be done with a loop, but my actual arrays are thousands of elements long so a loop is not practical. (It also doesn't have to be a numpy function--really just anything more efficient than a loop.) Thanks in advance for the help! | With a custom function that computes the absolute differences between the two arrays and returns the positions that fall within the given tolerance: import numpy as np array1 = np.array([1, 2, 3, 4, 5, 6]) array2 = np.array([2, 8, 1.00001, 1.1]) tolerance = 0.001 def argwhere_close(a, b, tol): return np.where(np.any(np.abs(a - b[:, None]) <= tol, axis=0))[0] print(argwhere_close(array1, array2, tolerance)) [0 1] | 3 | 4 |
77,178,058 | 2023-9-26 | https://stackoverflow.com/questions/77178058/how-to-set-safety-parameters-for-text-generation-model-in-google-cloud-vertex-ai | I am working on a research project where I need to summarize news articles using the Google Palm2 Text Generation Model. I have encountered an issue with certain news articles in my dataset where I'm getting empty responses along with safety attributes that block the output. Here is the code I'm using: from vertexai.language_models import TextGenerationModel parameters = { # default values 'max_output_tokens': 256, 'temperature': 0.0, 'top_p': 1.0, 'top_k': 40, } prompt = "..." model = TextGenerationModel.from_pretrained('text-bison@001') response = model.predict( prompt, **parameters, ) The following is an example prediction: Prediction(predictions=[{'content': '', 'citationMetadata': None, 'safetyAttributes': {'blocked': True, 'errors': [253.0]}}], deployed_model_id='', model_version_id='', model_resource_name='', explanations=None) The issue seems to be related to safety parameters preventing the model from generating a summary for certain news articles. I've been trying to find documentation on how to configure these safety parameters using the Python API, but I could not locate the relevant information. Could someone please provide guidance on how to set the safety parameters for the TextGenerationModel? Any help or pointers to documentation would be greatly appreciated. Thank you! | I'm not sure about Vertex AI but you can set the safety_settings of the PaLM model (from google generative AI) by the following: import google.generativeai as palm completion = palm.generate_text( model=model, prompt=prompt, safety_settings=[ { "category": safety_types.HarmCategory.HARM_CATEGORY_DEROGATORY, "threshold": safety_types.HarmBlockThreshold.BLOCK_NONE, }, { "category": safety_types.HarmCategory.HARM_CATEGORY_VIOLENCE, "threshold": safety_types.HarmBlockThreshold.BLOCK_NONE, }, ] ) You should checkout this guide to get complete details of the safety catalogue and how to set threshold for each category as there are multiple categories and different threshold levels. NOTE: To use the PaLM API from generative AI, you'd need to install it first via: pip install -q google-generativeai and then set an API key which you'll get from here: import google.generativeai as palm palm.configure(api_key='YOUR_API_KEY') and then to access the same text-bison-001 model: models = [m for m in palm.list_models() if 'generateText' in m.supported_generation_methods] model = models[0].name # use this model on the first code snippet print(model) # prints 'models/text-bison-001' | 3 | 1 |
77,180,558 | 2023-9-26 | https://stackoverflow.com/questions/77180558/how-can-i-prevent-the-anti-virus-from-detecting-my-app-as-a-virus-or-malware-whe | Recently I made an automated file sorter based on file extension using Python and the Tkinter GUI module. After I was done, I compiled the Python code into an executable using PyInstaller via the Windows terminal, then I put a "README" text file into the executable folder and compiled that folder as a setup executable using Inno Setup Compiler. It seems to work fine on my computer, but when I try to send it to someone, the user's anti-virus detects it as malware and blocks it or puts it in "Quarantine". So I'd really appreciate it if I could resolve this problem so other users can benefit from it without any issues. Thank you for your time <3 • Note: I am using Python 3.8 and customtkinter along with the normal Tkinter (both latest versions). I didn't know how to resolve it and couldn't think of a better place to ask than here. | This is both an interesting question and a question without an answer ;-) You indeed have a real problem and correctly explain what happens. Unfortunately, converting a Python program to a Windows executable will almost always raise anti-malware warnings. The underlying reason is that your executable uses bootstrap code that has to prepare an environment for the embedded executable to execute the Python script. Most anti-malware tools choke when they see a program extract something from its own data and execute it, because it is a well-known malware pattern where a genuine program is wrapped into something that first tries to spread a virus and in the end executes the original code. That means that I cannot propose any way to really solve your problem :-( What can be done: (1) if you only give your program to friends, just explain to them that it will be falsely detected - you can even explain the reason now - and that they have to set an exception in their anti-virus; (2) if you want to distribute it widely and work in a major organization, contact the major anti-virus companies to explain how the program was built (they may require the source and the build instructions) so that they explicitly allow it; (3) if neither of the previous ways is applicable, just explain the problem and provide the source code and build instructions to allow power users to build from source. I am aware that none of those workarounds is a nice solution, but it is the best I can give you... | 3 | 2 |
77,179,993 | 2023-9-26 | https://stackoverflow.com/questions/77179993/transposing-columns-to-rows-pandas | I have a situation like the following: ID Old value.Carbon Old value.Dioxide New value.Carbon New Value.Dioxide 123 34.89 13.45 56.66 11.11 456 12.13 55.66 66.88 12.33 The output that I want should be as below. (I want the columns to be transposed to rows like the following; I have renamed the fields as Old Value. and New Value. because I thought this might help, but if you have any other ideas I am more than happy to try something different. Thanks a lot for all the help!) ID Field New Value Old Value 123 Carbon 56.66 34.89 123 Dioxide 11.11 13.45 456 Dioxide 12.33 55.66 456 Carbon 66.88 12.13 | Using stack and a MultiIndex to reshape: tmp = df.set_index('ID') out = (tmp.set_axis(tmp.columns.str.capitalize() .str.split(r'\.', expand=True), axis=1) .rename_axis(columns=(None, 'Field')) .stack() .reset_index() ) Output: ID Field Old value New value 0 123 carbon 34.89 56.66 1 123 dioxide 13.45 11.11 2 456 carbon 12.13 66.88 3 456 dioxide 55.66 12.33 | 2 | 3 |
77,177,877 | 2023-9-26 | https://stackoverflow.com/questions/77177877/what-is-except-syntax-in-python-trystar-in-ast-module | I came across this documentation in the ast module for a version of the try/except block with an extra asterisk. The documentation doesn't explain what it is, and gives a completely generic example: class ast.TryStar(body, handlers, orelse, finalbody) try blocks which are followed by except* clauses. The attributes are the same as for Try but the ExceptHandler nodes in handlers are interpreted as except* blocks rather then except. print(ast.dump(ast.parse(""" try: ... except* Exception: ... """), indent=4)) What is except* and what is it for? Is it deprecated, or up-and-coming? (And perhaps more importantly, what is the feature called? except-star? except-glob? except-asterisk? try-star?) | except*(aka except-star) is one of the feature/syntax introduced in Python 3.11 and it is used to handle ExceptionGroup. Quoting from the official documentation except* clause The except* clause(s) are used for handling ExceptionGroups. The exception type for matching is interpreted as in the case of except, but in the case of exception groups we can have partial matches when the type matches some of the exceptions in the group. This means that multiple except* clauses can execute, each handling part of the exception group. Each clause executes at most once and handles an exception group of all matching exceptions. Each exception in the group is handled by at most one except* clause, the first that matches it. try: raise ExceptionGroup("eg", [ValueError(1), TypeError(2), OSError(3), OSError(4)]) except* TypeError as e: print(f'caught {type(e)} with nested {e.exceptions}') except* OSError as e: print(f'caught {type(e)} with nested {e.exceptions}') # caught <class 'ExceptionGroup'> with nested (TypeError(2),) # caught <class 'ExceptionGroup'> with nested (OSError(3), OSError(4)) # + Exception Group Traceback (most recent call last): # | File "<stdin>", line 2, in <module> # | ExceptionGroup: eg # +-+---------------- 1 ---------------- # | ValueError: 1 # +------------------------------------ | 4 | 6 |
77,168,202 | 2023-9-24 | https://stackoverflow.com/questions/77168202/calculating-total-tokens-for-api-request-to-chatgpt-including-functions | Hello Stack Overflow community, I've been working on integrating ChatGPT's API into my project, and I'm having some trouble calculating the total number of tokens for my API requests. Specifically, I'm passing both messages and functions in my API calls. I've managed to figure out how to calculate the token count for the messages, but I'm unsure about how to account for the tokens used by the functions. Could someone please guide me on how to properly calculate the total token count, including both messages and functions, for a request to ChatGPT's API? Any help or insights would be greatly appreciated! Thank you in advance. I have been working on brute forcing a solution by formatting the data in the call in different ways. I have been using the tokenizer, and Tiktokenizer to test my formats. | I am going to walk you through calculating the tokens for gpt-3.5 and gpt-4. You can apply a similar method to other models you just need to find the right settings. We are going to calculate the tokens used by the messages and the functions separately then adding them together at the end to get the total. Messages Start by getting the tokenizer using tiktoken. We will use this to tokenize all the custom text in the messages and functions. Also add constants for the extra tokens the API will add to the request. enc = tiktoken.encoding_for_model(model) Make a variable to hold the total tokens for the messages and set it to 0. msg_token_count = 0 Loop through the messages, and for each message add 3 to msg_token_count. Then loop through each element in the message and encode the value, adding the length of the encoded object to msg_token_count. If the dictionary has the "name" key set add an additional token to msg_token_count. for message in messages: msg_token_count += 3 # Add tokens for each message for key, value in message.items(): msg_token_count += len(enc.encode(value)) # Add tokens in set message if key == "name": msgTokenCount += 1 # Add token if name is set Finally we need to add 3 to msg_token_count, for the ending tokens. msgTokenCount += 3 # Add tokens to account for ending Functions Now we are going to calculate the number of tokens the functions will take. Start by making a variable to hold the total tokens used by functions and set it to 0. func_token_count = 0 Next we are going to loop through the functions and add tokens to func_token_count. Loop through the functions and add 7 to func_token_count for each function. Then add the length of the encoded name and description. For each function, if it has properties, add 3 to func_token_count. Then for each key in the properties add another 3 and the length of the encoded property, making sure to subtract 3 if it has an "enum" key, and adding 3 for each item in the enum section. Finally, add 12 to func_token_count to account for the tokens at the end of all the functions. 
for function in functions: func_token_count += 7 # Add tokens for start of each function f_name = function["name"] f_desc = function["description"] if f_desc.endswith("."): f_desc = f_desc[:-1] line = f_name + ":" + f_desc func_token_count += len(enc.encode(line)) # Add tokens for set name and description if len(function["parameters"]["properties"]) > 0: func_token_count += 3 # Add tokens for start of each property for key in list(function["parameters"]["properties"].keys()): func_token_count += 3 # Add tokens for each set property p_name = key p_type = function["parameters"]["properties"][key]["type"] p_desc = function["parameters"]["properties"][key]["description"] if "enum" in function["parameters"]["properties"][key].keys(): func_token_count += 3 # Add tokens if property has enum list for item in function["parameters"]["properties"][key]["enum"]: func_token_count += 3 func_token_count += len(enc.encode(item)) if p_desc.endswith("."): p_desc = p_desc[:-1] line = f"{p_name}:{p_type}:{p_desc}" func_token_count += len(enc.encode(line)) func_token_count += 12 Here is the full code. Please note that instead of hard coding the additional token counts I used a constant to hold the value. def get_token_count(model, messages, functions): # Initialize message settings to 0 msg_init = 0 msg_name = 0 msg_end = 0 # Initialize function settings to 0 func_init = 0 prop_init = 0 prop_key = 0 enum_init = 0 enum_item = 0 func_end = 0 if model in [ "gpt-3.5-turbo-0613", "gpt-4-0613" ]: # Set message settings for above models msg_init = 3 msg_name = 1 msg_end = 3 # Set function settings for the above models func_init = 7 prop_init = 3 prop_key = 3 enum_init = -3 enum_item = 3 func_end = 12 enc = tiktoken.encoding_for_model(model) msg_token_count = 0 for message in messages: msg_token_count += msg_init # Add tokens for each message for key, value in message.items(): msg_token_count += len(enc.encode(value)) # Add tokens in set message if key == "name": msg_token_count += msg_name # Add tokens if name is set msg_token_count += msg_end # Add tokens to account for ending func_token_count = 0 if len(functions) > 0: for function in functions: func_token_count += func_init # Add tokens for start of each function f_name = function["name"] f_desc = function["description"] if f_desc.endswith("."): f_desc = f_desc[:-1] line = f_name + ":" + f_desc func_token_count += len(enc.encode(line)) # Add tokens for set name and description if len(function["parameters"]["properties"]) > 0: func_token_count += prop_init # Add tokens for start of each property for key in list(function["parameters"]["properties"].keys()): func_token_count += prop_key # Add tokens for each set property p_name = key p_type = function["parameters"]["properties"][key]["type"] p_desc = function["parameters"]["properties"][key]["description"] if "enum" in function["parameters"]["properties"][key].keys(): func_token_count += enum_init # Add tokens if property has enum list for item in function["parameters"]["properties"][key]["enum"]: func_token_count += enum_item func_token_count += len(enc.encode(item)) if p_desc.endswith("."): p_desc = p_desc[:-1] line = f"{p_name}:{p_type}:{p_desc}" func_token_count += len(enc.encode(line)) func_token_count += func_end return msg_token_count + func_token_count Please let me know if something is not clear, or if you have a suggestion to make my post better. | 2 | 4 |
77,170,039 | 2023-9-25 | https://stackoverflow.com/questions/77170039/how-would-i-solve-a-linear-diophantine-congruence-in-python | I was given a challenge where the solution involves solving a series of linear modular equations in 14 variables. The following is a selection of these equations: 3a + 3b + 3c + 3d + 3e + 3f + 3g + h + i + j + k + l + m + n = 15 7a + 9b + 17c + 11d + 6e + 5f + g = 3 13a + 2b + 9c + 8d + 12f + 13g = 17 5a + 2b + 16c + 12d + 5e + 7f + g = 11 6a + 4b + 9c + 6d + 4e + 9f + 6g + h + 7i + 11j + k + 7l + 11m + n = 8 10a + 15b + 13c + 10d + 15e + 13f + 10g + 12h + 18i + 8j + 12k + 18l + 8m + 12n = 18 9a + 12b + 14c + 4d + 9e + 16f + 3g + 7h + 17i + 11j + 14k + 3l + 18m + n = 15 9a + 12b + 16c + 15d + e + 14f + 6g + 11h + 2i + 9j + 12k + 16l + 15m + n = 14 I ended up copying them into this modular equation solver to get a parameterized solution. However, I want to be able to do this automatically in a Python program, without depending on that website. Preferably, I should be able to do this with an arbitrary group of (linear) equations, not just these specific ones. Part of my solution for the challenge required me to write a few different equations for different scenarios, and swap them out as needed. I'm aware that SymPy has a Diophantine equation solver that almost does what I want it to do, but in the docs I didn't see a way to get it to enforce a certain modulus (mod 19, mod 23, etc). | The word diophantine is misleading here. What you want to do is linear algebra over Z_p meaning the integers modulo a prime p. SymPy can do this if you use DomainMatrix instead of Matrix. This is how to do it in SymPy 1.12: In [1]: from sympy import * In [2]: from sympy.polys.matrices import DomainMatrix In [3]: a, b, c, d, e, f, g, h, i, j, k, l, m, n = syms = symbols('a:n') In [4]: eqs = ''' ...: 3a + 3b + 3c + 3d + 3e + 3f + 3g + h + i + j + k + l + m + n = 15 ...: 7a + 9b + 17c + 11d + 6e + 5f + g = 3 ...: 13a + 2b + 9c + 8d + 12f + 13g = 17 ...: 5a + 2b + 16c + 12d + 5e + 7f + g = 11 ...: 6a + 4b + 9c + 6d + 4e + 9f + 6g + h + 7i + 11j + k + 7l + 11m + n = 8 ...: 10a + 15b + 13c + 10d + 15e + 13f + 10g + 12h + 18i + 8j + 12k + 18l + 8m + 12n = 18 ...: 9a + 12b + 14c + 4d + 9e + 16f + 3g + 7h + 17i + 11j + 14k + 3l + 18m + n = 15 ...: 9a + 12b + 16c + 15d + e + 14f + 6g + 11h + 2i + 9j + 12k + 16l + 15m + n = 14 ...: ''' In [5]: eqs = [parse_expr(eq, transformations='all').subs(I, i) for eq in eqs.strip().splitlines()] In [6]: M, b = linear_eq_to_matrix(eqs, syms) In [7]: M Out[7]: ⎡3 3 3 3 3 3 3 1 1 1 1 1 1 1 ⎤ ⎢ ⎥ ⎢7 9 17 11 6 5 1 0 0 0 0 0 0 0 ⎥ ⎢ ⎥ ⎢13 2 9 8 0 12 13 0 0 0 0 0 0 0 ⎥ ⎢ ⎥ ⎢5 2 16 12 5 7 1 0 0 0 0 0 0 0 ⎥ ⎢ ⎥ ⎢6 4 9 6 4 9 6 1 18 0 1 7 11 1 ⎥ ⎢ ⎥ ⎢10 15 13 10 15 13 10 12 26 0 12 18 8 12⎥ ⎢ ⎥ ⎢9 12 14 4 9 16 3 7 28 0 14 3 18 1 ⎥ ⎢ ⎥ ⎣9 12 16 15 1 14 6 11 11 0 12 16 15 1 ⎦ In [8]: b Out[8]: ⎡15⎤ ⎢ ⎥ ⎢3 ⎥ ⎢ ⎥ ⎢17⎥ ⎢ ⎥ ⎢11⎥ ⎢ ⎥ ⎢8 ⎥ ⎢ ⎥ ⎢18⎥ ⎢ ⎥ ⎢15⎥ ⎢ ⎥ ⎣14⎦ In [9]: Z_17 = GF(17, symmetric=False) In [12]: dM = DomainMatrix.from_Matrix(M).convert_to(Z_17).to_dense() In [13]: db = DomainMatrix.from_Matrix(b).convert_to(Z_17).to_dense() In [14]: dM Out[14]: DomainMatrix([[3 mod 17, 3 mod 17, 3 mod 17, 3 mod 17, 3 mod 17, 3 mod 17, 3 mod 17, 1 mod 17, 1 mod 17, 1 mod 17, 1 mod 17, 1 mod 17, 1 mod 17, 1 mod 17], [7 mod 17, 9 mod 17, 0 mod 17, 11 mod 17, 6 mod 17, 5 mod 17, 1 mod 17, 0 mod 17, 0 mod 17, 0 mod 17, 0 mod 17, 0 mod 17, 0 mod 17, 0 mod 17], [13 mod 17, 2 mod 17, 9 mod 17, 8 mod 17, 0 mod 17, 12 mod 17, 13 mod 17, 0 mod 17, 0 mod 17, 0 mod 17, 0 mod 17, 0 mod 
17, 0 mod 17, 0 mod 17], [5 mod 17, 2 mod 17, 16 mod 17, 12 mod 17, 5 mod 17, 7 mod 17, 1 mod 17, 0 mod 17, 0 mod 17, 0 mod 17, 0 mod 17, 0 mod 17, 0 mod 17, 0 mod 17], [6 mod 17, 4 mod 17, 9 mod 17, 6 mod 17, 4 mod 17, 9 mod 17, 6 mod 17, 1 mod 17, 1 mod 17, 0 mod 17, 1 mod 17, 7 mod 17, 11 mod 17, 1 mod 17], [10 mod 17, 15 mod 17, 13 mod 17, 10 mod 17, 15 mod 17, 13 mod 17, 10 mod 17, 12 mod 17, 9 mod 17, 0 mod 17, 12 mod 17, 1 mod 17, 8 mod 17, 12 mod 17], [9 mod 17, 12 mod 17, 14 mod 17, 4 mod 17, 9 mod 17, 16 mod 17, 3 mod 17, 7 mod 17, 11 mod 17, 0 mod 17, 14 mod 17, 3 mod 17, 1 mod 17, 1 mod 17], [9 mod 17, 12 mod 17, 16 mod 17, 15 mod 17, 1 mod 17, 14 mod 17, 6 mod 17, 11 mod 17, 11 mod 17, 0 mod 17, 12 mod 17, 16 mod 17, 15 mod 17, 1 mod 17]], (8, 14), GF(17)) In [15]: db Out[15]: DomainMatrix([[15 mod 17], [3 mod 17], [0 mod 17], [11 mod 17], [8 mod 17], [1 mod 17], [15 mod 17], [14 mod 17]], (8, 1), GF(17)) Now you have dM and db as matrices over Z_17 and you can use whatever matrix operations you want and convert back to Matrix with to_Matrix: In [16]: dM.hstack(db).rref()[0].to_Matrix() # RREF of augmented matrix Out[16]: ⎡1 0 0 0 0 0 0 0 11 11 5 2 2 6 0 ⎤ ⎢ ⎥ ⎢0 1 0 0 0 0 0 0 14 2 11 12 14 13 6 ⎥ ⎢ ⎥ ⎢0 0 1 0 0 0 0 0 3 7 10 11 15 0 5 ⎥ ⎢ ⎥ ⎢0 0 0 1 0 0 0 0 2 2 1 16 16 13 5 ⎥ ⎢ ⎥ ⎢0 0 0 0 1 0 0 0 13 13 16 7 14 0 15⎥ ⎢ ⎥ ⎢0 0 0 0 0 1 0 0 16 10 5 11 15 11 7 ⎥ ⎢ ⎥ ⎢0 0 0 0 0 0 1 0 8 10 6 13 1 0 7 ⎥ ⎢ ⎥ ⎣0 0 0 0 0 0 0 1 4 6 9 6 8 8 16⎦ In [17]: dM.nullspace().to_Matrix() # nullspace of M Out[17]: ⎡15 16 1 12 10 11 14 7 11 0 0 0 0 0 ⎤ ⎢ ⎥ ⎢15 12 8 12 10 9 9 2 0 11 0 0 0 0 ⎥ ⎢ ⎥ ⎢13 15 9 6 11 13 2 3 0 0 11 0 0 0 ⎥ ⎢ ⎥ ⎢12 4 15 11 8 15 10 2 0 0 0 11 0 0 ⎥ ⎢ ⎥ ⎢12 16 5 11 16 5 6 14 0 0 0 0 11 0 ⎥ ⎢ ⎥ ⎣2 10 0 10 0 15 0 14 0 0 0 0 0 11⎦ You can use these to construct your parametric solution: In [30]: p = dM.hstack(db).rref()[0].to_Matrix()[:,-1] In [31]: p = Matrix.vstack(p, p.zeros(6, 1)) In [32]: p.transpose() Out[32]: [0 6 5 5 15 7 7 16 0 0 0 0 0 0] In [33]: (M*p - b).applyfunc(lambda e: e % 17).is_zero_matrix Out[33]: True In [34]: N = dM.nullspace().to_Matrix() In [35]: N = dM.nullspace().transpose().to_Matrix() In [36]: (M*N).applyfunc(lambda e: e % 17).is_zero_matrix Out[36]: True The parametric solution is generated from the particular solution and nullspace: In [38]: sol = p + N*Matrix(symbols('alpha:6')) In [39]: sol Out[39]: ⎡ 15⋅α₀ + 15⋅α₁ + 13⋅α₂ + 12⋅α₃ + 12⋅α₄ + 2⋅α₅ ⎤ ⎢ ⎥ ⎢16⋅α₀ + 12⋅α₁ + 15⋅α₂ + 4⋅α₃ + 16⋅α₄ + 10⋅α₅ + 6⎥ ⎢ ⎥ ⎢ α₀ + 8⋅α₁ + 9⋅α₂ + 15⋅α₃ + 5⋅α₄ + 5 ⎥ ⎢ ⎥ ⎢12⋅α₀ + 12⋅α₁ + 6⋅α₂ + 11⋅α₃ + 11⋅α₄ + 10⋅α₅ + 5⎥ ⎢ ⎥ ⎢ 10⋅α₀ + 10⋅α₁ + 11⋅α₂ + 8⋅α₃ + 16⋅α₄ + 15 ⎥ ⎢ ⎥ ⎢11⋅α₀ + 9⋅α₁ + 13⋅α₂ + 15⋅α₃ + 5⋅α₄ + 15⋅α₅ + 7 ⎥ ⎢ ⎥ ⎢ 14⋅α₀ + 9⋅α₁ + 2⋅α₂ + 10⋅α₃ + 6⋅α₄ + 7 ⎥ ⎢ ⎥ ⎢ 7⋅α₀ + 2⋅α₁ + 3⋅α₂ + 2⋅α₃ + 14⋅α₄ + 14⋅α₅ + 16 ⎥ ⎢ ⎥ ⎢ 11⋅α₀ ⎥ ⎢ ⎥ ⎢ 11⋅α₁ ⎥ ⎢ ⎥ ⎢ 11⋅α₂ ⎥ ⎢ ⎥ ⎢ 11⋅α₃ ⎥ ⎢ ⎥ ⎢ 11⋅α₄ ⎥ ⎢ ⎥ ⎣ 11⋅α₅ ⎦ Let's check that: In [47]: (M*sol)._rep.convert_to(Z_17[symbols('alpha:6')]).to_Matrix() == b.applyfunc(lambda e: e % 17) Out[47]: True | 4 | 2 |
77,172,130 | 2023-9-25 | https://stackoverflow.com/questions/77172130/how-to-find-the-distance-to-next-non-nan-value-in-numpy-array | Consider the following array: arr = np.array( [ [10, np.nan], [20, np.nan], [np.nan, 50], [15, 20], [np.nan, 30], [np.nan, np.nan], [10, np.nan], ] ) For every cell in each column in arr I need to find the distance to the next non-NaN value. That is, the expected outcome should look like this: expected = np.array( [ [1, 2], [2, 1], [1, 1], [3, 1], [2, np.nan], [1, np.nan], [np.nan, np.nan] ] ) | Using pandas, you can compute a reverse cumcount, with mask and shift: out = (pd.DataFrame(arr).notna()[::-1] .apply(lambda s: s.groupby(s.cumsum()).cumcount().add(1) .where(s.cummax()).shift()[::-1]) .to_numpy() ) Output: array([[ 1., 2.], [ 2., 1.], [ 1., 1.], [ 3., 1.], [ 2., nan], [ 1., nan], [nan, nan]]) | 3 | 1 |
77,171,252 | 2023-9-25 | https://stackoverflow.com/questions/77171252/pandas-select-matching-multi-index-with-different-number-of-levels | I have a data frame with 3 index levels: d a b c 1 9 4 1 2 8 2 4 3 7 5 2 4 6 4 5 5 5 6 3 6 4 5 6 7 3 7 4 8 2 6 7 9 1 8 5 and I have a multi index object with only 2 levels: MultiIndex([(1, 9), (2, 8), (3, 7), (4, 6), (9, 1)], names=['a', 'b']) How can I select the entries on the data frame that match this multi index? Toy code: import pandas df1 = pandas.DataFrame( dict( a = [1,2,3,4,5,6,7,8,9], b = [9,8,7,6,5,4,3,2,1], c = [4,2,5,4,6,5,7,6,8], d = [1,4,2,5,3,6,4,7,5], ) ).set_index(['a','b','c']) select_this = multi_idx = pandas.MultiIndex.from_tuples([(1, 9), (2, 8), (3, 7), (4, 6), (9, 1)], names=['a', 'b']) selected = df1.loc[select_this] print(select_this) print(df1) print(selected) which produces ValueError: operands could not be broadcast together with shapes (5,2) (3,) (5,2). What I want to do can be achieved with selected = df1.reset_index('c').loc[select_this].set_index('c', append=True) However, this forces me to do this extra reset_index and then set_index. I want to avoid this. | You can filter different levels by Index.difference and filter by boolean indexing with Index.isin: lvl = df1.index.names.difference(multi_idx.names) out = df1[df1.index.droplevel(lvl).isin(multi_idx)] print (out) d a b c 1 9 4 1 2 8 2 4 3 7 5 2 4 6 4 5 9 1 8 5 | 3 | 3 |
77,169,369 | 2023-9-25 | https://stackoverflow.com/questions/77169369/tkinter-menu-bar-goes-invisible-when-checking-topmost-attribute | I have a larger tkinter app that I wanted to dinamically set the topmost attribute. I am able to achieve what I want, but everytime I check the state of topmost, the selected menu bar on the screen goes invisible. To reproduce this, consider the MRE below and upon running the code, click the "menu" button on the menu bar, the cascade opens and the exit button shows, watch it vanish after the check_topmost function runs, but the "menu" button is still pressed somehow. Commenting out either the line that checks the attribute or the line that sets it as True stops the behaviour import tkinter as tk def check_topmost(): print(app.attributes('-topmost')) # comment this app.after(1000, check_topmost) app = tk.Tk() menu_bar = tk.Menu(app) sub_menu = tk.Menu(menu_bar, tearoff=0) sub_menu.add_command(label = 'exit', command = app.destroy) menu_bar.add_cascade(label = "menu", menu = sub_menu) app.config(menu = menu_bar) app.attributes('-topmost', 1) # comment this app.after(1000, check_topmost) app.mainloop() What am I doing wrong? | This is not a fix for your issue, but rather a workaround. It seems like the topmost attribute is really weird... First, it is platform specific. Second, once set, it doesn't just change. If you set "-topmost", 1 for another window both will have the same attribute. From the documentation I found "-topmost gets or sets whether this is a topmost window". I assume, that once you call the attribute, it not only checks the attribute, but reassigns the value, therefore giving you the weird behavior with the drop-down menu. Interestingly, if you get all attributes at the same time by app.attributes(), your drop-down menu is not affected. So you can use this as workaround to check the attribute and only set it if necessary. Most likely, this is not the best solution, but it is the best I could come up with. import tkinter as tk def set_topmost(): app2.attributes('-topmost', 1) print('window 1', app.attributes()) print('window 2', app2.attributes()) # as you can see both windows can have the same topmost attribute app.attributes('-topmost', 0) # try the behavior after commenting this out def check_topmost(): print(app.attributes()) # this does not affect your window/drop-down menu att = str(app.attributes()) # making it a string as workaround if "'-topmost', 1" in att: print('already set as topmost') else: print('switch topmost') app.attributes('-topmost', 1) # this will still disrupt your drop down menu app.after(5000, check_topmost) # i set it to 5 seconds to better see the effects app = tk.Tk() menu_bar = tk.Menu(app) sub_menu = tk.Menu(menu_bar, tearoff=0) sub_menu.add_command(label = 'exit', command = app.destroy) menu_bar.add_cascade(label = "menu", menu = sub_menu) app.config(menu = menu_bar) app.attributes('-topmost', 1) # comment this app2 = tk.Toplevel() app2.geometry('500x500') btn = tk.Button(app, text='set topmost', command=set_topmost) btn.pack() app.after(1000, check_topmost) app.mainloop() | 3 | 2 |
77,170,645 | 2023-9-25 | https://stackoverflow.com/questions/77170645/pandas-2-1-0-warning-the-method-keyword-in-series-replace-is-deprecated-and | I have a pandas line of code that gives me a future deprecation warning as stated in the title and I can't find in the pandas documentation how to modify it in order to remove the warning. The line of code is the following: df['temp_open']=df['temp_open'].replace('',method='ffill') Any help would be greatly appreciated. I tried to fill blanks and it works, but I would like to get rid of the warning. | You can do this instead : df["temp_open"] = df["temp_open"].replace("", None).ffill() And if you want to keep the nulls (if any) untouched, you can use : df["temp_open"] = ( df["temp_open"].replace("", None).ffill().where(df["temp_open"].notnull()) ) Output : print(df) temp_open 0 A 1 A 2 NaN 3 B 4 C 5 C Used input : df = pd.DataFrame({"temp_open": ["A", "", None, "B", "C", ""]}) | 7 | 11 |
77,169,204 | 2023-9-24 | https://stackoverflow.com/questions/77169204/pythonic-way-of-dropping-columns-used-in-assign-i-e-pandas-equivalent-of-kee | In dplyr package of R, there's the option .keep = "unused" when creating new columns with the function mutate() (which is their equivalent of assign). An example, for those who haven't used it: > head(iris) Sepal.Length Sepal.Width Petal.Length Petal.Width Species 1 5.1 3.5 1.4 0.2 setosa 2 4.9 3.0 1.4 0.2 setosa 3 4.7 3.2 1.3 0.2 setosa 4 4.6 3.1 1.5 0.2 setosa 5 5.0 3.6 1.4 0.2 setosa 6 5.4 3.9 1.7 0.4 setosa # any column used in creating `new_col` is dropped afterwards automatically > mutate(.data = head(iris), new_col = Sepal.Length + Petal.Length * Petal.Width, .keep = "unused") Sepal.Width Species new_col 1 3.5 setosa 5.38 2 3.0 setosa 5.18 3 3.2 setosa 4.96 4 3.1 setosa 4.90 5 3.6 setosa 5.28 6 3.9 setosa 6.08 I say they are equivalent, but there doesn't appear to be the option for doing this with assign in the Pandas documentation so I assume it doesn't exist. I was curious about creating a way of doing something similar then. One way I can think of to do this is to create a list of names beforehand, and drop them afterwards, like this: from sklearn import datasets import pandas as pd used_columns = ['sepal length (cm)', 'petal length (cm)', 'petal width (cm)'] iris = pd.DataFrame(datasets.load_iris().data, columns=datasets.load_iris().feature_names) iris.assign(new_col = lambda x: x['sepal length (cm)'] + x['petal length (cm)'] * x['petal width (cm)']).drop(used_columns, axis=1) or iris.assign(new_col = lambda x: x[used_columns[0]] + x[used_columns[1]] * x[used_columns[2]]).drop(used_columns, axis=1) Which seems ~fine~, but requires a separate list, and with the first one, keeping two things updated, and with the second, the cognitive load of keeping track of what the nth list item is in my head. So I was curious if there's another way I'm not aware of of doing this, that would be easier to maintain? Both of the ones above seem not very Pythonic? Research I've done: I did a bunch of googling around this, with no luck. It seems there's plenty of ways of dropping columns, but none I've found seem particularly well-suited to this type of situation. Any help you could provide would be much appreciated! Answers which use other Python packages (e.g. janitor) are okay too. | I've never used R but based on the definition of unused and AFIK, to simulate the same behaviour in pandas, you will need to pop each column from a copy of the original DataFrame : "unused" retains only the columns not used in ... to create new columns. This is useful if you generate new columns, but no longer need the columns used to generate them. DataFrame.pop(item) returns item and drops from frame. Raises KeyError if not found. ( iris.copy().assign( new_col= lambda x: x.pop('sepal length (cm)') + x.pop('petal length (cm)') * x.pop('petal width (cm)')) ) Output : sepal width (cm) new_col 0 3.5 5.38 1 3.0 5.18 2 3.2 4.96 3 3.1 4.90 4 3.6 5.28 .. ... ... 145 3.0 18.66 146 2.5 15.80 147 3.0 16.90 148 3.4 18.62 149 3.0 15.08 [150 rows x 2 columns] | 7 | 9 |
77,169,051 | 2023-9-24 | https://stackoverflow.com/questions/77169051/how-to-loop-over-pivot-table-to-create-list-of-dictionaries-taking-the-index-and | I have this table here and i'm trying to take the value and device category from each row in each column so I can have data like below. series: [{ name: 'engaged_sessions', data: [{ name: 'Desktop', y: 7765, }, { name: 'Mobile', y: 388 },... name: 'event_count', data: [{ name: 'Desktop', y: 51325, }, { name: 'Mobile', y: 4349 },... And basically go through each column taking the device category and value into a list of dictionaries Heres the pivot table, engaged_sessions event_count new_users total_revenue total_users device_category Desktop 7765 51325 6593 9 8021 Mobile 388 4349 795 0 412 Smart Tv 2 38 1 250 9 Tablet 87 111 37 0 97 I've tried using a for loop and put each iteration into a list but it wasn't quite right. The closest I've gotten is with to_dict() method and I think that has the best bet so far. This question here (Pandas to_dict data structure, using column as dictionary index) is very similar but I'm trying group by each column and if I use groupby(df.cloumns) or groupby(['column'],['column']) it gives me objects with numbers in it but no reference to what they are | If df contains the pivoted dataframe from your question you can do: out = [] for c in df: out.append( {"name": c, "data": [{"name": k, "y": v} for k, v in df[c].to_dict().items()]} ) print(out) Prints: [ { "name": "engaged_sessions", "data": [ {"name": "Desktop", "y": 7765}, {"name": "Mobile", "y": 388}, {"name": "Smart Tv", "y": 2}, {"name": "Tablet", "y": 87}, ], }, { "name": "event_count", "data": [ {"name": "Desktop", "y": 51325}, {"name": "Mobile", "y": 4349}, {"name": "Smart Tv", "y": 38}, {"name": "Tablet", "y": 111}, ], }, { "name": "new_users", "data": [ {"name": "Desktop", "y": 6593}, {"name": "Mobile", "y": 795}, {"name": "Smart Tv", "y": 1}, {"name": "Tablet", "y": 37}, ], }, { "name": "total_revenue", "data": [ {"name": "Desktop", "y": 9}, {"name": "Mobile", "y": 0}, {"name": "Smart Tv", "y": 250}, {"name": "Tablet", "y": 0}, ], }, { "name": "total_users", "data": [ {"name": "Desktop", "y": 8021}, {"name": "Mobile", "y": 412}, {"name": "Smart Tv", "y": 9}, {"name": "Tablet", "y": 97}, ], }, ] | 3 | 3 |
77,168,615 | 2023-9-24 | https://stackoverflow.com/questions/77168615/how-to-separate-row-entries-in-a-csv-file-into-distinct-columns-with-duplicate-l | I need to restructure a .csv file that follows this structure: a b 1 2 3 4 5 6 ... Into this new structure: a b a b a b ... 1 2 3 4 5 6 ... I don't know how to do this efficiently without iterating over each row. Is there a more efficient way of doing this in Python? | Here is one possible solution: import pandas as pd # read the .csv to a dataframe df = pd.read_csv('your_file.csv') # create a new df new_df = pd.DataFrame([df.to_numpy().ravel()], columns=df.columns.to_list() * len(df)) print(new_df) Prints: a b a b a b 0 1 2 3 4 5 6 To save the new dataframe: new_df.to_csv("out.csv", index=False) | 2 | 3 |
77,167,537 | 2023-9-24 | https://stackoverflow.com/questions/77167537/extract-tuple-items-from-list-without-accessing-index | I am trying to extract the tuple items which consist of another embedded tuple without using the index. items = [(('a', 'b'), 'a'), (('a', 'b'), 'b'), (('a', 'b'), 'c'), (('a', 'b'), 'd')] # Currently accessing using index for xy,z in items: print(xy[0],xy[1],z) # Trying without index but getting error for x, y, z in items: print(x,y,z) for item in items: x, y, z = item print(x,y,z) Error: ValueError: not enough values to unpack (expected 3, got 2) Is it possible? | Yes, you just need to wrap the nested tuple to have an extra layer of unpacking: for (x, y), z in items: print(x,y,z) Note how (x, y) causes the inner tuple to be unpacked. | 2 | 3 |
77,165,974 | 2023-9-24 | https://stackoverflow.com/questions/77165974/nameerror-name-exceptiongroup-is-not-defined | I work through Python tutorial and ran against a sample in https://docs.python.org/3/tutorial/errors.html in section 8.9. Raising and Handling Multiple Unrelated Exceptions which doesn't work by me: $ python Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> excs = [OSError('error 1'), SystemError('error 2')] >>> raise ExceptionGroup('there were problems', excs) Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'ExceptionGroup' is not defined >>> Why? Isn't a ExceptionGroup a built-in exception? The compiler doesn't give any errors and IDE pops up the documentation for this class... The next thought was - I have to import something: >>> from builtins import ExceptionGroup Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: cannot import name 'ExceptionGroup' from 'builtins' (unknown location) What's wrong? | According to the official docs, ExceptionGroup was only added in Python 3.11. Make sure you check your Python version | 3 | 6 |
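A small sketch of a version guard for code that must also run on interpreters older than 3.11, where the name does not exist; the fallback shown here (raising a single RuntimeError listing the collected errors) is a simplification, and the exceptiongroup backport on PyPI is an option if real grouping is needed pre-3.11:

```python
import sys

excs = [OSError("error 1"), SystemError("error 2")]

if sys.version_info >= (3, 11):
    # ExceptionGroup is a builtin from Python 3.11 onwards (PEP 654),
    # which is why the tutorial example fails on 3.10.
    raise ExceptionGroup("there were problems", excs)
else:
    # Simplified fallback for older interpreters: report the errors together
    # without native grouping (or install the 'exceptiongroup' backport).
    raise RuntimeError(f"there were problems: {excs!r}")
```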
77,162,216 | 2023-9-23 | https://stackoverflow.com/questions/77162216/can-you-resize-overflowdirect-window-in-tkinter-without-flicker-when-adjusting-p | I have an overrideredirect window that I need to resize on mouse drag. The example is functioning, but when I drag any of: sw, w, nw, n and ne handles it causes flicker (most noticeable when dragging fast). This is likely due to adjusting position and size simultaneously. I tried update_idletasks to see if that would smooth out the transition but, it didn't. import tkinter as tk class Box(tk.Tk): def __init__(self): super().__init__() self.drag_point_x = None self.drag_point_y = None self.drag_margin = 15 self.width = 400 self.height = 200 self.x = 300 self.y = 300 self.geometry(f"{self.width}x{self.height}+{self.x}+{self.y}") self.overrideredirect(1) self.bind("<B1-Motion>", self.on_dragging) self.bind("<Button-1>", self.on_click) self.bind("<Motion>", self.on_motion) def on_dragging(self, event): mouse_x = self.winfo_pointerx() mouse_y = self.winfo_pointery() delta_width = self.x - mouse_x delta_height = self.y - mouse_y if self.drag_point == 'e': self.width = mouse_x - self.x elif self.drag_point == 'se': self.width = mouse_x - self.x self.height = mouse_y - self.y elif self.drag_point == 's': self.height = mouse_y - self.y elif self.drag_point == 'sw': self.height = mouse_y - self.y self.width += delta_width self.x = mouse_x elif self.drag_point == 'w': self.width += delta_width self.x = mouse_x elif self.drag_point == 'nw': self.width += delta_width self.height += delta_height self.x = mouse_x self.y = mouse_y elif self.drag_point == 'n': self.height += delta_height self.y = mouse_y elif self.drag_point == 'ne': self.width = mouse_x - self.x self.height += delta_height self.y = mouse_y self.geometry(f"{self.width}x{self.height}+{self.x}+{self.y}") def on_click(self, event): self.drag_point_x = event.x self.drag_point_y = event.y def on_motion(self, event): if event.x < self.drag_margin: if event.y < self.drag_margin: self.config(cursor="size_nw_se") self.drag_point = "nw" elif event.y > self.height - self.drag_margin: self.config(cursor="size_ne_sw") self.drag_point = "sw" else: self.config(cursor="size_we") self.drag_point = "w" elif event.x > self.width - self.drag_margin: if event.y < self.drag_margin: self.config(cursor="size_ne_sw") self.drag_point = "ne" elif event.y > self.height - self.drag_margin: self.config(cursor="size_nw_se") self.drag_point = "se" else: self.config(cursor="size_we") self.drag_point = "e" elif event.y < self.drag_margin: self.config(cursor="size_ns") self.drag_point = "n" elif event.y > self.height - self.drag_margin: self.config(cursor="size_ns") self.drag_point = "s" else: self.config(cursor="") self.drag_point = None if __name__ == "__main__": box = Box() box.mainloop() UPDATE: Here's code based on OneMadGypsy's solution that adds multiple monitor support: import tkinter as tk from PIL import ImageGrab from screeninfo import get_monitors class Overlay(tk.Toplevel): def __init__(self, monitor, overlays): tk.Toplevel.__init__(self) self.b_x = 0 self.b_y = 0 self.e_x = 0 self.e_y = 0 self.drag = False self.overlays = overlays self.monitor = monitor self.configure(cursor='cross') self.attributes('-alpha', 0.3) self.attributes('-topmost', True) self.overrideredirect(True) self.geometry(f"{self.monitor.width}x{self.monitor.height}+{self.monitor.x}+{self.monitor.y}") self.canvas = tk.Canvas(self, bg='dark gray') self.canvas.pack(fill='both', expand=True) self.canvas.bind('<B1-Motion>', self.on_drag) self.canvas.bind('<ButtonPress-1>', self.on_press) self.canvas.bind('<ButtonRelease-1>', self.on_release) rect = dict( outline='#0052d6', fill='white', tags='snip_rect', width=2, ) self.canvas.create_rectangle(0, 0, 0, 0, **rect) def on_press(self, event): self.b_x = event.x self.b_y = event.y def on_drag(self, event): self.drag = True self.e_x = event.x self.e_y = event.y self.canvas.coords('snip_rect', self.b_x, self.b_y, self.e_x, self.e_y) def on_release(self, event): self.destroy() for window in self.overlays: if window: window.destroy() self.overlays.clear() if not self.drag: return adjusted_b_x = self.b_x + self.monitor.x adjusted_b_y = self.b_y + self.monitor.y adjusted_e_x = self.e_x + self.monitor.x adjusted_e_y = self.e_y + self.monitor.y image = ImageGrab.grab(bbox=(min(adjusted_b_x, adjusted_e_x), min(adjusted_b_y, adjusted_e_y), max(adjusted_b_x, adjusted_e_x), max(adjusted_b_y, adjusted_e_y)), all_screens=True) image.show() class Snipper: def __init__(self): self.image = None self.overlays = [] def snip(self): monitors = get_monitors() for monitor in monitors: overlay = Overlay(monitor, self.overlays) self.overlays.append(overlay) if __name__ == '__main__': class App(tk.Tk): def __init__(self, **kwargs): tk.Tk.__init__(self, **kwargs) self.image = None self.windows = [] self.snipper = Snipper() tk.Button(self, text='snip', width=50, command=self.snipper.snip).pack() app = App() app.mainloop() | Now that you have made it clear what you actually want to do, may I suggest this method. The short explanation is: When "snip" button is pressed, the entire screen is taken over by a semi-transparent tk.Toplevel with a tk.Canvas child. You drag out a square on the canvas (ie, no window resizing), and release to lock it in. Then click "save" to save the image within the rect you just drew. It doesn't have "after-the-fact" controls for adjustment but, it works very smooth and emulates much of what you explained in the comments. Use this code all you like and in any way you like. Please note that I wrote this like 5 years ago. I see things I would definitely do differently now, but it's all "code style" related. I don't see any major refactors here. import cv2 import numpy as np import tkinter as tk from os import mkdir, path, getcwd from time import strftime from PIL import ImageGrab class Button(tk.Button): def __init__(self, master, text, command, row, column, **kwargs): self.default = {**dict( foreground = 'gray60', background = 'gray10', activebackground = 'gray60', activeforeground = 'gray10', font = 'Helvetica 28 bold', padx = 10, pady = 10, border = 2, ), **kwargs} tk.Button.__init__(self, master, text=text, command=command, **self.default) self.grid(row=row, column=column, sticky='nswe') self.hover = self.default.copy() self.hover['background'] = 'steel blue' self.hover['foreground'] = 'black' self.bind("<Enter>", self.on_enter) self.bind("<Leave>", self.on_leave) def on_enter(self, event): if self['state'] != 'disabled': self.configure(**self.hover) def on_leave(self, event): self.configure(**self.default) class ScreenSnip(tk.Toplevel): def __init__(self, master): tk.Toplevel.__init__(self, master) #config toplevel self.configure(cursor='cross') self.attributes('-fullscreen', True) self.attributes('-alpha', 0.2) #init vars self.b_x = 0 self.b_y = 0 self.e_x = 0 self.e_y = 0 self.drag= False #create and config canvas self.canvas = tk.Canvas(self, bg='dark gray') self.canvas.pack(fill='both', expand=True) self.canvas.bind('<B1-Motion>', self.on_drag) self.canvas.bind('<ButtonPress-1>', self.on_press) self.canvas.bind('<ButtonRelease-1>', self.on_release) rect = dict( outline = '#0052d6', fill = 'white', tags = 'snip_rect', width = 2, ) self.canvas.create_rectangle(0, 0, 0, 0, **rect) def on_press(self, event): #store initial click coordinates self.b_x = event.x self.b_y = event.y def on_drag(self, event): #gather drag coordinates and tag current rect in canvas self.drag = True self.e_x = event.x self.e_y = event.y self.canvas.coords('snip_rect', self.b_x, self.b_y, self.e_x, self.e_y) def on_release(self, event): self.destroy() #if the user didn't drag, grab an 800x600 image with mouse position as center if not self.drag: self.b_x -= 400 self.b_y -= 300 self.e_x = self.b_x + 800 self.e_y = self.b_y + 600 #get image based on drag coordinates self.master.image = ImageGrab.grab(bbox=(min(self.b_x, self.e_x), min(self.b_y, self.e_y), max(self.b_x, self.e_x), max(self.b_y, self.e_y))) cv2.imshow('Captured Image Preview', cv2.cvtColor(np.array(self.master.image), cv2.COLOR_BGR2RGB)) class App(tk.Tk): def __init__(self, **kwargs): tk.Tk.__init__(self, **kwargs) #for storing a snip reference self.image = None #create directory to store snips, if it doesn't exist d = path.join(getcwd(), 'snips/') if not path.isdir(d): mkdir(d) #snip button Button(self, 'Snip', self.snip, 0, 0) #save button self.save_btn = Button(self, 'Save', self.save, 0, 1) self.save_btn.configure(state='disabled') def snip(self): self.save_btn.configure(state='normal') #enable save button ScreenSnip(self) #start snipper def save(self): cv2.destroyAllWindows() #destroy image preview #save image to disk and reset reference if self.image: self.image.save(f'snips/snip_{strftime("%a_%b_%d_%Y_%I_%M_%S")}.png') self.image = None if __name__ == '__main__': app = App() app.title("Snip-N-Save") app.resizable(width=False, height=False) app.mainloop() It may be notable that you could use a hybrid version of your method and my method. You just need to adjust my code so you can adjust the rect. You aren't likely to get a bunch of "edge jumping" with a Canvas rect, and in the long run you don't need an entire window. You just need an adjustable rect. This could be that. If you don't like having a haze over the entire window while you make a selection, do this: class ScreenSnip(tk.Toplevel): def __init__(self, master): tk.Toplevel.__init__(self, master) #this ~ you can use any color. I chose this one because it is not likely to be used with any seriousness. self.wm_attributes("-transparentcolor", '#000001') self['background'] = '#000001' ... #and this self.canvas = tk.Canvas(self, bg='#000001') From this example, everything that is the color '#000001' will be transparent clean down to the desktop. Make sure your rect doesn't use that color and voila', you now have something that looks like what you already have, but it should work much better because you are just moving a rect around on a Canvas. Keep in mind that you will still have a tk.Toplevel covering your entire screen while in "select mode". In other words ... ::click, click, click:: "Why isn't my mouse working?" is in your future if you don't keep this in mind. | 3 | 1 |
77,162,718 | 2023-9-23 | https://stackoverflow.com/questions/77162718/pandas-dataframe-style-format-not-printing-specified-precision | I am trying to format the dataframe using https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.format.html. But, I am not getting the desired result: >>> df = pd.DataFrame([[np.nan, 1.534231, 'A'], [2.32453251, np.nan, 3.0]]) >>> df.style.format(na_rep='MISS', precision=3) <pandas.io.formats.style.Styler object at 0x7f0b32eff740> >>> print(df.head()) 0 1 2 0 NaN 1.534231 A 1 2.324533 NaN 3.0 >>> print(pd.__version__) 1.5.3 with python --version Python 3.12.0rc3 What am I doing wrong here? | Styler.format is used to apply as specific formatting to a pandas DataFrame. So the data present in the dataframe doesn't change but when you display it in a specific environment, the styling will be applied to the data. So when you write the following code, >>> df = pd.DataFrame([[np.nan, 1.534231, 'A'], [2.32453251, np.nan, 3.0]]) >>> df.style.format(na_rep='MISS', precision=3) <pandas.io.formats.style.Styler object at 0x7f0b32eff740> Notice the last line, where a Styler object is returned. You can hold this Styler object in another variable as follows. Now, to display this formatted df, let's make the environment into a HTML web browser environment. As both Styler and pandas.DataFrame have a to_html() method, lets convert the df into html code and view into in the web browser. >>> import webbrowser >>> with open('str.html', 'w') as f: s=df.style.format(na_rep='MISS', precision=3) s.to_html(f) >>> webbrowser.open_new_tab('str.html') True The above code pops up a new tab with the page 'str.html' stored locally, where you can easily view the formatted df. | 3 | 2 |
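If the goal is only a formatted console print rather than styled HTML, DataFrame.to_string accepts na_rep and float_format directly; a short sketch on the question's frame (the underlying data is not modified):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[np.nan, 1.534231, 'A'], [2.32453251, np.nan, 3.0]])

# Plain-text rendering with 3-digit floats and a custom NaN marker.
print(df.to_string(na_rep='MISS', float_format='{:.3f}'.format))
```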
77,158,832 | 2023-9-22 | https://stackoverflow.com/questions/77158832/dash-suppress-error-a-nonexistent-object-was-used-in-an-input-of-a-dash-callb | I have a complex Dash app where different layouts are displayed depending on some inputs. These output layouts (created in callbacks) contain some inputs. Because those inputs are not defined yet in the layout, I have the error "A nonexistent object was used in an Input of a Dash callback." Is there a way nowadays to get rid of this error other than playing with the CSS style "visibility" to make them exist in the HTML code, but invisible to the user? I tried to add suppress_callback_exceptions=True when creating the app but it is not working. I am using Dash v2.9.3. from dash import dcc, Dash, html, Input, Output, callback app = Dash(__name__, suppress_callback_exceptions=True) app.layout = html.Div( children=[ dcc.Store(id="store_filters", storage_type="memory"), dcc.RadioItems( id="choice_filters", options=["filters1", "filters2", "filters3"], value="filters1", inline=True, ), html.Div(id="filters"), ] ) @callback(Output("filters", "children"), Input("choice_filters", "value")) def update_filters(choice_filters): if choice_filters == "filters1": layout = [dcc.Input(id="input1", type="text")] elif choice_filters == "filters2": layout = [dcc.Input(id="input2", type="text")] elif choice_filters == "filters3": layout = [dcc.Input(id="input3", type="text")] else: layout = [] return layout @callback( Output("store_filters", "data"), [ Input("input1", "value"), Input("input2", "value"), Input("input3", "value"), ], ) def filters_result(input1, input2, input3): filters = {"filter1": input1, "filter2": input2, "filter3": input3} print("Last filters entered : ") print(filters) return filters if __name__ == "__main__": app.run(debug=True, host="0.0.0.0", port=8222) | Building off of @Dmitry's answer (👍+1), I modified and extended a few aspects of the code: The functionality of the pattern-matching is printed live to the app for the sake of easier demonstration Using dcc.Store to persist the dynamically generated dcc.Input components' user-entered text values Contextualizing the code into a fully working demo app E.g., the following: from dash import dcc, Dash, html, Input, Output, callback, ALL, State app = Dash(__name__, suppress_callback_exceptions=True) app.layout = html.Div( children=[ dcc.Store(id="store_filters", storage_type="session", data={}), dcc.RadioItems( id="choice_filters", options=[ {"label": "Filter 1", "value": "Filter 1"}, {"label": "Filter 2", "value": "Filter 2"}, {"label": "Filter 3", "value": "Filter 3"}, ], value="Filter 1", inline=True, ), html.Div(id="filters"), html.Div(id="filters_output"), ], style={"margin": "10%", "textAlign": "center"}, ) @callback( Output("filters", "children"), Input("choice_filters", "value"), State("store_filters", "data"), ) def update_filters(choice_filters, stored_data): stored_value = stored_data.get(choice_filters, "") filter_input = dcc.Input( id={"type": "filter", "name": choice_filters}, type="text", placeholder=f"Enter value for {choice_filters}", value=stored_value, ) return [filter_input] @callback( Output("filters_output", "children"), Output("store_filters", "data"), Input({"type": "filter", "name": ALL}, "value"), State({"type": "filter", "name": ALL}, "id"), State("store_filters", "data"), prevent_initial_call=True, ) def filters_result(filter_values, filter_ids, stored_data): filters = { id_value["name"]: value for id_value, value in zip(filter_ids, filter_values) } for key, value in filters.items(): stored_data[key] = value filters_output = [ html.Div([html.H2(f"{key}:"), html.P(f"{value}")]) for key, value in filters.items() ] return filters_output, stored_data if __name__ == "__main__": app.run(debug=True, host="0.0.0.0", port=8222) results in: | 2 | 2 |
77,154,241 | 2023-9-22 | https://stackoverflow.com/questions/77154241/chunk-a-json-array-of-objects-until-each-array-item-is-of-byte-length-a-static | I have a list of dict that follow a consistent structure where each dict has a list of integers. However, I need to make sure each dict has a bytesize (when converted to a JSON string) less than a specified threshold. If the dict exceeds that bytesize threshold, I need to chunk that dict's integer list. Attempt: import json payload: list[dict] = [ {"data1": [1,2,3,4]}, {"data2": [8,9,10]}, {"data3": [1,2,3,4,5,6,7]} ] # Max size in bytes we can allow. This is static and a hard limit that is not variable. MAX_SIZE: int = 25 def check_and_chunk(arr: list): def check_size_bytes(item): return True if len(json.dumps(item).encode("utf-8")) > MAX_SIZE else False def chunk(item, num_chunks: int=2): for i in range(0, len(item), num_chunks): yield item[i:i+num_chunks] # First check if the entire payload is smaller than the MAX_SIZE if not check_size_bytes(arr): return arr # Lets find the items that are small and items that are too big, respectively small, big = [], [] # Find the indices in the payload that are too big big_idx: list = [i for i, j in enumerate(list(map(check_size_bytes, arr))) if j] # Append these items respectively to their proper lists item_append = (small.append, big.append) for i, item in enumerate(arr): item_append[i in set(big_idx)](item) # Modify the big items until they are small enough to be moved to the small_items list for i in big: print(i) # This is where I am unsure of how best to proceed. I'd like to essentially split the big dictionaries in the 'big' list such that it is small enough where each element is in the 'small' result. Example of a possible desired result: payload: list[dict] = [ {"data1": [1,2,3,4]}, {"data2": [8,9,10]}, {"data3": [1,2,3,4]}, {"data3": [5,6,7]} ] | As discussed in the comments, a simple approach to this task is to recursively split the input list until the output dict meets the size requirement. This will give more evenly sized lists in the output, but may result in more dicts than are absolutely necessary (and would be produced by one of the length accumulation approaches). import json def split_list_dict(dl, limit): def split_dict_list(dd, limit): def json_len(ll): return sum(map(len, map(str, ll))) + 2 * len(ll) # 2 * len(ll) allows for [] and , ll = next(iter(dd.values())) key = next(iter(dd.keys())) dict_jsonlen = len(json.dumps(dd)) if dict_jsonlen <= limit: yield dd return list_jsonlen = json_len(ll) keylen = dict_jsonlen - list_jsonlen split_point = len(ll) // 2 yield from split_dict_list({ key : ll[:split_point] }, limit) yield from split_dict_list({ key : ll[split_point:] }, limit) for dd in dl: yield from split_dict_list(dd, limit) MAX_SIZE: int = 25 payload: list[dict] = [ {"data1": [1, 2, 3, 4]}, {"long_data_name": [1, 2, 3, 4]}, {"data3": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]}, {"data4": [100, 200, -1, -10, 200, 300, 12, 13]}, ] print(list(split_list_dict(payload, MAX_SIZE))) Output: [ {'data1': [1, 2, 3, 4]}, {'long_data_name': [1]}, {'long_data_name': [2]}, {'long_data_name': [3]}, {'long_data_name': [4]}, {'data3': [1, 2, 3]}, {'data3': [4, 5, 6]}, {'data3': [7, 8, 9]}, {'data3': [10, 11, 12]}, {'data4': [100, 200]}, {'data4': [-1, -10]}, {'data4': [200, 300]}, {'data4': [12, 13]} ] | 2 | 1 |
77,152,114 | 2023-9-21 | https://stackoverflow.com/questions/77152114/for-a-function-decorator-how-can-a-single-optional-kwarg-item-be-type-constrain | I have a decorator that looks for an optional kwargs item, foo, with all other *args and **kwargs passed through to the decorated function. I want the typing hints to specify which type that specific item must be, if it is present. For the following snippet, how can I create the typing partial constraint on FnParams such that IFF foo item is present, then its type MUST be str ? from typing import Callable, ParamSpec, TypeVar, TypedDict, Required, NotRequired, Unpack, cast, Any FnRet = TypeVar("FnRet") # how can this be partially constrained to _required_ type `str`, # if `foo` item is present? FnParams = ParamSpec("FnParams") FnType = Callable[FnParams, FnRet] def decorator(fn: FnType) -> FnType: def wrapper(*args: FnParams.args, **kwargs: FnParams.kwargs) -> FnRet: # type: ignore[type-var] if "foo" in kwargs: assert type(kwargs["foo"]) == str return cast(FnRet, fn(*args, **kwargs)) return wrapper ## Should pass: @decorator def ok_1() -> int: return 42 class RequiredFooKwArgs(TypedDict): foo: Required[str] @decorator def ok_required_foo(*_args: Any, **kwargs: Unpack[RequiredFooKwArgs]) -> int: return 42 class NotRequiredFooKwArgs(TypedDict): foo: NotRequired[str] @decorator def ok_not_required_foo(*_args: Any, **kwargs: Unpack[NotRequiredFooKwArgs]) -> int: return 42 ## The Badies: class BadTypeRequiredFooKwArgs(TypedDict): foo: Required[int] # <--- this should be flagged as the wrong type @decorator def bad_type_required_foo(*_args: Any, **kwargs: Unpack[BadTypeRequiredFooKwArgs]) -> int: return 42 class BadTypeNotRequiredFooKwArgs(TypedDict): foo: NotRequired[int] # <--- this should be flagged as the wrong type @decorator def bad_type_not_required_foo(*_args: Any, **kwargs: Unpack[BadTypeNotRequiredFooKwArgs]) -> int: return 42 When I type check this code with mypy 1.5.1, I get: mypy --enable-incomplete-feature=Unpack ~/tmp.py Success: no issues found in 1 source file What I'd like is to augment FnParams with a foo field type constraint and then have some python type-checker fail the static analysis. | It appears this isn't available yet: https://peps.python.org/pep-0612/#concatenating-keyword-parameters However, there is room for this in the future. | 3 | 0 |
77,153,141 | 2023-9-21 | https://stackoverflow.com/questions/77153141/why-variable-assignment-behaves-differently-in-python | I am new to Python and I am quite confused about the following code: Example 1: n = 1 m = n n = 2 print(n) # 2 print(m) # 1 Example 2: names = ["a", "b", "c"] visitor = names names.pop() print(names) # ['a', 'b'] print(visitor) # ['a', 'b'] Example 1 shows that n is 1 and m is 1. However, example 2 shows that names is ['a', 'b'], and visitor is also ['a', 'b']. To me, example 1 and example 2 are similar, so I wonder why the results are so different? Thank you. | Integers are not mutable. So the only way to change a variable that is assigned to an integer is through assignment. When you assign m=n, m is assigned to the same address that n currently is assigned. Both m and n refer to the address of the integer 1. m will continue to point to 1 even if n is assigned to a different value be it an integer or something else entirely. This behavior based on assignment is common to both mutable and immutable types. By contrast, lists are mutable so you can modify by means that are not only assignment. When you assign visitor = names, visitor points to the same list that names is assigned to, just like the above case. And were you to change the value of names through assignment visitor would not be impacted. The difference emerges if you mutate the list. If you call names.pop() for example this mutates the list and visitor (and any other variable that was assigned to this list) will see the mutation. In your case you mutated the list with a pop operation, there are others which we won't go into here. | 2 | 1 |
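A short illustrative snippet (not part of the original answer) that makes the aliasing visible with id() and is:

```python
n = 1
m = n
print(id(m) == id(n))    # True: both names point at the same int object
n = 2                    # rebinding n does not touch m
print(m)                 # 1

names = ["a", "b", "c"]
visitor = names
print(visitor is names)  # True: two names, one list object
names.pop()              # mutation is visible through every name
print(visitor)           # ['a', 'b']

copy = names[:]          # an actual copy breaks the link
names.pop()
print(copy)              # ['a', 'b']  (unaffected by the second pop)
```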
77,154,762 | 2023-9-22 | https://stackoverflow.com/questions/77154762/fixing-overlapping-time-tick-labels-in-matplotlib-for-a-pandas-dataframe-plot | I have a DataFrame named airtep_df with columns 'AIRTEP' (Int64), 'AIRTEP_qc' (Int64), and 'UTCTime'. I'm trying to create a plot from this data, but the time tick labels on the x-axis are overlapping, making the plot unreadable. I attempted to rotate the tick labels using plt.xticks(rotation=45), but the plot disappeared. Can someone explain what went wrong and provide a solution for formatting the time tick labels with datetime correctly? This picture shows the details about the data frame I'm using: This is the script I used: import matplotlib.pyplot as plt import pandas as pd index = (airtep_df['AIRTEP'] != 9999) & (airtep_df['AIRTEP_qc'] == 1) selected_y = airtep_df.loc[index,'AIRTEP'] y = selected_y.values selected_x = time_df.loc[index,'UTCTime'] x = selected_x.values plt.plot(x[30],y[30]) #This is the result I have, the the x-axis are overlapping: #Then I trid to rotate the xticks plt.xticks(rotation=45) This is the result I have, as the picture: How can I plot datetime data and format it such that the labels are not overlapping? | Datetime-formatted rotated x-axis tick labels You could try something like: import numpy as np import pandas as pd import matplotlib.dates as mdates import matplotlib.pyplot as plt ### Sample data generation date_rng = pd.date_range(start="2016-01-01", end="2016-01-02", freq="H") temps = np.random.randint(17, 41, size=(len(date_rng))) qc_values = np.ones(len(date_rng), dtype=int) airtep_df = pd.DataFrame( {"UTCTime": date_rng, "AIRTEP": temps, "AIRTEP_qc": qc_values} ) print(airtep_df) ### Plotting plt.figure(dpi=300) plt.plot(airtep_df.UTCTime, airtep_df.AIRTEP, marker="o", linestyle="-") # Rotating the x-tick labels plt.xticks(rotation=45) # Adjusting the date format ax = plt.gca() ax.xaxis.set_major_formatter(mdates.DateFormatter("%H:%M")) plt.tight_layout() plt.xlabel("Time") plt.ylabel("Temperature (℃)") plt.grid(True, which="both", linestyle="--", linewidth=0.5) plt.show() which produces: UTCTime AIRTEP AIRTEP_qc 0 2016-01-01 00:00:00 30 1 1 2016-01-01 01:00:00 27 1 2 2016-01-01 02:00:00 37 1 3 2016-01-01 03:00:00 36 1 4 2016-01-01 04:00:00 18 1 5 2016-01-01 05:00:00 26 1 6 2016-01-01 06:00:00 27 1 7 2016-01-01 07:00:00 20 1 8 2016-01-01 08:00:00 27 1 9 2016-01-01 09:00:00 30 1 10 2016-01-01 10:00:00 37 1 11 2016-01-01 11:00:00 32 1 12 2016-01-01 12:00:00 29 1 13 2016-01-01 13:00:00 29 1 14 2016-01-01 14:00:00 23 1 15 2016-01-01 15:00:00 24 1 16 2016-01-01 16:00:00 37 1 17 2016-01-01 17:00:00 19 1 18 2016-01-01 18:00:00 19 1 19 2016-01-01 19:00:00 26 1 20 2016-01-01 20:00:00 35 1 21 2016-01-01 21:00:00 29 1 22 2016-01-01 22:00:00 24 1 23 2016-01-01 23:00:00 21 1 24 2016-01-02 00:00:00 39 1 | 3 | 1 |
77,154,695 | 2023-9-22 | https://stackoverflow.com/questions/77154695/are-comparisons-really-allowed-to-return-numbers-instead-of-booleans-and-why | I've found a surprising sentence in the Python documentation under Truth Value Testing: Operations and built-in functions that have a Boolean result always return 0 or False for false and 1 or True for true, unless otherwise stated. Relational operators seem to have no excepting statements, so IIUC they could return 0 and 1 instead of False and True on values of some built-in types (e.g. 7 < 3), even nondeterministically. Thus, in order to satisfy a specification requiring of my code to produce values of type bool or for defensive programming (whenever that's important), should I wrap logical expressions in calls to bool? Additional question: why does this latitude exist? Does it make things easier somehow for CPython or another implementation? EDIT The question has been answered and I've accepted, but I'd like to add that in PEP 285 – Adding a bool type I've found the following statements: All built-in operations that conceptually return a Boolean result will be changed to return False or True instead of 0 or 1; for example, comparisons, the “not” operator, and predicates like isinstance(). All built-in operations that are defined to return a Boolean result will be changed to return False or True instead of 0 or 1. In particular, this affects comparisons (<, <=, ==, !=, >, >=, is, is not, in, not in), the unary operator ‘not’, the built-in functions callable(), hasattr(), isinstance() and issubclass(), the dict method has_key(), the string and unicode methods endswith(), isalnum(), isalpha(), isdigit(), islower(), isspace(), istitle(), isupper(), and startswith(), the unicode methods isdecimal() and isnumeric(), and the ‘closed’ attribute of file objects. The predicates in the operator module are also changed to return a bool, including operator.truth(). The only thing that changes is the preferred values to represent truth values when returned or assigned explicitly. Previously, these preferred truth values were 0 and 1; the PEP changes the preferred values to False and True, and changes built-in operations to return these preferred values. However, PEPs seem to be less authoritative than the documentation (of which the language and library references are the main parts) and there are numerous deviations from PEPs (many of them mentioned explicitly). So until the team updates it, the stronger guarantees of the PEP aren't to be trusted, I think. EDIT I've reported it on GH as a Python issue | So, it is probably important to understand that bool objects are int objects, since issubclass(bool, int) is true. So isinstance(True, int) and isinstance(False, int) is true. Prior to version 2.3 which was released back in 2002, Python lacked a bool type. PEP 285 was the accepted proposal. Prior to this, these operations would return 0 or 1. You can read in "WhatsNew" for 2.3: Most of the standard library modules and built-in functions have been changed to return Booleans. So, I can only surmise that the language was kept to include 0 and 1 while the standard library etc caught up with this change. But by now, I think everything in built-ins and every part of the standard library I've used will return bool when the return value is meant to be a boolean. But keep in mind, from the release notes: To sum up True and False in a sentence: they’re alternative ways to spell the integer values 1 and 0, with the single difference that str() and repr() return the strings 'True' and 'False' instead of '1' and '0'. | 3 | 5 |
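A few quick interpreter checks that back up the bool-is-an-int point; safe to paste into any Python 3 REPL:

```python
print(issubclass(bool, int))   # True
print(isinstance(True, int))   # True
print(type(7 < 3))             # <class 'bool'>, comparisons return bool
print(True == 1, False == 0)   # True True
print(True + True)             # 2, bools still behave as ints in arithmetic
```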
77,153,903 | 2023-9-21 | https://stackoverflow.com/questions/77153903/finding-the-largest-power-of-10-that-divides-evenly-into-a-list-of-numbers | I'm trying to scale down a set of numbers to feed into a DP subset-sum algorithm. (It blows up if the numbers are too large.) Specifically, I need to find the largest power of 10 I can divide into the numbers without losing precision. I have a working routine but since it will run often in a loop, I'm hoping there's a faster way than the brute force method I came up with. My numbers happen to be Decimals. from decimal import Decimal import math def largest_common_power_of_10(numbers: list[Decimal]) -> int: """ Determine the largest power of 10 in list of numbers that will divide into all numbers without losing a significant digit left of the decimal point """ min_exponent = float('inf') for num in numbers: if num != 0: # Count the number of trailing zeros in the number exponent = 0 while num % 10 == 0: num //= 10 exponent += 1 min_exponent = min(min_exponent, exponent) # The largest power of 10 is 10 raised to the min_exponent return int(min_exponent) decimal_numbers = [Decimal("1234"), Decimal("5000"), Decimal("200")] result = largest_common_power_of_10(decimal_numbers) assert(result == 0) decimal_numbers = [Decimal(470_363_000.0000), Decimal(143_539_000.0000), Decimal(1_200_000.0000)] result = largest_common_power_of_10(decimal_numbers) assert(result == 3) divisor = 10**result # Later processing can use scaled_list scaled_list = [x/divisor for x in decimal_numbers] assert(scaled_list == [Decimal('470363'), Decimal('143539'), Decimal('1200')]) reconstituted_list = [x * divisor for x in scaled_list] assert(reconstituted_list == decimal_numbers) | This can be done all at once with the math library if your list is all integers. import math def largest_common_power_of_10(numbers): # get the greatest power of 10 that divides all numbers in list gcd_nums = math.gcd(*numbers) gcd = [x for x in range(1, len(str(gcd_nums))) if gcd_nums % math.pow(10, x) == 0] if len(gcd) == 0: return 0 else: gcp = max(gcd) return gcp Examples: numbers = [11230000,125,44500000] largest_common_power_of_10(numbers) # 0 numbers = [11230000,1540,44500000] largest_common_power_of_10(numbers) # 1 numbers = [11230000,1540000,44500000] largest_common_power_of_10(numbers) # 4 Note: This may only work with python 3.9 and greater because math.gcd() starrted accepting lists in that release. | 4 | 3 |
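An equivalent way to express the answer's idea is to count the trailing zeros of the single gcd instead of testing successive powers of 10; a sketch for lists of integers (the zero-gcd guard is an addition not present in the original):

```python
import math

def largest_common_power_of_10(numbers):
    g = math.gcd(*numbers)               # Python 3.9+ accepts multiple arguments
    if g == 0:                           # all inputs were zero
        return 0
    s = str(g)
    return len(s) - len(s.rstrip("0"))   # number of trailing zeros of the gcd

print(largest_common_power_of_10([11230000, 1540000, 44500000]))  # 4
print(largest_common_power_of_10([1234, 5000, 200]))              # 0
```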
77,153,334 | 2023-9-21 | https://stackoverflow.com/questions/77153334/how-to-revert-n-choose-2-combinations | I have a set of pairs that can contain the result of mutliple "N choose 2" combinations: inputs = { ('id1', 'id2'), ('id1', 'id3'), ('id1', 'id4'), ('id2', 'id3'), ('id2', 'id4'), ('id3', 'id4'), ('id3', 'id5'), ('id4', 'id5'), ('id5', 'id6'), } And I would like to reverse those combinations, like this: recombinations = [ ('id1', 'id2', 'id3', 'id4'), ('id3', 'id4', 'id5'), ('id5', 'id6'), ] I managed to do it using brute-force: ids = list(sorted( {i for i in itertools.chain(*inputs)} )) excludes = set() recombinations = {tuple(i) for i in map(sorted, inputs)} for i in range(3, len(ids)+1): for subset in itertools.combinations(ids, i): for j in range(i-1, len(subset)): combs = set(itertools.combinations(subset, j)) if all(tup in recombinations for tup in combs): recombinations.add(subset) excludes = excludes.union(combs) for tup in excludes: recombinations.remove(tup) print(recombinations) {('id1', 'id2', 'id3', 'id4'), ('id3', 'id4', 'id5'), ('id5', 'id6')} Is there a smarter way to do it? Or some optimizations that I can add to the code? | Using the networkx library, this is quite simple, since it has a function that does exactly what you're asking for: finding all maximal cliques in a graph. Here: import networkx as nx G = nx.Graph() pairs_of_connected_nodes = [ ('id1', 'id2'), ('id1', 'id3'), ('id1', 'id4'), ('id2', 'id3'), ('id2', 'id4'), ('id3', 'id4'), ('id3', 'id5'), ('id4', 'id5'), ('id5', 'id6') ] G.add_edges_from(pairs_of_connected_nodes) maximal_cliques = list(nx.find_cliques(G)) for clique in maximal_cliques: print(clique) Output: ['id3', 'id4', 'id2', 'id1'] ['id3', 'id4', 'id5'] ['id6', 'id5'] Of course if your assignment is to implement the Bron-Kerbosch algorithm yourself, you'd have to code it up - but then asking on StackOverflow kind of defeats the purpose, unless you have a specific problem with your solution that you need help with? If you're just asking for a review of your code, ask on Code Review Stack Exchange, but expect to get told to use networkx as well. | 3 | 5 |
77,144,665 | 2023-9-20 | https://stackoverflow.com/questions/77144665/how-can-i-run-python-as-root-or-sudo-while-still-using-my-local-pip | pip is recommended to be run under local user, not root. This means if you do sudo python ..., you will not have access to your Python libs installed via pip. How can I run Python (or a pip installed bin/ command) under root / sudo (when needed) while having access to my pip libraries? | So the issue you're likely having is that you have pip installed the packages you want to a user specific site-packages directory, which then isn't being included in the sys.path module search list generated by sites.py when Python starts up for the root user. There are a couple of workaround for this to add the additional paths to be searched for modules. Use the PYTHONPATH environment variable. This should be set to semi-colon delimited and terminated list of paths. Directly modify sys.path at the start of your script before running any other imports. I often see this proposed on SO as a workaround around by people who haven't really grasped absolute vs relative imports. I have an example here using both methods. I'm on Windows in a virtual environment, but the principles would be the same in Linux. Setup: (venv) ~/PythonVenv/playground311 (main) $ export PYTHONPATH="C:\Path\From\PythonPath\;E:\Second\PythonPath\Path\;" Start of script: import sys # Insert at position 1 as position 0 is this script's directory. sys.path.insert(1, r"C:\Path\Inserted\By\Script") for path in sys.path: print(path) Output c:\Users\nighanxiety\PythonVenv\playground311 C:\Path\Inserted\By\Script C:\Path\From\PythonPath E:\Second\PythonPath\Path C:\Users\nighanxiety\PythonVenv\playground311 C:\python311\python311.zip C:\python311\DLLs C:\python311\Lib C:\python311 C:\Users\nighanxiety\PythonVenv\playground311\venv C:\Users\nighanxiety\PythonVenv\playground311\venv\Lib\site-packages If I weren't in a virtual environment, the last few entries are different: C:\Users\nighanxiety\PythonVenv\playground311 C:\Path\Inserted\By\Script C:\Path\From\PythonPath E:\Second\PythonPath\Path C:\Users\nighanxiety\PythonVenv\playground311 C:\Python311\python311.zip C:\Python311\Lib C:\Python311\DLLs C:\Python311 C:\Python311\Lib\site-packages I strongly recommend trying a virtual environment and using pip to install the desired packages there first, as Sauron suggests in his answer. The root user should still correctly use the virtual environment paths as long as the environment is activated. See How does activating a python virtual environment modify sys.path for more info. If that doesn't work, then configuring PYTHONPATH with the correct absolute path to your site-packages should be a better option than hacking sys.path. Modifying sys.path makes sense if a specific override is needed for that specific script, but is otherwise a bit of a hack. | 5 | 2 |
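One quick way to see why a module goes missing under sudo is to compare the interpreter's search state in both runs; a tiny stdlib-only diagnostic sketch (the file name diag.py is just an example):

```python
# diag.py -- run as `python diag.py` and again as `sudo python diag.py`,
# then compare the output: a different executable or a missing user
# site-packages entry explains the "module not found" errors.
import sys
import site

print("executable :", sys.executable)
print("user site  :", site.getusersitepackages())
print("sys.path   :")
for p in sys.path:
    print("   ", p)
```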
77,153,150 | 2023-9-21 | https://stackoverflow.com/questions/77153150/how-to-select-first-n-key-ordered-values-of-column-within-a-grouping-variable-in | I have a dataset: import pandas as pd data = [ ('A', 'X'), ('A', 'X'), ('A', 'Y'), ('A', 'Z'), ('B', 1), ('B', 1), ('B', 2), ('B', 2), ('B', 3), ('B', 3), ('C', 'L-7'), ('C', 'L-9'), ('C', 'L-9'), ('T', 2020), ('T', 2020), ('T', 2025) ] df = pd.DataFrame(data, columns=['ID', 'SEQ']) print(df) I want to create a key grouping ID and SEQ in order to select the first 2 rows of each different SEQ within each ID Group For instance the ID A, has 3 distinct keys "A X", "A Y" and "A Z" in the order of the dataset the first two keys are "A X" and "A Y" thus I must select the first two rows (if available) of each thus "A X", "A X", "A Y" why? because "A Z" is another key. I've tried using the groupby and head functions, but I couldn't find a way to achieve this specific result. What can I try next? (df .groupby(['ID','SEQ']) .head(2) ) This code is returning the original dataset and I wonder if I can solve this problem using method chaining, as it is my preferred style in Pandas. The final correct output is: | Your approach of using groupby and then head(2) is on the right track for getting the first 2 rows of each different SEQ within each ID group. However, the additional requirement is to get only the first 2 unique SEQ groups within each ID. To achieve this, you can: Create a new column that has the rank of unique SEQ within each ID group. Use this rank to filter out the data. Finally, use your original approach to get the first 2 rows of each SEQ within each ID group. Here's a solution using method chaining: result = (df .assign(rank=df.groupby('ID')['SEQ'].transform(lambda x: x.rank(method='dense'))) .query('rank <= 2') .groupby(['ID', 'SEQ']) .head(2) .drop(columns=['rank']) ) print(result) This should give you the desired output. | 2 | 1 |
77,152,143 | 2023-9-21 | https://stackoverflow.com/questions/77152143/is-it-possible-for-a-snowpark-stored-procedure-to-produce-a-dataframe-as-its-out | I want to create an snowflake stored procedure using snowpark. Inside of the procedure I will create some dataframe and performs some operations like filter, sort, join etc. At the end I just want to return that "dataframe", does snowpark supports that? Can someone provide an example of how to achieve this? | It is supported - Writing Snowpark Code in Python Worksheets Python Worksheet Run + Deploy Code is wrapped as Stored Procedure and ready to be used Call in SQL CALL ProcName(); Call in Python | 2 | 4 |
77,152,147 | 2023-9-21 | https://stackoverflow.com/questions/77152147/how-to-find-the-position-of-same-rows-from-one-2d-array-to-another-2d-array | import numpy as np # Create two sample dataframes df1 = np.array([[0.000000,0.000000,0.000000], [0.090000,0.000000,0.000000], [0.190000,0.000000,0.000000], [0.280000,0.000000,0.000000], [0.380000,0.000000,0.000000], [0.470000,0.000000,0.000000], [0.570000,0.000000,0.000000], [0.660000,0.000000,0.000000], [0.760000,0.000000,0.000000], [0.850000,0.000000,0.000000]]) df2 = np.array([[0.470000,0.000000,0.000000], [0.570000,0.000000,0.000000], [0.660000,0.000000,0.000000], [0.760000,0.000000,0.000000], [0.850000,0.000000,0.000000] ]) df3 = np.where(np.isclose(df1[:, np.newaxis], df2))[0] print(df3) I want to find the postion of df2 in df1 and the correct answer is [5, 6, 7, 8, 9] but the python output is [0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 4 4 4 4 4 4 4 4 4 4 5 5 5 5 5 5 5 5 5 5 5 6 6 6 6 6 6 6 6 6 6 6 7 7 7 7 7 7 7 7 7 7 7 8 8 8 8 8 8 8 8 8 8 8 9 9 9 9 9 9 9 9 9 9 9] which does not make sense. How can I change the code to find the postion of df2 in df1 correctly? | If you only want to match the non-zeros, you can remove them: np.where(np.isclose(np.where(df1!=0, df1, np.nan)[:, None], df2))[0] Output: array([5, 6, 7, 8, 9]) If you want a full match, including zeros (row per row), you can use: out = np.where(np.isclose(df1[:, None], df2).all(2))[0] Output: array([5, 6, 7, 8, 9]) | 4 | 0 |
77,151,521 | 2023-9-21 | https://stackoverflow.com/questions/77151521/most-pythonic-method-of-conditionally-extracting-data-from-multiple-lists-of-dic | I'm trying to use two lists of dictionaries to build an object, the dictionaries are built from a TAP connection to two different databases. Because of the data sources I cannot guarantee that any of the dictionaries will contain the information I need, so I've chosen a primary dictionary and if the information is not in there then I extract it from the second dictionary. Because I'm pulling the data from two different data sources, the field names from TAP differ for both sources, so I can't just do an intersection of the dictionaries. At the moment I can get it to work, but I'm not happy with the solution: for result in eeuresults: name=result['target_name'] axes=result['semi_major_axis'] period=result['period'] radius=result['radius'] if math.isnan(period): for r in eparesults: if r['pl_name'] == name: period=r['pl_orbper'] if math.isnan(axes): for r in eparesults: if r['pl_name'] == name: axes=r['pl_orbsmax'] if math.isnan(radius): for r in eparesults: if r['pl_name'] == name: radius=r['pl_radj'] I've tried using dictionary.get() to make it simpler, but it falls down if the value isn't in the second list of dictionaries. axes=result.get('semi_major_axis',[r['pl_orbper'] for r in eparesults if r['pl_name']==name][0]) Edit: Solution Chosen I ended up using rioV8's solution to reduce code duplication, it's not the most efficient solution, but it is elegant, readable and expandable; which is what I really wanted. Full function is below: def getsystemdata(name, epaname=None): def search_epa(name, value, key): if math.isnan(value): for r in eparesults: if r['pl_name'] == name: return r[key] return value if epaname == None: epaname=name service=pyvo.dal.TAPService("https://exoplanetarchive.ipac.caltech.edu/TAP") eparesults=service.search(f"select pl_name, pl_radj, pl_orbper, pl_orbsmax from pscomppars where hostname = '{epaname}' ") service=pyvo.dal.TAPService("http://voparis-tap-planeto.obspm.fr/tap") eeuresults=service.search(f"select target_name, radius, period, semi_major_axis from exoplanet.epn_core where star_name = '{name}'") planets=[] for result in eeuresults: name=result['target_name'] axes=search_epa(name, result['semi_major_axis'], 'pl_orbsmax') period=search_epa(name, result['period'], 'pl_orbper') radius=search_epa(name, result['radius'], 'pl_radj') planets.append(Planet(name.split(' ')[-1], axes, period, radius)) return planets | remove code duplication def find_in_eparesults(name, value, epa_key): if math.isnan(value): for r in eparesults: if r['pl_name'] == name: value = r[epa_key] break return value for result in eeuresults: name=result['target_name'] axes=find_in_eparesults(name, result['semi_major_axis'], 'pl_orbper') period=find_in_eparesults(name, result['period'], 'pl_orbsmax') radius=find_in_eparesults(name, result['radius'], 'pl_radj') | 2 | 1 |
77,150,937 | 2023-9-21 | https://stackoverflow.com/questions/77150937/incorrect-fourier-coefficients-signs-resulting-from-scipy-fft-fft | I analysed a triangle wave using the Fourier series by hand, and then using the scipy.fft package. What I'm getting are the same absolute values of the coefficients, but with opposite signs. By hand: I took the interval [-1,1], calculated the integral to get a0 = 1, then a1, a3 and a5 using the result of integration: And all the bn are 0. Then I simply constructed the series, where the 1st term is a0/2, the second a1×cos(n Pi t/T) and so on, and plotted these waves, which summed give a good approximation of the original signal. The coefficients are: a0 = 0.5 a1 = -0.4053 a3 = -0.0450 a5 = -0.0162 Scipy.fft: I defined the sampling frequency fs=50, and created the space for the function with space=np.linspace(-1,1,2*fs). Then defined the fft1 = fft.fft(triang(space)), where "triang" is the function which generates the signal. I immediately scaled the results by dividing by the number of samples (100) and multiplying each term except the 0th by 2. Next were the frequencies freq1 = fft.fftfreq(space.shape[-1], d=1/fs). The resulting coefficients (the real parts pertaining to an) are: a0 = 0.5051 a1 = 0.4091 a3 = 0.0452 a5 = 0.0161 As you can see, the absolute values are correct, but the signs are not. What am I missing? And one bonus question if I may - when doing it this way (using scipy) and plotting each wave separately, should I add the phase angle from np.angle(fft1[n]) into each term, like a1×cos(n Pi t/T + theta) + b1×sin(n Pi t/T + theta)? I'd say yes, but in this example all the bn are 0, and so are the phase angles, so I couldn't really test it. | Note that when you call fft(), you don’t pass in the time axis (space). So how is the function supposed to know you sampled from -1 to 1? It doesn’t! It assumes you sampled at 0, 1, 2, 3, … N-1. This means that the FFT function sees the signal as shifted compared to your manual calculation, and a shift means the phase in the frequency domain changes. If you sample your triangle function from 0 to 2, instead of -1 to 1, you should get a matching phase. The difference between sampling with a frequency of 1, or any other frequency (ie samples not being at integer locations) is taken care of by fftfreq(), which just tells you the frequency that corresponds to each bin given your sampling frequency. | 2 | 4 |
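The shift effect described above is easy to verify numerically with a plain cosine of the same period, sampled once on [-1, 1) and once on [0, 2); a sketch using numpy.fft rather than the asker's triangle wave, showing the bin-1 coefficient flipping sign exactly as predicted:

```python
import numpy as np

fs = 50
k = np.arange(2 * fs)                       # 100 samples over a 2-second window
x_shifted = np.cos(np.pi * (k / fs - 1))    # sampled on [-1, 1), like the question
x_origin = np.cos(np.pi * (k / fs))         # sampled on [0, 2)

a1_shifted = 2 * np.fft.fft(x_shifted)[1].real / k.size
a1_origin = 2 * np.fft.fft(x_origin)[1].real / k.size

print(round(a1_shifted, 6))  # -1.0 : sign flipped, fft treats the first sample as t = 0
print(round(a1_origin, 6))   #  1.0 : matches the analytic coefficient of cos(pi*t)
```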
77,148,259 | 2023-9-21 | https://stackoverflow.com/questions/77148259/omitting-middle-characters-if-string-is-too-long-when-display-a-pandas-dataframe | When display a DataFrame in Jupyter Notebook, if the string value is too long, the last characters will be omitted: df = pd.DataFrame({'A':['ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZ']}) display(df) output: A 0 ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRST... I want to change the behavior and only ommit the middle characters of the string when it's too long: A 0 ABCDEFGHIJKLMNOPQRSTUVW...DEFGHIJKLMNOPQRSTUVWXYZ Is it possible? | From the documentation, you can build a custom formatter. def formatter(x): if isinstance(x, str): if len(x) > pd.options.display.max_colwidth: display_width = pd.options.display.max_colwidth split = int((display_width-3) / 2) if display_width % 2: # if odd number return x[:split] + "..." + x[-split:] else: return x[:(split+1)] + "..." + x[-split:] else: return x return None And use it as such: df.style.format(formatter) A 0 ABCDEFGHIJKLMNOPQRSTUVW...DEFGHIJKLMNOPQRSTUVWXYZ This is only a display format so the data is not changed. | 2 | 5 |
77,146,349 | 2023-9-20 | https://stackoverflow.com/questions/77146349/obtaining-grouped-max-or-min-in-pandas-without-skipping-nans | Consider a sample dateframe df = pd.DataFrame({'group' : [1, 2, 2], 'x' : [1, 2, 3], 'y' : [2, 3, np.nan]}) If I want to get the max value of variable 'y' without skipping NANs, I would use the function: df.y.max(skipna = False) The returned results is nan as expected However, if I want to calculate the grouped max value by 'group', as follows: df.groupby('group').y.max(skipna = False) I got an error message: TypeError: max() got an unexpected keyword argument 'skipna' Seems like the DataFrameGroupBy.max() does not have the argument to skip nas. What would be the best way to get the desired result? | You can try to apply pd.Series.max: x = df.groupby("group")["y"].apply(pd.Series.max, skipna=False) print(x) Prints: group 1 2.0 2 NaN Name: y, dtype: float64 | 3 | 3 |
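An equivalent spelling of the accepted answer with agg and a lambda, in case you prefer not to pass pd.Series.max around; it gives the same NaN-propagating result for group 2:

x = df.groupby("group")["y"].agg(lambda s: s.max(skipna=False))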
77,142,806 | 2023-9-20 | https://stackoverflow.com/questions/77142806/how-to-read-cursorresult-to-pandas-in-sqlalchemy | How does one convert a CursorResult-object into a Pandas Dataframe? The following code results in a CursorResult-object: from sqlalchemy.orm import Session from sqlalchemy import create_engine engine = create_engine(f"mssql+pyodbc://{db_server}/{db_name}?trusted_connection=yes&driver={db_driver}") q1 = "SELECT * FROM my_schema.my_table" with Session(engine) as session: results = session.execute(q1) session.commit() type(results) >sqlalchemy.engine.cursor.CursorResult As I couldn't find a way to extract relevant information from CursorResult, it attempted the following instead: # Extracting data as we go with Session(engine) as session: results = session.execute(q1) description = results.cursor.description rows = results.all() session.commit() # Extracting column names colnames = [elem[0] for elem in description] # Extracting types types = [elem[1] for elem in description] # Creating dataframe import pandas as pd pd.DataFrame(rows, columns=colnames) But what about the dtypes? It doesn't work if I just put them in, though it looks like they are all python types. For my use case I MUST use Session, so I cannot use the first suggestion of doing the classic: # I cannot use pandas.read_sql(q1, engine) The reason for this is that I have to do multi-batch queries within the same context, which is why I am using the Session class. | IIUC, just use pd.DataFrame constructor. dtypes are correctly set. # sqlalchemy==2.0.16 # pandas==2.0.2 from sqlalchemy.sql import text with Session(engine) as session: results = session.execute(text(q1)) df = pd.DataFrame(results) # session.commit() # commit is irrelevant if you don't write data Test on my database: >>> df.head() Scenario Attribute Process Period Region Vintage PV 0 WithHHP16HinsHE0CCS109LHP VAR_Cap EVTRANS_H-L 2014 FR None 296.071141 1 WithHHP16HinsHE0CCS109LHP VAR_Cap EVTRANS_H-M 2014 FR None 11.770909 2 WithHHP16HinsHE0CCS109LHP VAR_Cap IMPELCHIGA 2014 FR None 11851.674497 3 WithHHP16HinsHE0CCS109LHP VAR_Cap EVTRANS_H-L 2015 FR None 296.071141 4 WithHHP16HinsHE0CCS109LHP VAR_Cap EVTRANS_H-M 2015 FR None 11.770909 >>> df.dtypes Scenario object Attribute object Process object Period int64 Region object Vintage object PV float64 dtype: object Edit: rec = results.fetchone() >>> rec ('WithHHP16HinsHE0CCS109LHP', 'VAR_Cap', 'EVTRANS_H-L', 2014, 'FR', None, 296.071141357762) # python int --^ python float --^ >>> type(rec) sqlalchemy.engine.row.Row | 2 | 4 |
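For the multi-batch use case mentioned at the end of the question, the same pattern simply repeats inside one Session. This sketch assumes the engine built as in the question; the table names are made up for illustration:

import pandas as pd
from sqlalchemy import text
from sqlalchemy.orm import Session

with Session(engine) as session:  # engine created as in the question
    df_a = pd.DataFrame(session.execute(text("SELECT * FROM my_schema.table_a")))
    df_b = pd.DataFrame(session.execute(text("SELECT * FROM my_schema.table_b")))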
77,142,566 | 2023-9-20 | https://stackoverflow.com/questions/77142566/python-multiprocessing-imap-iterates-over-whole-itarable | In my code I am trying to achieve the following: I get each result as soon as any of the processes finish Next iteration must only be called whenever it is necessary (if it is converted into a list, I will have RAM issues) To my knowledge imap from multiprocessing module should be perfect for this task, but this code: import os import time def txt_iterator(): for i in range(8): yield i print('iterated', i) def func(x): time.sleep(5) return x if __name__ == '__main__': import multiprocessing pool = multiprocessing.Pool(processes=4) for i in pool.imap( func, txt_iterator() ): print('P2', i) pool.close() Has this output: iterated 0 iterated 1 ... iterated 7 # 5 second pause P2 0 P2 1 P2 2 P2 3 # 5 second pause P2 4 P2 5 P2 6 P2 7 Meaning that it iterates through the whole iterable and only then starts assigning tasks to processes. As far as I could find in the docs, this behavior is only expected from .map (the iteration part). The expected output is (may vary because they run concurrently, but you get the idea): iterated 0 ... iterated 3 # 5 second pause P2 0 ... P2 3 iterated 4 ... iterated 7 # 5 second pause P2 4 ... P2 7 I am sure that I am missing something here but in case I completely misunderstand how this function works, I would appreciate any alternative that will work as intended. | imap doesn't guarantee consuming the input iterator at the same pace the workers finish their tasks. You can use a threading.BoundedSemaphore (even if you're only using a single thread) to have the input generator wait until the for loop has consumed an item: import multiprocessing import threading import time def txt_iterator(sem: threading.BoundedSemaphore): for i in range(30): sem.acquire() yield i print("iterated", i) def func(x): print("starting work on", x) time.sleep(1) return x if __name__ == "__main__": sem = threading.BoundedSemaphore(4) pool = multiprocessing.Pool(processes=4) for i in pool.imap(func, txt_iterator(sem)): sem.release() print("P2", i) pool.close() | 2 | 3 |
77,142,174 | 2023-9-20 | https://stackoverflow.com/questions/77142174/use-both-put-and-post-methods-from-same-api-using-fastapi | I'm about to create an API using FastAPI wherein I have to search for 'user_name' in the db. If 'user_name' exists then I have to update the user_details. If the 'user_name' doesn't exist, then I have to create an entry for the user. In this case, I think both PUT and POST methods need to be applied to the same API endpoint. Is this possible? Can anyone brief me on how to do it? | Yes, it is possible. You just need to use a different decorator for the same path. from fastapi import FastAPI app = FastAPI() @app.put("/") async def put_root(): return {"message": "Hello World from put"} @app.post("/") async def post_root(): return {"message": "Hello World from post"} The code is oversimplified, but it shows the concept. | 3 | 6 |
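If you would rather have a single function serve both verbs (closer to the create-or-update wording of the question), FastAPI also provides the api_route decorator, which accepts a list of methods. A minimal sketch; the UserDetails model and the in-memory users dict are placeholders for the real schema and database:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
users = {}  # stand-in for the real database

class UserDetails(BaseModel):
    full_name: str
    email: str

@app.api_route("/users/{user_name}", methods=["PUT", "POST"])
async def upsert_user(user_name: str, details: UserDetails):
    created = user_name not in users   # decide whether this is a create or an update
    users[user_name] = details
    return {"user_name": user_name, "created": created}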
77,115,274 | 2023-9-15 | https://stackoverflow.com/questions/77115274/copy-a-dataframe-to-new-variable-with-method-chaining | Is it possible to copy a dataframe in the middle of a method chain to a new variable? Something like: import pandas as pd df = (pd.DataFrame([[2, 4, 6], [8, 10, 12], [14, 16, 18], ]) .assign(something_else=100) .div(2) .copy_to_new_variable(df_imag) # Imaginated method to copy df to df_imag. .div(10) ) print(df_imag) would then return: 0 1 2 something_else 0 1.0 2.0 3.0 50.0 1 4.0 5.0 6.0 50.0 2 7.0 8.0 9.0 50.0 .copy_to_new_variable(df_imag) could be replaced by df_imag = df.copy() but this would result in compromising the method chain. | Actually, this is what I was looking for. Check the link, the idea is from Matt Harrison (who wrote multiple books about pandas) for debugging of method chains. This way is also recommended in this great article 4 Pandas Anti-Patterns to Avoid and How to Fix Them by Aidan Cooper. import pandas as pd def to_df(df, name): globals()[name] = df.copy() return df df = (pd.DataFrame([[1, 2, 3], [10, 10, 10], ], columns=["A", "B", "C"] ) .set_index("C") .pipe(to_df, "df_imag") .sum() ) df_imag is then the intermediate dataframe as described in the question. In jupyter notebooks, if you would like to view the dataframe midway through the chain without interrupting the rest of the chain, you can use .pipe(lambda df_: display(df_) or df_), also explained in the mentioned article: import pandas as pd df = ( pd.DataFrame( [ [2, 4, 6], [8, 10, 12], [14, 16, 18], ] ) .assign(something_else=100) .div(2) .pipe(lambda df_: display(df_) or df_) .div(10) ) | 2 | 0 |
77,110,560 | 2023-9-15 | https://stackoverflow.com/questions/77110560/python-polars-calculate-time-difference-from-first-element-in-each-repeating | I have a polars.DataFrame like: df = pl.DataFrame({ "timestamp": ['2009-04-18 11:30:00', '2009-04-18 11:40:00', '2009-04-18 11:50:00', '2009-04-18 12:00:00', '2009-04-18 12:10:00', '2009-04-18 12:20:00', '2009-04-18 12:30:00'], "group": ["group_1", "group_1", "group_1", "group_2", "group_2", "group_1", "group_1"]}) df = df.with_columns( pl.col("timestamp").str.to_datetime().dt.replace_time_zone("UTC"), ) ┌─────────────────────────┬─────────┐ │ timestamp ┆ group │ │ --- ┆ --- │ │ datetime[μs, UTC] ┆ str │ ╞═════════════════════════╪═════════╡ │ 2009-04-18 11:30:00 UTC ┆ group_1 │ │ 2009-04-18 11:40:00 UTC ┆ group_1 │ │ 2009-04-18 11:50:00 UTC ┆ group_1 │ │ 2009-04-18 12:00:00 UTC ┆ group_2 │ │ 2009-04-18 12:10:00 UTC ┆ group_2 │ │ 2009-04-18 12:20:00 UTC ┆ group_1 │ <- reappearance of group_1 │ 2009-04-18 12:30:00 UTC ┆ group_1 │ <- reappearance of group_1 └─────────────────────────┴─────────┘ I want to calculate the time difference between the timestamp of the first element in each group to the timestamp of the elements in a group. Important is, that 'group' is defined as a (chronologically) consecutive appearance of the same group label. Like in the example shown group labels can occur later in time with the same group label but should by then be treated as a new group. With that, the result should look something like this: ┌─────────────────────────┬─────────┬─────────┐ │ timestamp ┆ group │ timediff│ │ --- ┆ --- │ --- │ │ datetime[μs, UTC] ┆ str │ int(?) │ ╞═════════════════════════╪═════════╪═════════╡ │ 2009-04-18 11:30:00 UTC ┆ group_1 │ 0 │ │ 2009-04-18 11:40:00 UTC ┆ group_1 │ 10 │ │ 2009-04-18 11:50:00 UTC ┆ group_1 │ 20 │ │ 2009-04-18 12:00:00 UTC ┆ group_2 │ 0 │ │ 2009-04-18 12:10:00 UTC ┆ group_2 │ 10 │ │ 2009-04-18 12:20:00 UTC ┆ group_1 │ 0 │ <- reappearance of group_1 │ 2009-04-18 12:30:00 UTC ┆ group_1 │ 10 │ <- reappearance of group_1 └─────────────────────────┴─────────┴─────────┘ | .rle_id() ("Run-length encoding") can be used to identify the groups. This is especially useful when you want to define groups by runs of identical values df.with_columns(group_id = pl.col("group").rle_id()) shape: (7, 3) ┌─────────────────────────┬─────────┬──────────┐ │ timestamp ┆ group ┆ group_id │ │ --- ┆ --- ┆ --- │ │ datetime[μs, UTC] ┆ str ┆ u32 │ ╞═════════════════════════╪═════════╪══════════╡ │ 2009-04-18 11:30:00 UTC ┆ group_1 ┆ 0 │ │ 2009-04-18 11:40:00 UTC ┆ group_1 ┆ 0 │ │ 2009-04-18 11:50:00 UTC ┆ group_1 ┆ 0 │ │ 2009-04-18 12:00:00 UTC ┆ group_2 ┆ 1 │ │ 2009-04-18 12:10:00 UTC ┆ group_2 ┆ 1 │ │ 2009-04-18 12:20:00 UTC ┆ group_1 ┆ 2 │ │ 2009-04-18 12:30:00 UTC ┆ group_1 ┆ 2 │ └─────────────────────────┴─────────┴──────────┘ You can then run the calculation .over() each group. 
df.with_columns( (pl.col("timestamp") - pl.col("timestamp").first()) .over(pl.col("group").rle_id()) .alias("time_diff") #.dt.total_minutes() ) shape: (7, 3) ┌─────────────────────────┬─────────┬──────────────┐ │ timestamp ┆ group ┆ time_diff │ │ --- ┆ --- ┆ --- │ │ datetime[μs, UTC] ┆ str ┆ duration[μs] │ ╞═════════════════════════╪═════════╪══════════════╡ │ 2009-04-18 11:30:00 UTC ┆ group_1 ┆ 0µs │ │ 2009-04-18 11:40:00 UTC ┆ group_1 ┆ 10m │ │ 2009-04-18 11:50:00 UTC ┆ group_1 ┆ 20m │ │ 2009-04-18 12:00:00 UTC ┆ group_2 ┆ 0µs │ │ 2009-04-18 12:10:00 UTC ┆ group_2 ┆ 10m │ │ 2009-04-18 12:20:00 UTC ┆ group_1 ┆ 0µs │ │ 2009-04-18 12:30:00 UTC ┆ group_1 ┆ 10m │ └─────────────────────────┴─────────┴──────────────┘ | 4 | 4 |
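For the integer-minutes timediff column shown in the desired output, the .dt.total_minutes() step that is commented out in the answer can be applied directly to the duration:

df.with_columns(
    (pl.col("timestamp") - pl.col("timestamp").first())
    .over(pl.col("group").rle_id())
    .dt.total_minutes()
    .alias("timediff")
)
# timediff: 0, 10, 20, 0, 10, 0, 10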
77,126,001 | 2023-9-18 | https://stackoverflow.com/questions/77126001/calculation-of-expected-value-in-shap-explanations-of-xgboost-classifier | How do we make sense of SHAP explainer.expected_value? Why is it not the same with y_train.mean() after sigmoid transformation? Below is a summary of the code for quick reference. Full code available in this notebook: https://github.com/MenaWANG/ML_toy_examples/blob/main/explain%20models/shap_XGB_classification.ipynb model = xgb.XGBClassifier() model.fit(X_train, y_train) explainer = shap.Explainer(model) shap_test = explainer(X_test) shap_df = pd.DataFrame(shap_test.values) #For each case, if we add up shap values across all features plus the expected value, we can get the margin for that case, which then can be transformed to return the predicted prob for that case: np.isclose(model.predict(X_test, output_margin=True),explainer.expected_value + shap_df.sum(axis=1)) #True But why isn't the below true? Why after sigmoid transformation, the explainer.expected_value is not the same with y_train.mean() for XGBoost classifiers? expit(explainer.expected_value) == y_train.mean() #False | SHAP is guaranteed to be additive in raw space (logits). To understand why additivity in raw scores doesn't extend to additivity in class predictions you may think for a while why exp(x+y) != exp(x) + exp(y) Re: Just keen to understand how was explainer.expected_value calculated for XGBoost classifier. Do you happen to know? As I stated in comments expected value comes either from the model trees or from your data. Let's try reproducible: from sklearn.model_selection import train_test_split import xgboost import shap X, y = shap.datasets.adult() X_display, y_display = shap.datasets.adult(display=True) # create a train/test split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7) d_train = xgboost.DMatrix(X_train, label=y_train) d_test = xgboost.DMatrix(X_test, label=y_test) params = { "eta": 0.01, "objective": "binary:logistic", "subsample": 0.5, "base_score": np.mean(y_train), "eval_metric": "logloss", } model = xgboost.train( params, d_train, num_boost_round=5000, evals=[(d_test, "test")], verbose_eval=100, early_stopping_rounds=20, ) Case 1. No data available, trees only. explainer = shap.TreeExplainer(model) ev_trees = explainer.expected_value[0] from shap.explainers._tree import XGBTreeModelLoader xgb_loader = XGBTreeModelLoader(model) ts = xgb_loader.get_trees() v = [] for t in ts: v.append(t.values[0][0]) sv = sum(v) import struct from scipy.special import logit size = struct.calcsize('f') buffer = model.save_raw().lstrip(b'binf') v = struct.unpack('f', buffer[0:0+size])[0] # if objective "binary:logistic" or "reg:logistic" bv = logit(v) ev_trees_raw = sv+bv np.isclose(ev_trees, ev_trees_raw) True Case 2. Background data set supplied. background = X_train[:100] explainer = shap.TreeExplainer(model, background) ev_background = explainer.expected_value Take a note that: np.isclose(ev_trees, ev_background) False but d_train_background = xgboost.DMatrix(background, y_train[:100]) preds = model.predict(d_train_background, pred_contribs = True) np.isclose(ev_background, preds.sum(1).mean()) True or simply output_margin = model.predict(d_train_background, output_margin=True) np.isclose(ev_background, output_margin.mean()) True | 2 | 3 |
77,134,254 | 2023-9-19 | https://stackoverflow.com/questions/77134254/specify-dependencies-in-pyproject-toml-with-install-url-or-with-index-url | I like to have my package installable with pip install ... and to use the pyproject.toml standard. I can specify dependencies to install from git, with: dependencies = [ 'numpy>=1.21', 'psychopy @ git+https://github.com/psychopy/psychopy', ] But how can I specify a dependency to install from a different indexer, equivalent to: python -m pip install --pre --only-binary :all: -i https://pypi.anaconda.org/scientific-python-nightly-wheels/simple numpy scipy With or without the pre-release flag? And how can I specify a dependency to install from a URL, e.g. https://extras.wxpython.org/wxPython4/extras/linux/gtk3/ubuntu-22.04/wxPython-4.2.1-cp310-cp310-linux_x86_64.whl I tried with no luck: dependencies = [ 'wxPython @ https://extras.wxpython.org/wxPython4/extras/linux/gtk3/ubuntu-22.04/wxPython-4.2.1-cp310-cp310-linux_x86_64.whl; python_version == "3.10"; sys_platform == "linux"' ] | I recommend reading the article "install_requires vs. requirements files" in the "Python packaging user guide". This article is partly outdated but the principle of abstract dependencies vs. concrete dependencies is unchanged. Concrete dependencies do not belong in packaging metadata, i.e. concrete dependencies can not be added to the dependencies list in the [project] table of the pyproject.toml file. As you have noted, it is possible to add "direct references" (for example psychopy @ git+https://github.com/psychopy/psychopy), but not all packaging tools support this and notably PyPI rejects the upload of packages containing such dependencies. What I often recommend, is that if there are dependencies that are not on PyPI then the documentation should reflect that very clearly and prominently. Where to get the dependencies and some examples of how to install them should be in the documentation, for example the URLs of the alternative package repositories. And another thing that I recommend on top of the documentation is to provide one or more requirements.txt files that contain a list of concrete dependencies that have been tested and are known to work well. For example: MyPackage psychopy @ git+https://github.com/psychopy/psychopy wxPython @ https://extras.wxpython.org/wxPython4/extras/linux/gtk3/ubuntu-22.04/wxPython-4.2.1-cp310-cp310-linux_x86_64.whl; python_version == "3.10"; sys_platform == "linux" And if I am not mistaken pip can install directly from the URL of such a requirements file: python -m pip install --requirement https://host.example/my-package/requirements.txt so such a command could be added to the documentation as well. Related: https://stackoverflow.com/a/76548420 | 11 | 6 |
77,114,363 | 2023-9-15 | https://stackoverflow.com/questions/77114363/snowpark-udf-with-row-input-type | I would like to define a Snowpark UDF with input type snowflake.snowpark.Row. The reason for this is that I would like to mimic the pandas.apply approach where I can define my business logic in some class, and then apply the logic to each row of the Snowpark dataframe. Each column can be easily mapped to a class attribute with asDict For example (running from the Snowflake Python worksheet): import snowflake.snowpark as snowpark from snowflake.snowpark.functions import udf from snowflake.snowpark import Row from snowflake.snowpark.types import IntegerType from dataclasses import dataclass @dataclass class MyEvent: attribute1: str = 'dummy' attribute2: str = 'unknown' def someCalculation(self) -> int: return len(self.attribute1) + len(self.attribute2.strip()) def testSomeCalculation(): inputDict = {'attribute1': 'foo', 'attribute2': 'baz'} event = MyEvent(**inputDict) print(event.someCalculation()) def main(session: snowpark.Session): some_logic = udf(lambda row: MyEvent(**(row.asDict())).someCalculation() , return_type=IntegerType() , input_types=[Row]) However, when I try to use snowpark.Row as input type, I get an unsupported data type: File "snowflake/snowpark/_internal/udf_utils.py", line 972, in create_python_udf_or_sp input_sql_types = [convert_sp_to_sf_type(arg.datatype) for arg in input_args] File "snowflake/snowpark/_internal/udf_utils.py", line 972, in <listcomp> input_sql_types = [convert_sp_to_sf_type(arg.datatype) for arg in input_args] File "snowflake/snowpark/_internal/type_utils.py", line 195, in convert_sp_to_sf_type raise TypeError(f"Unsupported data type: {datatype.__class__.__name__}") TypeError: Unsupported data type: type I see that all the UDF examples use basic types from snowpark.types. Is there any fundamental reason why the input type cannot be a snowpark.Row ? I know I could list explicitly all MyEvent attributes in input_type=[], but that is going to be error prone and defeating the purpose of designing my code around a class representing my business object. | Solution tested in Snowflake Python worksheet, based on the suggestion by @felipe-hoffa above import snowflake.snowpark as snowpark from snowflake.snowpark.functions import col, udf from snowflake.snowpark.types import IntegerType, VariantType from dataclasses import dataclass import json def main(session: snowpark.Session): @dataclass class MyEvent: attribute1: str = 'dummy' attribute2: str = 'unknown' def someText(self) -> str: return f"someText {len(self.attribute1)} : {self.attribute1=}, {self.attribute2=}" def wrap_some_text(x) -> str: return MyEvent(**json.loads(x)).someText() my_event_get_text = udf(lambda x: wrap_some_text(x), return_type=VariantType(), input_types=[VariantType()]) df = session.create_dataframe(['{"attribute1":"value1", "attribute2":"value20"}']).to_df("col1") df = df.select(col("col1"), my_event_get_text(col("col1").astype("variant")).as_("my_event_get_text") ).show() return df | 3 | 0 |
77,100,890 | 2023-9-13 | https://stackoverflow.com/questions/77100890/pydantic-v2-custom-type-validators-with-info | I'm trying to update my code to pydantic v2 and having trouble finding a good way to replicate the custom types I had in version 1. I'll use my custom date type as an example. The original implementation and usage looked something like this: from datetime import date from pydantic import BaseModel class CustomDate(date): # Override POTENTIAL_FORMATS and fill it with date format strings to match your data POTENTIAL_FORMATS = [] @classmethod def __get_validators__(cls): yield cls.validate_date @classmethod def validate_date(cls, field_value, values, field, config) -> date: if type(field_value) is date: return field_value return to_date(field.name, field_value, cls.POTENTIAL_FORMATS, return_str=False) class ExampleModel(BaseModel): class MyDate(CustomDate): POTENTIAL_FORMATS = ['%Y-%m-%d', '%Y/%m/%d'] dt: MyDate I tried to follow the official docs and the examples laid out here below and it mostly worked, but the info parameter does not have the fields I need (data and field_name). Attempting to access them gives me an AttributeError. info.field_name *** AttributeError: No attribute named 'field_name' Both the Annotated and __get_pydantic_core_schema__ approaches have this issue from datetime import date from typing import Annotated from pydantic import BaseModel, BeforeValidator from pydantic_core import core_schema class CustomDate: POTENTIAL_FORMATS = [] @classmethod def validate(cls, field_value, info): if type(field_value) is date: return field_value return to_date(info.field_name, field_value, potential_formats, return_str=False) @classmethod def __get_pydantic_core_schema__(cls, source, handler) -> core_schema.CoreSchema: return core_schema.general_plain_validator_function(cls.validate) def custom_date(potential_formats): """ :param potential_formats: A list of datetime format strings """ def validate_date(field_value, info) -> date: if type(field_value) is date: return field_value return to_date(info.field_name, field_value, potential_formats, return_str=False) CustomDate = Annotated[date, BeforeValidator(validate_date)] return CustomDate class ExampleModel(BaseModel): class MyDate(CustomDate): POTENTIAL_FORMATS = ['%Y-%m-%d', '%Y/%m/%d'] dt: MyDate dt2: custom_date(['%Y-%m-%d', '%Y/%m/%d']) If I just include the validate_date function as a regular field_validator I get info with all the fields I need, it's only when using it with custom types that I see this issue. How do I write a custom type that has access to previously validated fields and the name of the field being validated? | As of version 2.4 you can get the field_name and data together. See the updated docs here. Now the first version of my custom data type looks like: class CustomDate: POTENTIAL_FORMATS = [] @classmethod def validate(cls, field_value, info): if type(field_value) is date: return field_value return to_date(info.field_name, field_value, cls.POTENTIAL_FORMATS, return_str=False) @classmethod def __get_pydantic_core_schema__(cls, source, handler) -> core_schema.CoreSchema: return core_schema.with_info_before_validator_function( cls.validate, handler(date), field_name=handler.field_name ) Where all I needed to change was which core_schema validator function I was using. The second version of my custom data type (the one using Annotated) now works as is with no changes. 
Before Pydantic 2.4 It looks like accessing info.data and info.field_name inside a custom type validator is not currently possible in v2 according to this feature request. If all you need is info.data, then it looks like you can define your validator with core_schema.field_before_validator_function (I'd guess all the field_* validators work), although you will need to make up a field name: from dataclasses import dataclass from typing import Annotated, List, Any, Callable from pydantic import ValidationError, BaseModel, Field, BeforeValidator, field_validator, GetCoreSchemaHandler from pydantic_core import core_schema, CoreSchema def fn(v: str, info: core_schema.ValidationInfo, *args, **kwargs) -> str: try: print(f'Validating {info.field_name}') return info.data['use_this'] except AttributeError as err: return 'No data' class AsFieldB4Method(str): @classmethod def __get_pydantic_core_schema__( cls, source_type: Any, handler: GetCoreSchemaHandler, *args, **kwargs ) -> CoreSchema: return core_schema.field_before_validator_function(fn, 'not_the_real_field_name', core_schema.str_schema()) class MyModel(BaseModel): use_this: str core_schema_field_b4_method: AsFieldB4Method # Partially works From the comments, it sounds like the pydantic team want to make it work with non-field validators and to make accessing info.field_name possible, so hopefully that happens. I'll update this answer when the change happens, but check that link in case I missed it. | 8 | 2 |
77,104,125 | 2023-9-14 | https://stackoverflow.com/questions/77104125/no-module-named-keras-wrappers | I have this code on google colab which allows me to optimise an LSTM model using gridsearchCV, but recently an error message has appeared: ModuleNotFoundError: No module named 'keras.wrappers'. is there another module other than 'keras.wrappers' that allows the code to be restarted? Code: from keras.layers import Dense, LSTM, Dropout from keras import optimizers from sklearn.model_selection import GridSearchCV from keras.wrappers.scikit_learn import KerasRegressor def create_model(unit, dropout_rate, lr ): model=Sequential() model.add(LSTM(unit,return_sequences=True, input_shape=(1,5))) model.add(Dropout(dropout_rate)) model.add(LSTM(unit)) model.add(Dropout(dropout_rate)) model.add(Dense(1)) adam= optimizers.Adam(lr) model.compile(optimizer=adam, loss='mean_squared_error') return model my_regressor = KerasRegressor(build_fn=create_model, verbose=2) grid_param_LSTM = { 'unit': [50, 70, 120], 'batch_size': [12, 24, 48], 'epochs': [200], 'lr': [0.001, 0.01, 0.1], 'dropout_rate':[0.1, 0.2, 0.3] } grid_GBR = GridSearchCV(estimator=my_regressor, param_grid = grid_param_LSTM, scoring = 'neg_root_mean_squared_error', cv = 2) grid_GBR.fit(X_train, y_train) print("Best: %f using %s" % (grid_GBR.best_score_, grid_GBR.best_params_)) | This works for me pip install keras==2.12.0 Another Approach you can try pip uninstall tensorflow pip install tensorflow==2.12.0 | 9 | 12 |
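Since the question asks for another module: the maintained replacement for keras.wrappers.scikit_learn is the scikeras package, so pinning old versions is not the only way out. A rough sketch of the swap for the grid search; the model__ parameter-routing prefixes follow the SciKeras docs, but double-check them against the version you install:

# pip install scikeras
from scikeras.wrappers import KerasRegressor

my_regressor = KerasRegressor(model=create_model, verbose=2)
grid_param_LSTM = {
    "model__unit": [50, 70, 120],
    "model__lr": [0.001, 0.01, 0.1],
    "model__dropout_rate": [0.1, 0.2, 0.3],
    "batch_size": [12, 24, 48],
    "epochs": [200],
}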
77,124,879 | 2023-9-18 | https://stackoverflow.com/questions/77124879/pip-extras-require-must-be-a-dictionary-whose-values-are-strings-or-lists-of | I tried running pip install gym==0.21.0 but got the cryptic error: Collecting gym==0.21.0 Using cached gym-0.21.0.tar.gz (1.5 MB) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─> [1 lines of output] error in gym setup command: 'extras_require' must be a dictionary whose values are strings or lists of strings containing valid project/version requirement specifiers. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. What may be causing this error? | Based on the comments in the issue section at https://github.com/openai, these are the changes that need to be made - pip install setuptools==65.5.0 pip==21 # gym 0.21 installation is broken with more recent versions pip install wheel==0.38.0 I was facing the same issue and after doing the above, it was resolved. The problem has been addressed here and here. | 7 | 27 |
77,134,535 | 2023-9-19 | https://stackoverflow.com/questions/77134535/migrate-postgresdsn-build-from-pydentic-v1-to-pydantic-v2 | I have simple Config class from FastAPI tutorial. But it seems like it uses old pydantic version. I run my code with pydantic v2 version and get a several errors. I fix almost all of them, but the last one I cannot fix yet. This is part of code which does not work: from pydantic import AnyHttpUrl, HttpUrl, PostgresDsn, field_validator from pydantic_settings import BaseSettings from pydantic_core.core_schema import FieldValidationInfo load_dotenv() class Settings(BaseSettings): ... POSTGRES_SERVER: str = 'localhost:5432' POSTGRES_USER: str = os.getenv('POSTGRES_USER') POSTGRES_PASSWORD: str = os.getenv('POSTGRES_PASSWORD') POSTGRES_DB: str = os.getenv('POSTGRES_DB') SQLALCHEMY_DATABASE_URI: Optional[PostgresDsn] = None @field_validator("SQLALCHEMY_DATABASE_URI", mode='before') @classmethod def assemble_db_connection(cls, v: Optional[str], info: FieldValidationInfo) -> Any: if isinstance(v, str): return v postgres_dsn = PostgresDsn.build( scheme="postgresql", username=info.data.get("POSTGRES_USER"), password=info.data.get("POSTGRES_PASSWORD"), host=info.data.get("POSTGRES_SERVER"), path=f"{info.data.get('POSTGRES_DB') or ''}", ) return str(postgres_dsn) That is the error which I get: sqlalchemy.exc.ArgumentError: Expected string or URL object, got MultiHostUrl('postgresql://user:password@localhost:5432/database') I check a lot of places, but cannot find how I can fix that, it looks like build method pass data to the sqlalchemy create_engine method as a MultiHostUrl instance instead of string. How should I properly migrate this code to use pydantic v2? UPDATE I have fixed that issue by changing typing for SQLALCHEMY_DATABASE_URI: Optional[PostgresDsn] = None to SQLALCHEMY_DATABASE_URI: Optional[str] = None. Because pydantic makes auto conversion of result for some reason. But I am not sure if that approach is the right one, maybe there are better way to do that? | You can use unicode_string() to stringify your URI as follow: from sqlalchemy.ext.asyncio import create_async_engine create_async_engine(settings.POSTGRES_URI.unicode_string()) Check documentation page here for additional explanation. | 13 | 8 |
77,126,601 | 2023-9-18 | https://stackoverflow.com/questions/77126601/currency-conversion-with-forex-python-converter | I wrote a script that converts currencies based on the current exchange rate. It seems to work fine, except when trying to convert EUR to USD, as it never gets the exchange rate correctly. For example, it tells me 1000 EUR are worth 64 USD, whereas the reality would be around 950...The actual exchange rate is 1.06 but forex mistakes it for 0.06 How can I fix it? Here is my code: Module 1: from forex_python.converter import CurrencyRates def exchange_rate(c1,c2, time): c = CurrencyRates() return c.get_rate(c1,c2, time) Module 2: from currency_rates import exchange_rate from datetime import datetime # List of available currencies list_of_currencies = ["USD", "EUR", "GBP", "ILS", "DKK", "CAD", "IDR", "BGN", "JPY", "HUF", "RON", "MYR", "SEK", "SGD", "HKD", "AUD", "CHF", "KRW", "CNY", "TRY", "HRK", "NZD", "THB", "LTL", "NOK", "RUB", "INR", "MXN", "CZK", "BRL", "PLN", "PHP", "ZAR"] # Get user input for currency 1, amount and currency 2 c1 = input(f"""Which currency would you like to convert from?\n {list_of_currencies}\n""") if c1 not in list_of_currencies: print("Invalid answer.") value_c1 = input("How much of that currency?\n") c2 = input(f"""Which currency would you like to convert to? \n {list_of_currencies}\n""") if c2 not in list_of_currencies: print("Invalid answer.") # Calculate result based on exchange rate # Get current time now = datetime.now() rate = exchange_rate(c1,c2, now) result = float(rate) * float(value_c1) # Print result print(f"{value_c1} {c1} are worth {result} {c2}.") Something needs to be done with the rate itself. But adding 1 when it's below 1 is not a solution as some other exchange rates might be below 1 for real. | There is no issue with your code or the forex_python.converter library. This is a bug with the :latest version of theforexapi which is used by the forex_python.converter, see link to raised issue: latest/?base=EUR&symbols=USD returns base=ILS #18 The returned "base" is "ILS" instead of "EUR" as expected. This bug only occurs for latest/?base=EUR&symbols=USD. If you specify a date instead of latest, or if you use any other combination of base and symbols, the bug does not happen. | 4 | 4 |
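Until the upstream bug is fixed, one workaround consistent with the linked issue is to request a dated rate instead of the default latest one; yesterday's date usually works, though you may need to step back further over weekends or holidays:

from datetime import datetime, timedelta
from forex_python.converter import CurrencyRates

c = CurrencyRates()
rate = c.get_rate("EUR", "USD", datetime.now() - timedelta(days=1))  # dated request avoids the buggy `latest` endpoint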
77,132,356 | 2023-9-19 | https://stackoverflow.com/questions/77132356/rq-job-terminated-unexpectedly | I have defined the rq task in module task.py as: import django_rq import logging import requests log = logging.getLogger(__name__) def req_call(): url = 'https://jsonplaceholder.typicode.com/posts' response = requests.get(url) return response @django_rq.job('default') def test_request_call_on_rq_job(): response = req_call() log.warning(f"REQUEST's RESPONSE {response.json()}") When I offload the task from another module as: if __name__ == '__main__': # rq job test log.setLevel(logging.DEBUG) test_request_call_on_rq_job.delay() I am getting this error: [INFO] *** Listening on default... [worker:702] [INFO] default: rq_test_module.tasks.test_request_call_on_rq_job() (676c1945-9e05-4245-aeb2-65100cdb4169) [worker:875] [WARNING] Moving job to FailedJobRegistry (Work-horse terminated unexpectedly; waitpid returned 11 (signal 11); ) [worker:1114] I then started debugging and saw that the error occurs as soon as the job tries executing the request call, i.e. requests.get(url). If I remove the request call from the job, it executes gracefully without any errors. Signal 11 suggests a segmentation fault; I suspect something related to memory, but I'm not quite sure about it. Has anyone here encountered a similar issue, and is there a workaround for it? :) | I think I found the cause of the request not being executed gracefully on RQ. My guess is that the flow at https://github.com/python/cpython/blob/main/Lib/urllib/request.py#L2619 is not able to pick up the proxy settings in the RQ context. So I skipped that code path by setting the NO_PROXY env variable to the URL I am trying to request. Now I can run the request on RQ as it should. But without disabling the proxy it still gives an error, so maybe I need to dig more into this. More context on this: This seems to be a problem with macOS only, because of the import issue I highlighted above. The OS I am facing this issue on: Apple M2 Pro, with macOS 13.5 (22G74). I will post more on this once I have concrete ideas for why setting `NO_PROXY` makes it work. I think I got it :). Let's set up this task in the task.py module. This task imports the underlying macOS function as I outlined above, and you can see the exact same call in the link above. def test_mac_import_issue_on_request_lib_task(): import sys if sys.platform == 'darwin': from _scproxy import _get_proxy_settings, _get_proxies _get_proxy_settings() _get_proxies() Then enqueue it from any module, for example if __name__ == '__main__': # rq job test from tasks import test_mac_import_issue_on_request_lib_task_on_rq_job test_mac_import_issue_on_request_lib_task_on_rq_job.delay() Now run the worker and execute the module which triggers this enqueue. Then check whether it raises the error, e.g. [WARNING] Moving job to FailedJobRegistry (work-horse terminated unexpectedly; waitpid returned 11) [worker:874] That call executes normally when you run it independently (not in the RQ context). For example, in your Python shell, if you try running: from _scproxy import _get_proxy_settings, _get_proxies _get_proxy_settings() # This outputs a dict _get_proxies() # This also outputs a dict So my next goal is to find out what's going on in the RQ context. I will further update this once I have more meaningful ideas on this. | 2 | 2 |
77,137,301 | 2023-9-19 | https://stackoverflow.com/questions/77137301/curve-fit-seems-to-overestimate-error-of-estimated-parameters | I have some data from the lab, which I'm fitting to the function f(s, params) = amp * (s/sp) / (1 + s/sp + 1.041) + bg (couldn't figure out how to typeset this) I set absolute_sigma=True on curve_fit to get the absolute uncertainties of the parameters (sp, amp, bg), but the calculated error from np.sqrt(np.diag(pcov)) seems unreasonable. See the plot below. The blue dots are data. The red line is f(s, *popt). The green line replaces the optimal sp value with sp minus its error as calculated from np.sqrt(np.diag(pcov)). The orange line is sp plus the same value. I would expect the +/- lines to be much tighter to the red line. Here's a minimal example: # Function for fitting def scattering_rate(s, sp, amp, bg): return amp * (s/sp) / (1 + s/sp + (-20/19.6)**2) + bg # data s = np.array([0.6, 1.2, 2.3, 4.3, 8.1, 15.2, 28.5, 53.4]) y = np.array([8.6, 8.5, 8.9, 9.5, 10.6, 12.6, 15.5, 18.3]) # Fit data to saturated scattering rate popt, pcov = curve_fit(scattering_rate, s, y, absolute_sigma=True) print('Fit parameters', popt) print('Fit uncertainties', np.sqrt(np.diag(pcov))) # Construct fit from optimized parameters fit_s = np.linspace(np.min(s), np.max(s), 100) fit = scattering_rate(fit_s, *popt) # Consider one error difference in sp value fit_plus_err = scattering_rate(fit_s, popt[0] + np.sqrt(np.diag(pcov))[0], popt[1], popt[2]) fit_minus_err = scattering_rate(fit_s, popt[0] - np.sqrt(np.diag(pcov))[0], popt[1], popt[2]) # Plot plt.plot(s, y, '-o', markeredgecolor='k', label='data') plt.plot(fit_s, fit_plus_err, label='sp + err') plt.plot(fit_s, fit_minus_err, label='sp - err') plt.plot(fit_s, fit, '-r', label='opt sp') plt.xlabel('s') plt.ylabel('y') plt.legend() plt.grid() plt.show() Edit Following @jlandercy's answer, we need the error bars of the original data, which are y_err = array([0.242, 0.231, 0.282, 0.31 , 0.373]). Including that in curve_fit's sigma argument, the results look much better, though still a bit distant | Why When using absolute_sigma=True you are advised to feed the sigma switch as well; if not, the uncertainties are replaced by 1., making the Chi Square loss function behave like RSS, and pcov becomes somewhat meaningless if your uncertainties are not unitary. Fix Provide the uncertainty, or an estimate of it, to get a meaningful pcov matrix. Setting sigma to 0.15 with your data drastically improves the pcov matrix while keeping the Chi Square statistic significant: sy = np.full(y.size, 0.15) popt, pcov = curve_fit(scattering_rate, s, y, sigma=sy, absolute_sigma=True) np.sqrt(pcov) # array([[3.25479068, 2.07084837, 0.45391942], # [2.07084837, 1.35773667, 0.2563505 ], # [0.45391942, 0.2563505 , 0.09865549]]) Update You have two direct options to improve the error: Collect more points (5 points for 3 parameters leaves a Chi Square with dof=2, which is very asymmetric); Reduce the uncertainty on y. The figure below shows the impact on a synthetic dataset with parameters equivalent to yours: | 2 | 2 |
77,130,229 | 2023-9-18 | https://stackoverflow.com/questions/77130229/psycopg3-inserting-dict-into-jsonb-field | I have a table with a JSONB field and would like to insert into it using a named dict like so: sql = "INSERT INTO tbl (id, json_fld) VALUES (%(id)s, %(json_fld)s)" conn.execute(sql, {'id':1, 'json_fld': {'a':1,'b':False, 'c': 'yes'}}); I tried the answers in this question but those all apply to psycopg2 and NOT psycopg3 and they do not work here (notably I tried): conn.execute(sql, {'id':1, 'json_fld': json.dumps({'a':1,'b':False, 'c': 'yes'})}); The error remains the same: psycopg.ProgrammingError: cannot adapt type 'dict' using placeholder '%s' (format: AUTO) | Wrap the dict in psycopg's Jsonb adapter to convert it to jsonb, as described in the JSON adaptation section of the psycopg docs: import psycopg from psycopg.types.json import Jsonb con = psycopg.connect("dbname=test user=postgres") cur = con.cursor() cur.execute("insert into json_test values(%s, %s)", [1, Jsonb({'a':1,'b': False, 'c': 'yes'})]) con.commit() This results in: select * from json_test ; id | js_fld ----+---------------------------------- 1 | {"a": 1, "b": false, "c": "yes"} | 4 | 6 |
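The same Jsonb wrapper also works with the named-placeholder style from the question (here conn is the connection from the question's code), so the original statement only needs the dict wrapped:

from psycopg.types.json import Jsonb

sql = "INSERT INTO tbl (id, json_fld) VALUES (%(id)s, %(json_fld)s)"
conn.execute(sql, {"id": 1, "json_fld": Jsonb({"a": 1, "b": False, "c": "yes"})})
conn.commit()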
77,137,537 | 2023-9-19 | https://stackoverflow.com/questions/77137537/when-does-using-floating-point-operations-to-calculate-ceiling-integer-log2-fail | I'm curious what the first input is which differentiates these two functions: from math import * def ilog2_ceil_alt(i: int) -> int: # DO NOT USE THIS FUNCTION IT'S WRONG return ceil(log2(i)) def ilog2_ceil(i: int) -> int: # Correct if i <= 0: raise ValueError("math domain error") return (i-1).bit_length() ... Obviously, the first one is going to fail for some inputs due to rounding/truncation errors when cramming (the log of) an unlimited-sized integer through a finite double in the pipeline— however, I tried running this test code for a few minutes, and it didn't find a problem: ... def test(i): if ilog2_ceil(i) != ilog2_ceil_alt(i): return i def main(start=1): import multiprocessing, itertools p = multiprocessing.Pool() it = p.imap_unordered(test, itertools.count(start), 100) return next(i for i in it if i is not None) if __name__ == '__main__': i = main() print("Failing case:", i) I tried testing assorted large values like 2**32 and 2**64 + 9999, but it didn't fail on these. What is the smallest (positive) integer for which the "alt" function fails? | A first issue with ceil(log2(i)) is that the integer i will first be converted to the floating-point type toward 0 (this is the wrong direction!), here with a 53-bit precision. For instance, if i is 2^53 + 1, it will be converted to 2^53, and you'll get 53 instead of 54. But another problem may occur even with smaller values: The values to consider are those that are slightly larger than a power of 2, say 2^n + k with a small integer k > 0, because the log2 may round to the integer n (even if log2 is correctly rounded) while you would want a FP number slightly larger than n at this point. Thus this will give ceil(n), i.e. n instead of n+1. Now, let's consider the worst case for k, i.e. k = 1. Let p denote the precision (on your example, p = 53 for the double precision, but let's generalize). One has log2(2^n + 1) = n + log2(1 + 2^(−n)) ≈ n + 0.43·2^(−n). If the representation of n needs exactly q bits, then the ulp will be 2^(q−p). To get the expected result, 0.43·2^(−n) needs to be larger than 1/2 ulp, i.e. 2^(−n−2) ⩾ 2^(q−p−1), i.e. n ⩽ p−q−1. Since q = ⌈log2(n)⌉, the condition is n + ⌈log2(n)⌉ ⩽ p − 1. But since ⌈log2(n)⌉ is small compared to n, the maximum value of n will be of the order of p, so that one gets an approximate condition by replacing ⌈log2(n)⌉ by ⌈log2(p)⌉, i.e. n ⩽ p − ⌈log2(p)⌉ − 1.
Here's a C program using GNU MPFR to find the first value of n that fails, for each precision p: #include <stdio.h> #include <stdlib.h> #include <mpfr.h> static void test (long p) { mpfr_t x; mpfr_init2 (x, p); for (long n = 1; ; n++) { /* i = 2^n + 1 */ mpfr_set_ui_2exp (x, 1, n, MPFR_RNDN); mpfr_add_ui (x, x, 1, MPFR_RNDN); mpfr_log2 (x, x, MPFR_RNDN); mpfr_ceil (x, x); long r = mpfr_get_si (x, MPFR_RNDN); if (r != n + 1) { printf ("p = %ld, fail for n = %ld\n", p, n); break; } } mpfr_clear (x); } int main (int argc, char **argv) { long p, pmax; if (argc != 2) exit (1); pmax = strtol (argv[1], NULL, 0); for (p = 2; p <= pmax; p++) test (p); return 0; } With the argument 64, one gets: p = 2, fail for n = 2 p = 3, fail for n = 3 p = 4, fail for n = 4 p = 5, fail for n = 4 p = 6, fail for n = 5 p = 7, fail for n = 6 p = 8, fail for n = 7 p = 9, fail for n = 8 p = 10, fail for n = 8 p = 11, fail for n = 9 p = 12, fail for n = 10 p = 13, fail for n = 11 p = 14, fail for n = 12 p = 15, fail for n = 13 p = 16, fail for n = 14 p = 17, fail for n = 15 p = 18, fail for n = 16 p = 19, fail for n = 16 p = 20, fail for n = 17 p = 21, fail for n = 18 p = 22, fail for n = 19 p = 23, fail for n = 20 p = 24, fail for n = 21 p = 25, fail for n = 22 p = 26, fail for n = 23 p = 27, fail for n = 24 p = 28, fail for n = 25 p = 29, fail for n = 26 p = 30, fail for n = 27 p = 31, fail for n = 28 p = 32, fail for n = 29 p = 33, fail for n = 30 p = 34, fail for n = 31 p = 35, fail for n = 32 p = 36, fail for n = 32 p = 37, fail for n = 33 p = 38, fail for n = 34 p = 39, fail for n = 35 p = 40, fail for n = 36 p = 41, fail for n = 37 p = 42, fail for n = 38 p = 43, fail for n = 39 p = 44, fail for n = 40 p = 45, fail for n = 41 p = 46, fail for n = 42 p = 47, fail for n = 43 p = 48, fail for n = 44 p = 49, fail for n = 45 p = 50, fail for n = 46 p = 51, fail for n = 47 p = 52, fail for n = 48 p = 53, fail for n = 49 p = 54, fail for n = 50 p = 55, fail for n = 51 p = 56, fail for n = 52 p = 57, fail for n = 53 p = 58, fail for n = 54 p = 59, fail for n = 55 p = 60, fail for n = 56 p = 61, fail for n = 57 p = 62, fail for n = 58 p = 63, fail for n = 59 p = 64, fail for n = 60 EDIT: So, for double precision (p = 53), if log2 is correctly rounded (or at least, has a good accuracy), the smallest failing integer is 249 + 1. | 2 | 5 |
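A quick check of the claimed first failure for double precision, straight from Python; per the answer's caveat this assumes the platform's log2 is accurate (correctly rounded), so the exact threshold could differ slightly on an unusual libm:

from math import ceil, log2

i = 2**49 + 1
print(ceil(log2(i)), (i - 1).bit_length())  # expect 49 50 -> the float version is off by one here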
77,111,621 | 2023-9-15 | https://stackoverflow.com/questions/77111621/create-google-calendar-invites-using-service-account-securely | I created a service account using my Enterprise Google Workspace account and I need to automate calendar invites creation using Python. I have added the service account email into the calendar's shared people and when I try to get the calendar using service.calendarList().list().execute() it works fine. However, if I try to send an invite it doesn't work, and I get the error below: googleapiclient.errors.HttpError: <HttpError 403 when requesting https://www.googleapis.com/calendar/v3/calendars/xxxxxx%group.calendar.google.com/events?alt=json returned "You need to have writer access to this calendar.". Details: "[{'domain': 'calendar', 'reason': 'requiredAccessLevel', 'message': 'You need to have writer access to this calendar.'}]"> Looking at the docs I found that I need to delegate domain-wide authority to the service account for this to work, but my company isn't allowing this service account to have this type of access because of security issues. So I wanted to know if there is any way that I could do this without delegating domain wide access to this service account ? Because delegating domain wide access gives me access to impersonated anyone in the domain, so it's a security issue. I want the service account to be able to impersonate just the parent google account from where it was created. Below is the full code that I used to get the calendar and create the invite from google.oauth2 import service_account from googleapiclient.discovery import build class GoogleCalendar: SCOPES = [ "https://www.googleapis.com/auth/calendar", "https://www.googleapis.com/auth/calendar.events", ] def __init__(self, credentials, calendar_id) -> None: credentials = service_account.Credentials.from_service_account_file( credentials, scopes=self.SCOPES ) self.service = build("calendar", "v3", credentials=credentials) self.id = calendar_id def get_calendar_list(self): return self.service.calendarList().list().execute() def add_calendar(self): entry = {"id": self.id} return self.service.calendarList().insert(body=entry).execute() def create_invite(self): event = { "summary": "Google I/O 2015", "location": "800 Howard St., San Francisco, CA 94103", "description": "A chance to hear more about Google's developer products.", "start": { "dateTime": "2023-09-16T09:00:00-07:00", "timeZone": "America/Los_Angeles", }, "end": { "dateTime": "2023-09-16T17:00:00-07:00", "timeZone": "Indian/Mauritius", }, "attendees": [{"email": "[email protected]"}], } event = self.service.events().insert(calendarId=self.id, body=event).execute() work_cal_id = "[email protected]" cal = GoogleCalendar( credentials="work.json", calendar_id=work_cal_id ) cal.add_calendar() print(cal.get_calendar_list()) cal.create_invite() | AFAIK you have 2 options available to you. Google has removed several other options. Serviceaccount with delegation. You state that this is not an option for you, but technically it is. Give serviceaccount access to Calendar API for specific user, using OAuth 2.0. Option 1 This is actually the best (and the "proper" way), but I can understand the security problems that this option entails, as delegation can be a very big security risk if the SA gets compromised. Option 2 This is currently the only other option. See this article for OAuth 2.0 details. It's basically the implementation of giving an app access to Google API endpoints for a user. 
This picture does a nice recap: The user in the picture above should be an existing user in your company that has edit access to the calendar in question. You might want to create a specific user for this (such as [email protected] or [email protected]), to not tie the invitations to a specific "real" user. Basically the only thing that changes compared to your existing code is the generation of the token/credentials. This article explains how to create the credentials object you need to authenticate the requests. Whether or not this works for you depends on the security requirements for your company, and the specific Workspace settings applied to the workspace account. The article linked above for OAuth 2.0 is a very good starting point, followed by the specific article for server-side apps. Please do take token expiration into account. Not using the refresh token for 6 months fe. will invalidate it... | 3 | 3 |
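For option 2, the credential generation the answer refers to is essentially the standard installed-app OAuth flow from the Google quickstarts; everything else in the question's GoogleCalendar class stays the same, with credentials=creds passed to build(). The client_secret.json / token.json file names below are just the usual conventions, not required values:

# pip install google-auth-oauthlib google-api-python-client
import os
from google.oauth2.credentials import Credentials
from google.auth.transport.requests import Request
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/calendar.events"]

creds = None
if os.path.exists("token.json"):
    creds = Credentials.from_authorized_user_file("token.json", SCOPES)
if not creds or not creds.valid:
    if creds and creds.expired and creds.refresh_token:
        creds.refresh(Request())  # silent refresh while the refresh token is still valid
    else:
        flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", SCOPES)
        creds = flow.run_local_server(port=0)  # one-time consent as the dedicated calendar user
    with open("token.json", "w") as f:
        f.write(creds.to_json())

service = build("calendar", "v3", credentials=creds)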
77,117,250 | 2023-9-16 | https://stackoverflow.com/questions/77117250/how-to-blur-the-edges-of-a-surface-in-pygame | I'd like to blur the edges of a surface in a "gradient fashion", in Pygame. Here is an example of the desired effect where we have : No blur on first image Medium blur on second square High blur on third square image of blur effects desired Pygame has 2 functions to blur surfaces : pygame.transform.box_blur and pygame.transform.gaussian_blur. However they don't have perform a gradient blur between the surface and the screen like in the above example. | You need to create a Surface with a transparent background (pygame.SRCALPHA) that is slightly larger than your image. Then you can blit your image onto the surface and blur it: rect = pygame.Rect(0, 0, image.get_width()+100, image.get_height()+100) blur_image = pygame.Surface(rect.size, pygame.SRCALPHA) blur_image.blit(image, image.get_rect(center = rect.center)) If you then blit this Surface onto the display, you get a gradient transition. When you look in the pygame documentation you'll find, that pygame.transform.box_blur and pygame.transform.gaussian_blur does not exist ins Pygame (2.5.1 at the time of writing) but only pygame-ce. However, you can create your own blur function with PLI Image: def blur_image(surf, radius): pil_string_image = pygame.image.tostring(surf, "RGBA",False) pil_image = Image.frombuffer("RGBA", surf.get_size(), pil_string_image) pil_blurred = pil_image.filter(ImageFilter.GaussianBlur(radius=radius)) blurred_image = pygame.image.fromstring(pil_blurred.tobytes(), pil_blurred.size, pil_blurred.mode) return blurred_image.convert_alpha() Minimal example import pygame from PIL import Image, ImageFilter pygame.init() window = pygame.display.set_mode((300, 300)) clock = pygame.time.Clock() background = pygame.Surface(window.get_size()) ts, w, h, c1, c2 = 50, *background.get_size(), (32, 32, 32), (64, 64, 64) tiles = [((x*ts, y*ts, ts, ts), c1 if (x+y) % 2 == 0 else c2) for x in range((w+ts-1)//ts) for y in range((h+ts-1)//ts)] [pygame.draw.rect(background, color, rect) for rect, color in tiles] def blur_image(surf, radius): pil_string_image = pygame.image.tostring(surf, "RGBA",False) pil_image = Image.frombuffer("RGBA", surf.get_size(), pil_string_image) pil_blurred = pil_image.filter(ImageFilter.GaussianBlur(radius=radius)) blurred_image = pygame.image.fromstring(pil_blurred.tobytes(), pil_blurred.size, pil_blurred.mode) return blurred_image.convert_alpha() def blur_image_edges(surf, radius): rect = pygame.Rect(0, 0, surf.get_width()+radius*4, surf.get_height()+radius*4) blur_surf = pygame.Surface(rect.size, pygame.SRCALPHA) blur_surf.blit(surf, surf.get_rect(center = rect.center)) blurred_image = blur_image(blur_surf, radius) return blurred_image image = pygame.Surface((200, 200)) image.fill('red') blurred_image = blur_image_edges(image, 20) run = True while run: clock.tick(100) for event in pygame.event.get(): if event.type == pygame.QUIT: run = False window.blit(background, (0, 0)) window.blit(blurred_image, blurred_image.get_rect(center = window.get_rect().center)) pygame.display.flip() pygame.quit() exit() | 2 | 2 |
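If you are on pygame-ce, where the transform blurs mentioned in the question do exist, the PIL round-trip is unnecessary; assuming the same padded blur_surf as in the answer:

# pygame-ce only
blurred_image = pygame.transform.gaussian_blur(blur_surf, radius)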
77,095,776 | 2023-9-13 | https://stackoverflow.com/questions/77095776/fastapi-pydantic-data-validation-for-put-method-if-body-only-contains-the-update | I am learning FastAPI and understanding that its data validation using pydantic is one of its features. But after reading its put method example from its tutorial I have a question if I only want to let the put body contain the updated data(as the URL already has its id), how do I do that? Use the sample code from the tutorial as an example to what I mean, from fastapi import FastAPI from fastapi.encoders import jsonable_encoder from pydantic import BaseModel app = FastAPI() class Item(BaseModel): id: str description: str = "default description" price: Union[float, None] = None tax: float = 10.5 tags: list[str] = [] ... @app.put("/items/{item_id}") #async def update_item(item_id: str, item:Item): async def update_item(item_id: str, item): pass If I code async def update_item(item_id: str, item:Item) then the body has to contain id property otherwise I will get 422 "field required". But I feel that is unnecessary because the URL /items/{item_id} already contains the id I just want body to contain the updated data. But when I coded async def update_item(item_id: str, item), to my surprise the item became the required QUERY PARAMETERS! As its document shows: Why does it become query parameters then? This is my second question. I feel that is wrong because I prefer to query parameters for GET only. --- Update --- I guess the 2 methods Chris provided are the way FastAPI solves my first question (whether id should be one of Item's properties is another question, e.g. check What to do when REST POST provides an ID?), but I come from nodejs background so I would like to provide Nestjs solution in comparison. Using Nestjs sample code here https://docs.nestjs.com/controllers#full-resource-sample @Controller('cats') export class CatsController { @Post() create(@Body() createCatDto: CreateCatDto) { return 'This action adds a new cat'; } ... @Put(':id') update(@Param('id') id: string, @Body() updateCatDto: UpdateCatDto) { return `This action updates a #${id} cat`; } As https://docs.nestjs.com/techniques/validation#mapped-types explains "The PartialType() function returns a type (class) with all the properties of the input type set to optional." export class CreateCatDto { name: string; age: number; breed: string; } export class UpdateCatDto extends PartialType(CreateCatDto) {} | Solution 1 You could have a Pydantic model, which would act as the parent one and include every attribute that is shared between both POST and PUT request bodies. You could then create a submodel, containing only the id attribute and inheriting from the parent model (not the BaseModel), which would be used for POST requests. The example below is based on the one given in FastAPI's documentation. Some related answer can be found here. 
Working Example from fastapi import FastAPI from fastapi.encoders import jsonable_encoder from pydantic import BaseModel from typing import Union, List app = FastAPI() class Item(BaseModel): description: str = "default description" price: Union[float, None] = None tax: float = 10.5 tags: List[str] = [] class ItemCreate(Item): id: str items = { "foo": {"name": "Foo", "price": 50.2}, "bar": {"name": "Bar", "description": "The bartenders", "price": 62, "tax": 20.2}, "baz": {"name": "Baz", "description": None, "price": 50.2, "tax": 10.5, "tags": []}, } @app.get("/items/{item_id}", response_model=Union[Item,str]) async def read_item(item_id: str): if item_id in items: return items[item_id] else: return 'Item not found' @app.put("/items/{item_id}", response_model=Item) async def update_item(item_id: str, item: Item): update_item_encoded = jsonable_encoder(item) items[item_id] = update_item_encoded return update_item_encoded @app.post("/items") async def create_item(item: ItemCreate): new_item_encoded = jsonable_encoder(item) items[item.id] = new_item_encoded return 'Success' Alternatively, as shown in this answer, you could use the same Item model for both PUT and POST requests, with an extra id parameter explicitly defined as Body in the create_item endpoint for POST requests. For instance: from fastapi import Body @app.post("/items") async def create_item(item: Item, id: str = Body(...)): new_item_encoded = jsonable_encoder(item) items[id] = new_item_encoded return 'Success' In that case, the request body would look like this: { "item": { "description": "default description", "price": 0, "tax": 10.5, "tags": [] }, "id": "string" } Solution 2 With regard to the recent update in your question, indicating that (1) item_id should not be left to the client to decide/pass it (which is what you asked for initally), but rather be auto-created on server side (by your db or some other mechanism), and (2) should have the ItemUpdate model inheriting from the Item model with all the fields being optional (as shown in the Nest.js example you provided), here's a working example on how to approach the problem in that way. The @optional decorator in utlis.py module below has been taken from here, which is an update of this original post, so that is compatible with Pydantic V2. Additional related posts around creating an optional/partial decorator can be found here, as well as here, here and here. There also seems to be a related package, namely pydantic-partial (haven't tested it though). 
Working Example main.py from fastapi import FastAPI, Body, HTTPException from fastapi.encoders import jsonable_encoder from pydantic import BaseModel from typing import Union, List from utils import optional import random import string app = FastAPI() items = { "foo": {"name": "Foo", "price": 50.2}, "bar": {"name": "Bar", "description": "The bartenders", "price": 62, "tax": 20.2}, "baz": {"name": "Baz", "description": None, "price": 50.2, "tax": 10.5, "tags": []}, } class Item(BaseModel): description: str = "default description" price: Union[float, None] = None tax: float = 10.5 tags: List[str] = [] @optional class ItemUpdate(Item): pass def check_item_id(item_id): if item_id not in items: raise HTTPException(status_code=400, detail='Item not found') @app.get("/items/{item_id}", response_model=Union[Item, str]) async def read_item(item_id: str): check_item_id(item_id) return items[item_id] @app.post("/items") async def create_item(item: Item): item_encoded = jsonable_encoder(item) item_id = ''.join(random.choices(string.ascii_letters + string.digits, k=5)) items[item_id] = item_encoded return f"Item '{item_id}' has been created" @app.put("/items/{item_id}", response_model=Union[Item, str]) async def update_item(item_id: str, item: ItemUpdate): check_item_id(item_id) old = items[item_id] new = jsonable_encoder(item) old.update((k,v) for k,v in new.items() if v is not None) return old utils.py from pydantic import BaseModel from pydantic import create_model from typing import Optional import inspect def optional(*fields): def dec(cls): fields_dict = {} for field in fields: field_info = cls.__annotations__.get(field) if field_info is not None: fields_dict[field] = (Optional[field_info], None) OptionalModel = create_model(cls.__name__, **fields_dict) OptionalModel.__module__ = cls.__module__ return OptionalModel if fields and inspect.isclass(fields[0]) and issubclass(fields[0], BaseModel): cls = fields[0] fields = cls.__annotations__ return dec(cls) return dec About Query Parameters As for the query parameters, the HTTP protocol does not prevent one from using query params when sending a request using PUT, POST, etc., methods. Hence, in FastAPI, when declaring other endpoint parameters that are not part of the path parameters, such as item_id path parameter in the example above, or explicitly declared otherwise, such as Item in the given example, they are automatically interpreted as query parameters of type str. That's the reason that item is expected as query parameter when removing the Item definition from it. Have a look at this answer and this answer for more details. | 3 | 1 |
77,138,178 | 2023-9-19 | https://stackoverflow.com/questions/77138178/fill-in-zero-values-only-in-the-center-of-a-numpy-array | I'm working with a rasterio raster that I've 'read' into python, so it is now a numpy array. The outer edge of the np array are all zeros, and the interior are all ones, except, in the midst of the ones are occasional zeros, see the example array below. I want to leave all the zeros on the outside of the array alone (i.e. keep them zeros), but want to convert the zeros that are completely surrounded by ones (i.e. the zeros in the middle of the donut of ones) to one. However, I'm not really sure how to start. Current array: import numpy as np arr = np.array([[0, 0, 0, 0, 0, 0, 0, 0], [0, 1, 1, 1, 1, 1, 1, 0], [0, 1, 1, 0, 0, 1, 1, 0], [0, 1, 1, 0, 1, 1, 0, 0], [0, 1, 1, 1, 1, 1, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0]]) Goal array: import numpy as np arr = np.array([[0, 0, 0, 0, 0, 0, 0, 0], [0, 1, 1, 1, 1, 1, 1, 0], [0, 1, 1, 1, 1, 1, 1, 0], [0, 1, 1, 1, 1, 1, 0, 0], [0, 1, 1, 1, 1, 1, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0]]) | Perhaps you can use scipy.ndimage.binary_fill_holes: import numpy as np from scipy.ndimage import binary_fill_holes arr = np.array([[0, 0, 0, 0, 0, 0, 0, 0], [0, 1, 1, 1, 1, 1, 1, 0], [0, 1, 1, 0, 0, 1, 1, 0], [0, 1, 1, 0, 1, 1, 0, 0], [0, 1, 1, 1, 1, 1, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0]]) filled_arr = binary_fill_holes(arr).astype(int) print(filled_arr) Output: [[0 0 0 0 0 0 0 0] [0 1 1 1 1 1 1 0] [0 1 1 1 1 1 1 0] [0 1 1 1 1 1 0 0] [0 1 1 1 1 1 0 0] [0 0 0 0 0 0 0 0]] | 3 | 4 |
77,136,876 | 2023-9-19 | https://stackoverflow.com/questions/77136876/ending-numpy-calculation-early | Given two large arrays, A = np.random.randint(10,size=(10000,2)) B = np.random.randint(10,size=(10000,2)) I would like to determine if any of the vectors have a cross product of zero. We could do C = np.cross(A[:,None,:],B[None,:,:]) and then check if C contains a 0 or not. not C.all() However, this process requires calculating all the cross products which can be time consuming. Instead, I would prefer to let numpy perform the cross product, but IF a zero is reached at any point, then simply cut the whole operation and end early. Does numpy have such an "early termination" operation that will cut numpy operations early if they reach a condition? Something like, np.allfunc() np.anyfunc() The example above is such a case where A and B have an extremely high likelihood of having a zero cross product at some point (in fact is very likely to occur near the start), so much so, that performing a python-for-loop (yuck!) is much faster than using numpy's highly optimized code. In general, what is the fastest way to determine if A and B have a zero cross product? | Does numpy have such an "early termination" operation that will cut numpy operations early if they reach a condition? Put it simply, no, Numpy does not. In practice, this is a bit more complex than that. In fact, the allfunc and anyfunc function you propose similar to np.all and np.any. Such function does some kind of early termination. Here is a proof on my machine: v = np.random.randint(0, 2, 200*1024*1024) > 0 # 2.74 ms ± 61.5 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) %timeit -n 10 v.all() # 3.16 ms ± 46.5 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) v[0:v.size//16] = True %timeit -n 10 v.all() # 3.72 ms ± 77.7 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) v[0:v.size//8] = True %timeit -n 10 v.all() # 4.64 ms ± 80.4 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) v[0:v.size//4] = True %timeit -n 10 v.all() # 6.67 ms ± 127 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) v[0:v.size//2] = True %timeit -n 10 v.all() # 10.5 ms ± 69.9 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) v[0:v.size] = True %timeit -n 10 v.all() The former is so fast that it cannot possibly iterate over the whole array since the memory throughput would reach 71 GiB/s while my RAM can only reach the theoretical maximum bandwidth of 47.7 GiB/s (about 40 GiB/s in practice). My CPU cache is far from being big enough to store a significant part of this array and the array takes 200 MiB in RAM so each boolean takes 1 byte in RAM. Strangely, the speed of all and any is quite disappointing in the first case since v[2] is False in practice. Numpy certainly perform the computation chunk by chunk and perform the early termination only after computing a chunk (if needed). The problem with any and all is that they require a boolean array in parameter. Thus, the boolean array need to be entirely computed first defeating the benefit of any early termination. Alternatively, Numpy provides ufuncs that are designed to operate on arrays in an element-by-element fashion. An example of ufunc is np.ufunc.reduce. They could be used to compose algorithms in such a way early termination would theoretically be possible. This solution would be significantly faster than usual non-ufunc operations when termination is possible pretty early. 
However, when the termination is only possible lately, ufuncs would be much slower than non-ufunc operations. In fact, ufuncs are (currently) fundamentally inefficient compared to non-ufunc operations. This is because ufuncs require an expensive (native) function call per item. This function call, applied on a scalar items, prevent several critical optimizations like the use of SIMD instructions. Still, ufuncs can often be faster than using pure-Python code element-by-element thanks to "vectorization". The current way ufuncs can be composed is AFAIK not sufficient, in your use-case, to write an efficient code benefiting from early termination. The standard way to do early termination in Numpy is to split the computation in many chunks (manually). This solution is generally fast because it is vectorized, often SIMD-friendly, and cache efficient (assuming chunks are small enough). The overhead of Numpy is small as long as chunks are big enough. Here is an example: def compute_by_chunk(A, B): chunkSize = 256 for i in range(0, A.size, chunkSize): for j in range(0, B.size, chunkSize): C = np.cross(A[i:i+chunkSize,None,:], B[None,j:j+chunkSize,:]) if not C.all(): # print(f'Zero found in chunk ({i},{j})') return True return False For your example of input, it takes 239 µs on my machine while the naive Numpy implementation takes 234 ms. This is about 1000 times faster in this case. With smaller chunks, like 32x32, the computation is even faster: 32.3 µs. Actually, smaller chunks are better when there are a lot of zeros (since it takes time to compute the first chunk). 256x256 should be relatively good on most machines (rather cache friendly, memory efficient and with low Numpy overhead). A more simpler solution (which is also generally faster) is to use modules/tools, like Cython or Numba, compiling your code so loops can be fast thanks to compilation to native functions. However, with this solution, you need to reimplement the cross product yourself. Indeed, Numba does not support it yet and Cython will not be faster (than a pure-Python code using Numpy) if you just call Numpy functions in loops. | 4 | 2 |
77,137,092 | 2023-9-19 | https://stackoverflow.com/questions/77137092/how-to-show-all-x-tick-labels-with-seaborn-objects | How do I make it so that it shows all x ticks from 0 to 9? bin diff 1 4 -0.032748 3 9 0.106409 13 7 0.057214 17 3 0.157840 19 0 -0.086567 ... ... ... 1941 0 0.014386 1945 4 0.049601 1947 9 0.059406 1957 1 0.045282 1959 6 -0.033853 ( so.Plot(x='bin', y='diff', data=diff_df) .theme({**axes_style("whitegrid"), "grid.linestyle": ":"}) .add(so.Dots()) .add(so.Range(color='orange'), so.Est()) .add(so.Dot(color='orange'), so.Agg()) .add(so.Line(color='orange'), so.Agg()) .label( x="Image Similarity Bin", y="Difference", color=str.capitalize, ) ) I tried to set xticks in .label, but it doesn't do anything. | As per Parameterizing scales and Customizing legends and ticks, use seaborn.objects.Continuous.tick inside seaborn.objects.Plot.scale .scale(x=so.Continuous().tick(every=1)) import pandas as pd import numpy as np import seaborn.objects as so # sample data np.random.seed(365) rows = 1100 data = {'diff': np.random.random(size=rows), 'bin': np.random.choice(range(10), size=rows)} df = pd.DataFrame(data) # plot (so.Plot(x='bin', y='diff', data=df) .add(so.Dots()) .add(so.Range(color='orange'), so.Est()) .add(so.Dot(color='orange'), so.Agg()) .add(so.Line(color='orange'), so.Agg()) .label(x="Image Similarity Bin", y="Difference", color=str.capitalize) .scale(x=so.Continuous().tick(every=1))) # adjust the ticks here | 2 | 5 |
77,134,662 | 2023-9-19 | https://stackoverflow.com/questions/77134662/how-to-draw-3d-synthesis-of-fourier-series | I saw this by using Tikz: https://tikz.net/fourier_series/ I want to create this 3D synthesis of Fourier Series: with Python 3D plot, there is matplotlib that is able to do that. My MWE / What I have achieved is: # https://pythonnumericalmethods.berkeley.edu/notebooks/chapter24.02-Discrete-Fourier-Transform.html # Generate 3 sine waves with frequencies 1 Hz, 4 Hz, and 7 Hz, # amplitudes 3, 1 and 0.5, and phase all zeros. # Add this 3 sine waves together with a sampling rate 100 Hz import matplotlib.pyplot as plt import numpy as np plt.style.use('seaborn-poster') # sampling rate sr = 100 # sampling interval ts = 1.0/sr t = np.arange(0,1,ts) freq = 1. x = 3*np.sin(2*np.pi*freq*t) x1 = 3*np.sin(2*np.pi*freq*t) freq = 4 x += np.sin(2*np.pi*freq*t) x2 = np.sin(2*np.pi*freq*t) freq = 7 x += 0.5* np.sin(2*np.pi*freq*t) x3 = 0.5* np.sin(2*np.pi*freq*t) # Write a function DFT(x) which takes in one argument, # x - input 1 dimensional real-valued signal. # The function will calculate the DFT of the signal and return the DFT values. def DFT(x): """ Function to calculate the discrete Fourier Transform of a 1D real-valued signal x """ N = len(x) n = np.arange(N) k = n.reshape((N, 1)) e = np.exp(-2j * np.pi * k * n / N) X = np.dot(e, x) return X X = DFT(x) # calculate the frequency N = len(X) n = np.arange(N) T = N/sr freq = n/T n_oneside = N//2 # get the one side frequency f_oneside = freq[:n_oneside] # normalize the amplitude X_oneside =X[:n_oneside]/n_oneside # subplot(2, 2, 3)), the axes will go to the third section of the 2x2 matrix # i.e, to the bottom-left corner. plt.figure(figsize = (12, 6)) plt.subplot(3, 1, 1) plt.plot(t, x, 'r') plt.ylabel('Amplitude') plt.subplot(3, 1, 2) plt.stem(f_oneside, abs(X_oneside), 'b', \ markerfmt=" ", basefmt="-b") plt.xlabel('Freq (Hz)') plt.ylabel('DFT Amplitude |X(freq)|') plt.xlim(0, 10) plt.subplot(3, 2, 5) plt.plot(t, x1, 'g') plt.ylabel('Amplitude') plt.subplot(3, 2, 6) plt.plot(t, x2, 'g') plt.ylabel('Amplitude') plt.tight_layout() plt.show() The plot is still separated in 2D each of them. The single sine wave The Fourier Series (the sum of all the sine waves form number 1) The DFT plot it is not in 3D, so I hope someone can help me with this. | Based on Python Programming and Numerical Methods - A Guide for Engineers and Scientists: Discrete Fourier Transform (DFT) # Generate 3 sine waves with frequencies 1 Hz, 4 Hz, and 7 Hz, # amplitudes 3, 1 and 0.5, and phase all zeros. # Add this 3 sine waves together with a sampling rate 100 Hz import matplotlib.pyplot as plt import numpy as np plt.style.use('seaborn-poster') # sampling rate sr = 100 # sampling interval ts = 1.0/sr t = np.arange(0,1,ts) freq = 1. x = 3*np.sin(2*np.pi*freq*t) freq = 4 x += np.sin(2*np.pi*freq*t) freq = 7 x += 0.5* np.sin(2*np.pi*freq*t) # Write a function DFT(x) which takes in one argument, # x - input 1 dimensional real-valued signal. # The function will calculate the DFT of the signal and return the DFT values. 
def DFT(x): """ Function to calculate the discrete Fourier Transform of a 1D real-valued signal x """ N = len(x) n = np.arange(N) k = n.reshape((N, 1)) e = np.exp(-2j * np.pi * k * n / N) X = np.dot(e, x) return X X = DFT(x) # calculate the frequency N = len(X) n = np.arange(N) T = N/sr freq = n/T n_oneside = N//10 # get the one side frequency f_oneside = freq[:n_oneside] # normalize the amplitude X_oneside =X[:n_oneside]/n_oneside ax = plt.figure().add_subplot(projection='3d') # Plot a sin curve using the x and y axes. x1 = 3*np.sin(2*np.pi*freq*t) x2 = np.sin(2*np.pi*freq*t) x3 = 0.5* np.sin(2*np.pi*freq*t) # The axis direction for the zs. # This is useful when plotting 2D data on a 3D Axes. # The data must be passed as xs, ys. # Setting zdir to 'y' then plots the data to the x-z-plane. #ax.set_xlim(0, 1) ax.plot(t, x, zs=-1, zdir='y', label='DFT(x)') ax.plot(t, x2, zs=4, zdir='y', label='$\sin (2πt)$') ax.plot(t, x1, zs=1, zdir='y', label='$3 \sin (2πt)$') ax.plot(t, x3, zs=7, zdir='y', label='$0.5 \sin (2πt)$') x4 = [1.2] * n_oneside # Create array of 2 with length of 10 y4 = f_oneside z4 = abs(X_oneside) ax.stem(x4, y4, z4, basefmt="-b", label='DFT Amplitude |X(freq)|') ax.set_title('Discrete Fourier Transform') ax.legend(loc='upper left', bbox_to_anchor=[-0.51, 0.5]) ax.set_xlabel('Time domain', fontweight='bold', fontsize=10, labelpad=20) ax.set_ylabel('Frequency domain', fontweight='bold', fontsize=10, labelpad=20) ax.set_zlabel('Amplitude', fontweight='bold', fontsize=10, labelpad=20) ax.view_init(elev=20., azim=-35) plt.show() | 3 | 1 |
77,094,430 | 2023-9-13 | https://stackoverflow.com/questions/77094430/how-to-check-if-a-python-package-is-installed-using-poetry | I'm using Poetry to manage my Python project, and I would like to check if a package is installed. This means the package is actually fetched from the remote repository and is located in the .venv/ folder. I went through the official documentation but didn't find any command that can do that. Any ideas? Thanks. Update: I ended up with a solution by running the following command and parsing its output. Thanks all for the help here! poetry install --dry-run --sync --no-ansi | Assuming you have activated the virtual environment in the .venv folder (using, for example source .venv/bin/activate), you can use pip list to list all of the installed packages in that virtual environment. There is a poetry show command to list all available packages as well that you might want to look into: https://python-poetry.org/docs/cli/#show Update: You can parse the output of the poetry install --dry-run command to see what packages are installed, without actually installing any packages. Look for the lines with both "Installing" and "Skipped" and "reason: Already installed", to see what packages are already installed. | 7 | 7 |
77,134,706 | 2023-9-19 | https://stackoverflow.com/questions/77134706/event-loop-is-closed-is-playwright-already-stopped | Context After initialisation of a playwright browser object in function initialise_playwright_browsercontroller, I try to use its Page object in another function. However, that yields get error: "Event loop is closed! Is Playwright already stopped?" If I use the Page object in the same function (and with statement) in which it is created, the error does not occur. MWE Below is the an MWE that demonstrates the error: from playwright.sync_api import sync_playwright from playwright.sync_api._generated import Locator # type: ignore[import] from playwright.sync_api._generated import Browser, Page from typeguard import typechecked @typechecked def initialise_playwright_browsercontroller( *, start_url: str, ) -> tuple[Browser, Page]: """Creates a Playwright browser, opens a new page, and navigates to a specified URL. Returns: tuple[Browser, Page]: A tuple containing the browser and page objects. """ with sync_playwright() as p: for browser_type in [p.chromium, p.firefox, p.webkit]: if ( browser_type.name != "webkit" and browser_type.name == "firefox" ): browser = browser_type.launch() # Create a new page and navigate to the URL page = browser.new_page() page.goto(start_url) # Return the browser and page objects return browser, page raise ValueError("Error: Could not find browser.") @typechecked def do_something_on_webpage() -> None: """Tries to do something with the object received from the the browser.""" # Declare the browser and page objects. browser: Browser page: Page # Specify the website to go to. start_url: str = "https://github.com" browser, page = initialise_playwright_browsercontroller( start_url=start_url ) print(f"page url = {page.url}") sign_in_button: Locator = page.locator("Sign in") sign_in_button.click() print("Done.") do_something_on_webpage() Question How can I create the Page object elsewhere, and pass it around and use it in other functions? | If you want to use context manager, then you should use yield instead of return from playwright.sync_api import sync_playwright from playwright.sync_api._generated import Locator # type: ignore[import] from playwright.sync_api._generated import Browser, Page from typeguard import typechecked import contextlib @typechecked @contextlib.contextmanager def initialise_playwright_browsercontroller( *, start_url: str, ) -> tuple[Browser, Page]: """Creates a Playwright browser, opens a new page, and navigates to a specified URL. Returns: tuple[Browser, Page]: A tuple containing the browser and page objects. """ with sync_playwright() as p: for browser_type in [p.chromium, p.firefox, p.webkit]: if ( browser_type.name != "webkit" and browser_type.name == "firefox" ): browser = browser_type.launch() # Create a new page and navigate to the URL page = browser.new_page() page.goto(start_url) # Return the browser and page objects yield browser, page return @typechecked def do_something_on_webpage() -> None: """Tries to do something with the object received from the the browser.""" # Specify the website to go to. start_url: str = "https://github.com" with initialise_playwright_browsercontroller( start_url=start_url ) as (browser, page): print(f"page url = {page.url}") sign_in_button: Locator = page.locator("Sign in") sign_in_button.click() print("Done.") do_something_on_webpage() | 2 | 5 |
77,135,853 | 2023-9-19 | https://stackoverflow.com/questions/77135853/extract-date-from-weirdly-formatted-datetime-polars | import polars as pl df = pl.DataFrame(['2023-08-01T06:13:24.448409', '2023-08-01T07:29:34.491027']) print(df.with_columns(pl.col('column_0').str.strptime(pl.Date,'%Y-%m-%d'))) So I have the following mass of code, and for some ungodly reason I cant for the death of me extract the date from those given strings. In the following example I keep getting the error exceptions.ComputeError: strict date parsing failed for 2 value(s) (2 unique): ["2023-08-01T06:13:24.448409", "2023-08-01T07:29:34.491027"] Any ideas on how can I extract the string from such format of datetime? And why I get this error? | To parse a datetime upfront you'll need to provide the full format (date & time) initially then retrieve the date part afterwards. Try the following format: '%Y-%m-%dT%H:%M:%S.%f' | 2 | 3 |
77,128,583 | 2023-9-18 | https://stackoverflow.com/questions/77128583/how-to-find-all-tuple-permutations-in-a-list | I want to create all possible permutations of tuples in a list using python. For example, take the list my_list = [(None, 1), (None,1), (None,1)] I want the generate a list of (2!)^n lists where 2! is the number of combinations of the tuple arrangements and n represents the number of tuples in the list. So the above list should produce 8 distinct lists, for example. I can achieve this if I use the itertools python module in the following way: from itertools import permutations my_list = [(None, 1), (None, 1), (None, 1)] all_permutations = [] for k in permutations(my_list[0]): for j in permutations(my_list[1]): for i in permutations(my_list[2]): all_permutations.append([k,j,i]) print(all_permutations) and I get the expected output: [[(None, 1), (None, 1), (None, 1)], [(None, 1), (None, 1), (1, None)], [(None, 1), (1, None), (None, 1)], [(None, 1), (1, None), (1, None)], [(1, None), (None, 1), (None, 1)], [(1, None), (None, 1), (1, None)], [(1, None), (1, None), (None, 1)], [(1, None), (1, None), (1, None)]] How do I generalize this result to a list of n tuples of size 2 without having to hard code the for-loop? The closest thing I've seen was this question, but it doesn't quite do what I need. Any suggestions would be greatly appreciated! | itertools.product is useful here (if I understand your question correctly). itertools.product is the equivalent to your nested loops and takes as argument multiple iterables. So [permutations(x) for x in my_list] (or map(permutations, my_list)) generically creates a list of iterables from my_list and the asterisk * turns that into separate arguments for product. >>> from itertools import permutations, product >>> my_list = [(None, 1), (None, 1), (None, 1)] >>> list(product(*[permutations(x) for x in my_list])) [((None, 1), (None, 1), (None, 1)), ((None, 1), (None, 1), (1, None)), ((None, 1), (1, None), (None, 1)), ((None, 1), (1, None), (1, None)), ((1, None), (None, 1), (None, 1)), ((1, None), (None, 1), (1, None)), ((1, None), (1, None), (None, 1)), ((1, None), (1, None), (1, None))] >>> my_list = [(None, 1), ("a", "b"), (1,2,3), (None, 4)] >>> list(product(*[permutations(x) for x in my_list])) [((None, 1), ('a', 'b'), (1, 2, 3), (None, 4)), ((None, 1), ('a', 'b'), (1, 2, 3), (4, None)), ((None, 1), ('a', 'b'), (1, 3, 2), (None, 4)), ((None, 1), ('a', 'b'), (1, 3, 2), (4, None)), ((None, 1), ('a', 'b'), (2, 1, 3), (None, 4)), ((None, 1), ('a', 'b'), (2, 1, 3), (4, None)), ((None, 1), ('a', 'b'), (2, 3, 1), (None, 4)), ((None, 1), ('a', 'b'), (2, 3, 1), (4, None)), ((None, 1), ('a', 'b'), (3, 1, 2), (None, 4)), ((None, 1), ('a', 'b'), (3, 1, 2), (4, None)), ((None, 1), ('a', 'b'), (3, 2, 1), (None, 4)), ((None, 1), ('a', 'b'), (3, 2, 1), (4, None)), ((None, 1), ('b', 'a'), (1, 2, 3), (None, 4)), ((None, 1), ('b', 'a'), (1, 2, 3), (4, None)), ((None, 1), ('b', 'a'), (1, 3, 2), (None, 4)), ((None, 1), ('b', 'a'), (1, 3, 2), (4, None)), ((None, 1), ('b', 'a'), (2, 1, 3), (None, 4)), ((None, 1), ('b', 'a'), (2, 1, 3), (4, None)), ((None, 1), ('b', 'a'), (2, 3, 1), (None, 4)), ((None, 1), ('b', 'a'), (2, 3, 1), (4, None)), ((None, 1), ('b', 'a'), (3, 1, 2), (None, 4)), ((None, 1), ('b', 'a'), (3, 1, 2), (4, None)), ((None, 1), ('b', 'a'), (3, 2, 1), (None, 4)), ((None, 1), ('b', 'a'), (3, 2, 1), (4, None)), ((1, None), ('a', 'b'), (1, 2, 3), (None, 4)), ((1, None), ('a', 'b'), (1, 2, 3), (4, None)), ((1, None), ('a', 'b'), (1, 3, 2), (None, 
4)), ((1, None), ('a', 'b'), (1, 3, 2), (4, None)), ((1, None), ('a', 'b'), (2, 1, 3), (None, 4)), ((1, None), ('a', 'b'), (2, 1, 3), (4, None)), ((1, None), ('a', 'b'), (2, 3, 1), (None, 4)), ((1, None), ('a', 'b'), (2, 3, 1), (4, None)), ((1, None), ('a', 'b'), (3, 1, 2), (None, 4)), ((1, None), ('a', 'b'), (3, 1, 2), (4, None)), ((1, None), ('a', 'b'), (3, 2, 1), (None, 4)), ((1, None), ('a', 'b'), (3, 2, 1), (4, None)), ((1, None), ('b', 'a'), (1, 2, 3), (None, 4)), ((1, None), ('b', 'a'), (1, 2, 3), (4, None)), ((1, None), ('b', 'a'), (1, 3, 2), (None, 4)), ((1, None), ('b', 'a'), (1, 3, 2), (4, None)), ((1, None), ('b', 'a'), (2, 1, 3), (None, 4)), ((1, None), ('b', 'a'), (2, 1, 3), (4, None)), ((1, None), ('b', 'a'), (2, 3, 1), (None, 4)), ((1, None), ('b', 'a'), (2, 3, 1), (4, None)), ((1, None), ('b', 'a'), (3, 1, 2), (None, 4)), ((1, None), ('b', 'a'), (3, 1, 2), (4, None)), ((1, None), ('b', 'a'), (3, 2, 1), (None, 4)), ((1, None), ('b', 'a'), (3, 2, 1), (4, None))] >>> list(product(*map(permutations, my_list)) [((None, 1), ('a', 'b'), (1, 2, 3), (None, 4)), ((None, 1), ('a', 'b'), (1, 2, 3), (4, None)), ((None, 1), ('a', 'b'), (1, 3, 2), (None, 4)), [same output...] Unlike your example, the above returns a list of tuples (and not a list of lists). Imho, tuples are more appropriate, but if you insist that you need lists, you can of course also convert them, using: [list(tup) for tup in product(*[permutations(x) for x in my_list])] or list(map(list, product(*map(permutations, my_list)))) | 2 | 4 |
77,098,113 | 2023-9-13 | https://stackoverflow.com/questions/77098113/solving-incompatible-dtype-warning-for-pandas-dataframe-when-setting-new-column | Setting the value of a new dataframe column: df.loc[df["Measure] == metric.label, "source_data_url"] = metric.source_data_url now (as of Pandas version 2.1.0) gives a warning, FutureWarning: Setting an item of incompatible dtype is deprecated and will raise in a future error of pandas. Value ' metric_3' has dtype incompatible with float64, please explicitly cast to a compatible dtype first. The Pandas documentation discusses how the problem can be solved for a Series but it is not clear how to do this iteratively (the line above is called in a loop over metrics and it's the final metric that gives the warning) when assigning a new DataFrame column. How can this be done? | I had the same problem. My intuition of this is that when you are setting value for the first time to the column source_data_url, the column does not yet exists, so pandas creates a column source_data_url and assigns value NaN to all of its elements. This makes Pandas think that the column's dtype is float64. Then it raises this warning. My solution was to create the column with some default value, e.g. empty string, before adding values to it: df["source_data_url"] = "" or None seems also to work: df["source_data_url"] = None | 11 | 10 |
77,131,685 | 2023-9-19 | https://stackoverflow.com/questions/77131685/the-fastest-way-of-pyspark-and-geodataframe-to-check-if-a-point-is-contained-in | A CSV file read through pyspark contains tens of thousands of GPS information (lat, lon) and a feather file read through geodataframe contains millions of polygon information. In the previous question (The best algorithm to get index with specific range at pandas dataframe), I succeeded in creating a geodataframe. However, it was confirmed that a large amount of calculation time was consumed when using geodataframe with data read from pyspark. First, data was loaded through pyspark in the following. data = spark.read.option("header", True).csv("path/file.csv") The data contains second-by-second GPS information about the vehicle as follows: time Vehicle.Location.Latitude Vehicle.Location.Longitude 2023-01-01 00:00:00 37.123456 123.123456 2023-01-01 00:00:01 37.123457 123.123457 Second, the previously created geodataframe data was loaded as follows gdf = gpd.read_feather("/path/file.feather") The geodataframe contains geometry information as follow: id MAX_SPD geometry 0 60 POLYGON ((126.27306 33.19865, 126.27379 33.198... 1 60 POLYGON ((126.27222 33.19865, 126.27306 33.198... Next, I created a user-defined function within pyspark. The purpose is to find out the MAX_SPD value of the polygon that contains given GPS information. If it is included in multiple polygons, max(MAX_SPD) is retrieved. def find_intersection(longitude, latitude): if type(longitude) != float or type(latitude) != float: return -1 mgdf = gdf['geometry'].contains(Point(longitude, latitude)) max_spd = gdf.loc[mgdf, 'MAX_SPD'].max() if math.isnan(max_spd): max_spd = -1 return max_spd find_intersection_udf = udf(find_intersection, IntegerType()) data = data.withColumn("max_spd", find_intersection_udf(col("`Vehicle.Location.Longitude`"), col("`Vehicle.Location.Latitude`"))) data.select("`max_spd`").show() However, Using user-defined function seems to be time consuming. Are there any other good ideas to redue time consumption? | You can use Apache Sedona for your geospatial analysis. https://sedona.apache.org/latest-snapshot/tutorial/sql/ I adapted this notebook to give you an example of how to do this. All you have to do is readin your Points CSV into point_rdd and polygon data in feather file format into Geopandas dataframe and then convert into polygon_rdd. Do a join query and get results. The results will have your attributes like MAX_SPD and then you do a aggregate query to retrieve the max value i.e. max(MAX_SPD) . Notebook : https://github.com/apache/sedona/blob/master/binder/ApacheSedonaCore.ipynb Data used in the following script can be found at this location : https://github.com/apache/sedona/tree/master/binder/data Requirements : pip install apache-sedona pip install geopandas pip install folium Python script : import sys import folium from pyspark import StorageLevel import geopandas as gpd import pandas as pd from pyspark.sql.types import StructType from pyspark.sql.types import StructField from sedona.spark import * config = SedonaContext.builder() .\ config('spark.jars.packages', 'org.apache.sedona:sedona-spark-shaded-3.0_2.12:1.4.1,' 'org.datasyslab:geotools-wrapper:1.4.0-28.2'). \ config("spark.serializer", KryoSerializer.getName). \ config("spark.kryo.registrator", SedonaKryoRegistrator.getName). 
\ getOrCreate() sedona = SedonaContext.create(config) sc = sedona.sparkContext point_rdd = PointRDD(sc, "../sedona_data/arealm-small.csv", 1, FileDataSplitter.CSV, True, 10, StorageLevel.MEMORY_ONLY, "epsg:4326", "epsg:4326") ## Getting approximate total count count_approx = point_rdd.approximateTotalCount print("approximate count", count_approx) # To run analyze please use function analyze print("analyse is here", point_rdd.analyze()) # collect to Python list points_List = point_rdd.rawSpatialRDD.collect()[:5] print("Printng point list") print(points_List) point_rdd_to_geo = point_rdd.rawSpatialRDD.map(lambda x: [x.geom, *x.getUserData().split("\t")]) point_gdf = gpd.GeoDataFrame( point_rdd_to_geo.collect(), columns=["geom", "attr1", "attr2", "attr3"], geometry="geom" ) print("GeoPandas Dataframe") print(point_gdf[:5]) spatial_df = Adapter.\ toDf(point_rdd, ["attr1", "attr2", "attr3"], sedona).\ createOrReplaceTempView("spatial_df") spatial_gdf = sedona.sql("Select attr1, attr2, attr3, geometry as geom from spatial_df") spatial_gdf.show(5, False) ## Apache Sedona spatial partitioning method can # significantly speed up the join query. # Three spatial partitioning methods are available: # KDB-Tree, Quad-Tree and R-Tree. # Two SpatialRDD must be partitioned by the same way. ## Very Important : Choose the Same Spatial Paritioning Everywhere. in this case GridType.KDBTREE done_value = point_rdd.spatialPartitioning(GridType.KDBTREE) print("spatial partitioniing done value = ", done_value) polygon_rdd = PolygonRDD(sc, "../sedona_data/primaryroads-polygon.csv", FileDataSplitter.CSV, True, 11, StorageLevel.MEMORY_ONLY, "epsg:4326", "epsg:4326") print(polygon_rdd.analyze()) polygon_rdd.spatialPartitioning(point_rdd.getPartitioner()) ### This retrieves the same paritioner i.e. 
GridType.KDBTREE # building an index point_rdd.buildIndex(IndexType.RTREE, True) polygon_rdd.buildIndex(IndexType.RTREE, True) # Perform Spatial Join Query result = JoinQuery.SpatialJoinQueryFlat(point_rdd, polygon_rdd, True, False) print("Result of the Join Query") print(result.take(2)) schema = StructType( [ StructField("geom_left", GeometryType(), False), StructField("geom_right", GeometryType(), False) ] ) # Set verifySchema to False spatial_join_result = result.map(lambda x: [x[0].geom, x[1].geom]) sedona_df = sedona.createDataFrame(spatial_join_result, schema, verifySchema=False) print("Schema of Sedona Dataframe") print(sedona_df.printSchema()) print("Values here") sedona_df.show(n=10, truncate=False) gdf = gpd.GeoDataFrame(sedona_df.rdd.collect(), columns=["geom_left", "geom_right"], geometry="geom_left") gdf_points = gpd.GeoDataFrame(sedona_df.rdd.collect(), columns=["geom_left", "geom_right"], geometry="geom_right") folium_map = folium.Map(zoom_start=0.0001, tiles="CartoDB positron") for _, r in gdf_points.iterrows(): # Without simplifying the representation of each borough, # the map might not be displayed sim_geo = gpd.GeoSeries(r["geom_right"]).simplify(tolerance=0.000001) geo_j = sim_geo.to_json() geo_j = folium.GeoJson(data=geo_j, style_function=lambda x: {"fillColor": "red"}) geo_j.add_to(folium_map) for _, r in gdf.iterrows(): # Without simplifying the representation of each borough, # the map might not be displayed sim_geo = gpd.GeoSeries(r["geom_left"]).simplify(tolerance=0.000001) geo_j = sim_geo.to_json() geo_j = folium.GeoJson(data=geo_j, style_function=lambda x: {"fillColor": "yellow"}) geo_j.add_to(folium_map) folium_map.show_in_browser() Output : approximate count 3000 analyse is here True Printng point list [Geometry: Point userData: testattribute0 testattribute1 testattribute2, Geometry: Point userData: testattribute0 testattribute1 testattribute2, Geometry: Point userData: testattribute0 testattribute1 testattribute2, Geometry: Point userData: testattribute0 testattribute1 testattribute2, Geometry: Point userData: testattribute0 testattribute1 testattribute2] GeoPandas Dataframe geom attr1 attr2 attr3 0 POINT (-88.33149 32.32414) testattribute0 testattribute1 testattribute2 1 POINT (-88.17593 32.36076) testattribute0 testattribute1 testattribute2 2 POINT (-88.38895 32.35707) testattribute0 testattribute1 testattribute2 3 POINT (-88.22110 32.35078) testattribute0 testattribute1 testattribute2 4 POINT (-88.32399 32.95067) testattribute0 testattribute1 testattribute2 +--------------+--------------+--------------+----------------------------+ |attr1 |attr2 |attr3 |geom | +--------------+--------------+--------------+----------------------------+ |testattribute0|testattribute1|testattribute2|POINT (-88.331492 32.324142)| |testattribute0|testattribute1|testattribute2|POINT (-88.175933 32.360763)| |testattribute0|testattribute1|testattribute2|POINT (-88.388954 32.357073)| |testattribute0|testattribute1|testattribute2|POINT (-88.221102 32.35078) | |testattribute0|testattribute1|testattribute2|POINT (-88.323995 32.950671)| +--------------+--------------+--------------+----------------------------+ only showing top 5 rows spatial partitioniing done value = True True Result of the Join Query [[Geometry: Polygon userData: , Geometry: Point userData: testattribute0 testattribute1 testattribute2], [Geometry: Polygon userData: , Geometry: Point userData: testattribute0 testattribute1 testattribute2]] Schema of Sedona Dataframe root |-- geom_left: geometry (nullable = false) 
|-- geom_right: geometry (nullable = false) None Values here +------------------------------------------------------------------------------------------------------------------------+----------------------------+ |geom_left |geom_right | +------------------------------------------------------------------------------------------------------------------------+----------------------------+ |POLYGON ((-88.40049 30.474455, -88.40049 30.692167, -88.006968 30.692167, -88.006968 30.474455, -88.40049 30.474455)) |POINT (-88.35457 30.634836) | |POLYGON ((-88.40047 30.474213, -88.40047 30.691941, -88.007269 30.691941, -88.007269 30.474213, -88.40047 30.474213)) |POINT (-88.35457 30.634836) | |POLYGON ((-88.403842 32.448773, -88.403842 32.737139, -88.099114 32.737139, -88.099114 32.448773, -88.403842 32.448773))|POINT (-88.347036 32.454904)| |POLYGON ((-88.403842 32.448773, -88.403842 32.737139, -88.099114 32.737139, -88.099114 32.448773, -88.403842 32.448773))|POINT (-88.39203 32.507796) | |POLYGON ((-88.403842 32.448773, -88.403842 32.737139, -88.099114 32.737139, -88.099114 32.448773, -88.403842 32.448773))|POINT (-88.349276 32.548266)| |POLYGON ((-88.403842 32.448773, -88.403842 32.737139, -88.099114 32.737139, -88.099114 32.448773, -88.403842 32.448773))|POINT (-88.329313 32.618924)| |POLYGON ((-88.403823 32.449032, -88.403823 32.737291, -88.09933 32.737291, -88.09933 32.449032, -88.403823 32.449032)) |POINT (-88.347036 32.454904)| |POLYGON ((-88.403823 32.449032, -88.403823 32.737291, -88.09933 32.737291, -88.09933 32.449032, -88.403823 32.449032)) |POINT (-88.39203 32.507796) | |POLYGON ((-88.403823 32.449032, -88.403823 32.737291, -88.09933 32.737291, -88.09933 32.449032, -88.403823 32.449032)) |POINT (-88.349276 32.548266)| |POLYGON ((-88.403823 32.449032, -88.403823 32.737291, -88.09933 32.737291, -88.09933 32.449032, -88.403823 32.449032)) |POINT (-88.329313 32.618924)| +------------------------------------------------------------------------------------------------------------------------+----------------------------+ only showing top 10 rows Your map should have been opened in your browser automatically. Following is the map opened in browser. | 2 | 4 |
77,131,670 | 2023-9-19 | https://stackoverflow.com/questions/77131670/python-pandas-replace-the-values-of-a-dataframe-by-searching-in-another-datafram | I have two dataframes. df1 contains CITIES and the total number of VISITS. df2 contains the VISITS records. Periodically df1 is updated with the data from df2 with the new VISITS. df1 example (before updating) ID NAME VISITS --- 01 CITY1 01 02 CITY2 01 ... 06 CITYZ 12 df2 example CITY NUMBER --- ... CITY1 01 CITY2 01 <--- highest of CITY2 CITYZ 13 CITY1 02 ... CITYZ 14 CITY1 03 <--- highest of CITY1 CITYZ 15 <--- highest of CITYZ To update it will look for df1['NAME'] in df2['CITY'] (this is the correlation) and take the highest df2['NUMBER'] and put it in df1['VISITS'] of that CITY. df1 after updating ID NAME VISITS --- 01 CITY1 03 <--- updated 02 CITY2 01 <--- updated or not, it doesn't matter ... 06 CITYZ 15 <--- updated my approach: df2.loc[df2['CITY'] == 'CITYZ', 'NUMBER'].max() I get the max number of "CITIZ" (hardcoded), but I don't know how to link it to df1. The next is clearly wrong, but it is the idea: df1['VISITS'] = df2.loc[df2['CITY'] == df1['NAME'], 'NUMBER'].max() This "solution" gives the following error: ValueError: Can only compare identically-labeled Series objects | You can remove duplicates from df2 and keep only the last updated values (if your cities are already sorted by 'NUMBER', you can remove sort_values): df1['VISITS'] = df1['NAME'].map(df2.sort_values('NUMBER', ascending=True) .drop_duplicates('CITY', keep='last') .set_index('CITY')['NUMBER']) print(df1) # Output ID NAME VISITS 0 1 CITY1 3 1 2 CITY2 1 2 6 CITYZ 15 Based on @Nick's answer: df1['VISITS'] = df1['NAME'].map(df2.groupby('CITY')['NUMBER'].max()) Input data: >>> df1 ID NAME VISITS 0 1 CITY1 1 1 2 CITY2 1 2 6 CITYZ 12 >>> df2 CITY NUMBER 0 CITY1 1 1 CITY2 1 2 CITYZ 13 3 CITY1 2 4 CITYZ 14 5 CITY1 3 6 CITYZ 15 | 2 | 3 |
77,118,099 | 2023-9-16 | https://stackoverflow.com/questions/77118099/typeerror-updater-init-got-an-unexpected-keyword-argument-token | I have this code which is suppose to look for /help command in telegram. So once you type /help in the telegram channel it will give you options. The code is as follows. from telegram import Update from telegram.ext import Updater, CommandHandler, MessageHandler, CallbackContext from telegram.ext import filters # Define your bot token here TOKEN = "YOUR_BOT_TOKEN" def start(update, context): update.message.reply_text("Welcome to your Telegram bot!") def help_command(update, context): update.message.reply_text("You requested help. Here are some available commands:\n" "/help - Show this help message\n" "/start - Start the bot") def handle_message(update, context): text = update.message.text if text == '/start': start(update, context) elif text == '/help': help_command(update, context) def main(): # Initialize the Updater with your bot token updater = Updater(token=TOKEN, use_context=True) dispatcher = updater.dispatcher # Define the command handlers dispatcher.add_handler(CommandHandler("start", start)) dispatcher.add_handler(CommandHandler("help", help_command)) # Handle non-command messages using a filter dispatcher.add_handler(MessageHandler(Filters.text & ~Filters.command, handle_message)) # Start the bot updater.start_polling() updater.idle() if __name__ == '__main__': main() However I am getting this error TypeError: Updater.__init__() got an unexpected keyword argument 'token' Could you please advise how I can resolve this error. | I used telegram.Bot class and then I passed it to the Updater like this: from telegram import Update, Bot from telegram.ext import Updater, CommandHandler, MessageHandler, CallbackContext from telegram.ext import filters #defining your bot token here TOKEN = "YOUR_BOT_TOKEN" def start(update: Update, context: CallbackContext): update.message.reply_text("Welcome to your Telegram bot!") def help_command(update: Update, context: CallbackContext): update.message.reply_text("You requested help. Here are some available commands:\n" "/help - Show this help message\n" "/start - Start the bot") def handle_message(update: Update, context: CallbackContext): text = update.message.text if text == '/start': start(update, context) elif text == '/help': help_command(update, context) def main(): #creating a bot instance bot = Bot(token=TOKEN) #creating an updater instance and pass the bot instance updater = Updater(bot=bot, use_context=True) dispatcher = updater.dispatcher #defining the command handlers dispatcher.add_handler(CommandHandler("start", start)) dispatcher.add_handler(CommandHandler("help", help_command)) #handling non-command messages using a filter dispatcher.add_handler(MessageHandler(filters.Text & ~filters.Command, handle_message)) #starting the bot updater.start_polling() updater.idle() if __name__ == '__main__': main() | 4 | 2 |
77,130,750 | 2023-9-18 | https://stackoverflow.com/questions/77130750/gitlab-ci-job-token-does-not-authenticate-with-private-python-repository | I tried to use the CI_JOB_TOKEN in a GitLab CI pipeline to install a Python Package from a different project's package registry. According to the documentation i should just have to add my project to the allowlist of the corresponding project and run the pipeline. However i always get the following 401 error when running this command pip install --extra-index-url https://__token__:[email protected]/api/v4/projects/<projId>/packages/pypi/simple <package> Looking in indexes: https://pypi.org/simple, https://__token__:****@gitlab.com/api/v4/projects/<projId>/packages/pypi/simple WARNING: 401 Error, Credentials not correct for https://gitlab.com/api/v4/projects/<projId>/packages/pypi/simple/<package>/ ERROR: Could not find a version that satisfies the requirement <package> (from versions: none) ERROR: No matching distribution found for <package> WARNING: 401 Error, Credentials not correct for https://gitlab.com/api/v4/projects/<projId>/packages/pypi/simple/pip/ I tried to remove the allow list protections completely and it still did not work. As a workaround i just added a secret variable in form of an Access Token to the pipelines in the GUI and with that it works, but this seems rather hacky since i need to update the token every few weeks. Did i miss anything in the documentation and the CI_JOB_TOKEN does not have access to these registries ? | According to the GitLab PyPI registry authentication documentation, you should use the username gitlab-ci-token when authenticating with a job token. This might be confusing because some other examples use __token__ even though GitLab does not accept this username unless you are using an access token literally named __token__. __token__ is normally used for tokens on PyPI.org, however. | 6 | 10 |
77,109,279 | 2023-9-15 | https://stackoverflow.com/questions/77109279/how-do-i-locate-minima-in-an-array | I have a code to plot a heatmap for a set of data, represented as (x, y, f(x, y)), and I want to find the local minimum points. import numpy as np import math import matplotlib.pyplot as plt import matplotlib as mpl from scipy.interpolate import griddata data = np.genfromtxt('data.dat', skip_header=1, delimiter=' ') x, y, z = data[:, 1], data[::, 0], data[:, 2] x, y, z = x*180/math.pi, y*180/math.pi, z - min(z) xi, yi = np.linspace(max(x), min(x), 1000), np.linspace(max(y), min(y), 1000) xi, yi = np.meshgrid(xi, yi) zi = griddata((x, y), z, (xi, yi), method='linear') plt.figure(figsize=(10,5)) plt.pcolormesh(xi, yi, zi, shading='auto', cmap='jet') plt.colorbar(label='Heatmap') plt.gca().invert_yaxis() plt.show() Here's code to generate some fake data: import math with open('data.dat', 'w') as arquivo: for x in range(20): for y in range(20): z = -math.exp(math.sin(x*y)- math.cos(y)) arquivo.write(f"{x}\t\t{y}\t\t{z}\n") Heatmap example with minimum points circled: I tried to use np.gradient, thinking that maybe by taking two derivatives I would be able to determine the local minimum points (zero in the first derivative and negative in the second), but I was not able to make any of it work. | To find local minima, we usually used some gradient based optimizations like gradient descent. However, it is not easy to find all local minima unless doing a lot of "restart" (generally people are happy with one local minimum). One straightforward method to your problem is using grid search: if the current point is less than the neighbor around it, it is one local minimum. The code snippet is below # Function to get the neighbors of a given point (i,j) def get_neighbors(i, j, shape): neighbors = [] for x in [-1, 0, 1]: for y in [-1, 0, 1]: ni, nj = i + x, j + y if (0 <= ni < shape[0]) and (0 <= nj < shape[1]) and (x, y) != (0, 0): neighbors.append((ni, nj)) return neighbors local_minima = [] # Iterate over the 2D grid for i in range(zi.shape[0]): for j in range(zi.shape[1]): current_value = zi[i, j] neighbors = get_neighbors(i, j, zi.shape) # Check if the current point is less than all its neighbors if all(current_value < zi[n[0], n[1]] for n in neighbors): local_minima.append((xi[i, j], yi[i, j], current_value)) # Print the local minima for loc in local_minima: print(f"Local minimum value {loc[2]} at location ({loc[0]}, {loc[1]}).") And then plot the local minima # Marking all the local minima on the plot for loc in local_minima: plt.scatter(loc[0], loc[1], color='red', s=100, marker='x') | 5 | 3 |
77,130,056 | 2023-9-18 | https://stackoverflow.com/questions/77130056/converting-a-list-to-pandas-dataframe-where-list-contains-dictionary | I wanted to convert a list to pandas dataframe, where the first element of the list is a dictionary. I have below code import pandas as pd import numpy as np pd.DataFrame([{'aa' : 10}, np.nan]) However this fails with below message Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.11/site-packages/pandas/core/frame.py", line 782, in __init__ arrays, columns, index = nested_data_to_arrays( ^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/pandas/core/internals/construction.py", line 498, in nested_data_to_arrays arrays, columns = to_arrays(data, columns, dtype=dtype) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/pandas/core/internals/construction.py", line 832, in to_arrays arr, columns = _list_of_dict_to_arrays(data, columns) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/pandas/core/internals/construction.py", line 912, in _list_of_dict_to_arrays pre_cols = lib.fast_unique_multiple_list_gen(gen, sort=sort) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "pandas/_libs/lib.pyx", line 374, in pandas._libs.lib.fast_unique_multiple_list_gen File "/usr/local/lib/python3.11/site-packages/pandas/core/internals/construction.py", line 910, in <genexpr> gen = (list(x.keys()) for x in data) ^^^^^^ AttributeError: 'float' object has no attribute 'keys' Could you please help how to resolve this issue? | Enclose your list into np.array: pd.DataFrame(np.array([{'aa' : 10}, np.nan])) 0 0 {'aa': 10} 1 NaN Though you list is quite small, here's timings comparison just for the case: In [777]: %timeit pd.DataFrame(np.array([{'aa' : 10}, np.nan])) 26.6 µs ± 220 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [778]: %timeit pd.Series([{'aa' : 10}, np.nan]).to_frame() 49.6 µs ± 911 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) | 2 | 1 |
77,101,344 | 2023-9-14 | https://stackoverflow.com/questions/77101344/selenium-headless-not-functioning-with-buttons-properly | I have some code to open a search url for a site, click the first result, and click a button. This all works fine until I try to use headless Chrome. Code (working -> not using headless Chrome): from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.support.ui import WebDriverWait browser = webdriver.Chrome() browser.get("https://www.google.com/search?q=chatgpt") f=browser.find_elements(By.TAG_NAME, "h3") f[0].click() button = browser.find_elements(By.TAG_NAME, 'button') button[0].click() print("Page title was '{}'".format(browser.title)) input() Output (which is what I want): DevTools listening on ws://127.0.0.1:58358/devtools/browser/e8f33ba5-9423-4900-9d36-035615599b61 Page title was 'ChatGPT' [6464:11928:0914/044721.426:ERROR:device_event_log_impl.cc(225)] [04:47:21.426] USB: usb_service_win.cc:415 Could not read device interface GUIDs: The system cannot find the file specified. (0x2) Code (not working -> using headless Chrome): from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.support.ui import WebDriverWait chrome_options = webdriver.ChromeOptions() chrome_options.add_argument("--no-sandbox") chrome_options.add_argument("--headless") chrome_options.add_argument("--disable-gpu") browser = webdriver.Chrome(options=chrome_options) browser.get("https://www.google.com/search?q=chatgpt") f=browser.find_elements(By.TAG_NAME, "h3") f[0].click() button = browser.find_elements(By.TAG_NAME, 'button') button[0].click() print("Page title was '{}'".format(browser.title)) input() Error: DevTools listening on ws://127.0.0.1:58289/devtools/browser/e4c33772-2e8c-4060-96b6-6aa730ef53c2 [0914/044642.023:INFO:CONSOLE(0)] "Error with Permissions-Policy header: Origin trial controlled feature not enabled: 'unload'.", source: (0) [0914/044644.747:INFO:CONSOLE(0)] "Error with Permissions-Policy header: Origin trial controlled feature not enabled: 'unload'.", source: (0) [0914/044645.283:INFO:CONSOLE(0)] "Error with Permissions-Policy header: Origin trial controlled feature not enabled: 'browsing-topics'.", source: (0) [0914/044645.284:INFO:CONSOLE(0)] "Error with Permissions-Policy header: Origin trial controlled feature not enabled: 'interest-cohort'.", source: (0) [0914/044645.435:INFO:CONSOLE(0)] "Refused to execute script from 'https://chat.openai.com/cdn-cgi/challenge-platform/h/g/scripts/alpha/invisible.js?ts=1694649600' because its MIME type ('') is not executable, and strict MIME type checking is enabled.", source: about:blank (0) [0914/044646.033:INFO:CONSOLE(0)] "Error with Permissions-Policy header: Origin trial controlled feature not enabled: 'browsing-topics'.", source: (0) [0914/044646.033:INFO:CONSOLE(0)] "Error with Permissions-Policy header: Origin trial controlled feature not enabled: 'interest-cohort'.", source: (0) Traceback (most recent call last): File "d:\RapidAPI\willitworkorwillittwerk.py", line 14, in <module> button[0].click() ~~~~~~^^^ IndexError: list index out of range I tried google, youtube, chatgpt, and tiktok. Most sites are giving me the same thing, but example.com worked for some reason. This is my first time trying to use headless mode. 
I asked chatgpt and it said this isn't intended behavior, so if anyone has any information, that would be great. I also tried undetected_chromedriver, which I might just use, but it gave similar errors. Chrome Version: 117.0.5938.63 Webdriver Version: 117.0.5938.62 Selenium Version: 4.12.0 i added --headless==new it removed the cloudflare errors but im still getting the out of index error but when its not headless i dont get it | You have to use the new headless mode to get the same results as normal mode: chrome_options.add_argument("--headless=new") You'll see that if you search for --headless=new in the Selenium documentation on https://www.selenium.dev/documentation/webdriver/browsers/chrome/ If you're still having issues, try SeleniumBase UC Mode after pip install seleniumbase: from seleniumbase import Driver try: driver = Driver(uc=True, headless2=True) driver.get("https://www.google.com/search?q=chatgpt") driver.click("h3") driver.click("button") print(driver.title) finally: driver.quit() Here's the result after running with python: ChatGPT | 3 | 1 |