question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
76,895,565 | 2023-8-13 | https://stackoverflow.com/questions/76895565/python-dunder-methods-wrapped-as-property | I stumbled upon this code that I found weird, as it seems to violate the fact that Python builtins call dunder methods directly from the class of the object. Using __call__ as an example, if we define class A as follows: class A: @property def __call__(self): def inner(): return 'Called.' return inner a = A() a() # return 'Called.' type(a).__call__(a) # return 'property' object is not callable. However, this behaviour seems to contradict what's said in Python's official documentation: object.__call__(self[, args...]) Called when the instance is "called" as a function; if this method is defined, x(arg1, arg2, ...) roughly translates to type(x).__call__(x, arg1, ...). Can anyone explain what is going on here? | Yes, but it respects the way to retrieve a method from a given function - we can see that the __get__ method is called: In the code below, I just replaced property with a simpler descriptor that will retrieve its "func" - and used it as the __call__ method. In [34]: class X: ...: def __init__(self, func): ...: self.func = func ...: def __get__(self, instance, owner): ...: print("descriptor getter called") ...: return self.func ...: In [35]: class Y: ...: __call__ = X(lambda: "Z") ...: In [36]: y = Y() In [37]: y() descriptor getter called Out[37]: 'Z' So, the "dunder" functionality just retrieved the method through its __get__ as usual for all methods. What was skipped is the step that goes through __getattribute__ - Python will go directly to the __call__ slot in the A class, and not go through the normal lookup sequence, by calling __getattribute__, which starts at the class (for a descriptor), then looks at the instance, then back to the class (for a regular attribute): it assumes that if there is something at the instance's class dunder slot it is a method, and uses its __get__ method accordingly. A function's __get__ is used when retrieving it from an instance - as that is the mechanism that injects the instance as the self argument for the call. And __get__ is exactly the thing that property replaces to perform its "magic". To demonstrate that __getattribute__ is not called, in IPython, I had to encapsulate "y" inside a dummy function, otherwise IPython would trigger __getattribute__ when trying to autocomplete stuff: In [42]: class Y: ...: __call__ = X(lambda: "Z") ...: def __getattribute__(self, name): ...: print("getattribute called") ...: return super().__getattribute__(name) ...: In [43]: def w(): ...: y = Y() ...: return y() ...: In [44]: w() descriptor getter called Out[44]: 'Z' # in contrast with: In [46]: Y().__call__() getattribute called descriptor getter called Out[46]: 'Z' | 6 | 2 |
76,895,027 | 2023-8-13 | https://stackoverflow.com/questions/76895027/pythonic-way-to-find-index-of-certain-char-in-a-circular-manner | Let's say I have a string like: 'abcdefgha' I'd like to find the index of the next character a after the index 2 (in a circular manner). Meaning it should find index 7 in this case (via mystr.index('a', 2)); however, in this case: 'abcdefgh' it should return index 0. Is there any such built-in function? | There isn't a builtin for this, but you can easily write a function: def index_circular(s: str, sub: str, n: int) -> int: try: # Search starting from n return s.index(sub, n) except ValueError: # Wrap around and search from the start, until n return s.index(sub, 0, n+len(sub)-1) In use: >>> n = 2 >>> c = 'a' >>> index_circular('abcdefgha', c, n) 8 >>> index_circular('abcdefgh', c, n) 0 >>> index_circular('bcdefgh', c, n) Traceback (most recent call last): ... ValueError: substring not found (Note that 'a' actually occurs at index 8, not 7, in the first case.) Note: In the second s.index call, I'm setting the end parameter in order to avoid searching parts of the string that have already been searched. This is a bit of a premature optimization, but it's also a bit of a clarification about exactly which parts of the string are being searched in each step. The +len(sub)-1 is to allow for multi-character sub that spans index n, like: >>> index_circular('abc', 'ab', 1) 0 | 5 | 5 |
76,886,238 | 2023-8-11 | https://stackoverflow.com/questions/76886238/valueerror-when-using-custom-new-in-python-enum | I'm encountering an issue with the following Python code where a ValueError is raised when trying to create an instance of an Enum with a value that doesn't correspond to any defined enum member: from enum import Enum class Option(Enum): OPTION_1 = "Option 1" OPTION_2 = "Option 2" NONE = "" def __new__(cls, value): try: obj = object.__new__(cls) obj._value_ = value return obj except ValueError: return Option.NONE tmp = Option("Option 3") The ValueError seems to be related to how the __new__ method is handling the invalid values. I want the code to create an instance of the NONE member in case an invalid value is provided, but it doesn't seem to be working as expected. Why is this code resulting in a ValueError and how can I achieve the desired behavior of using the NONE member for invalid values? | The __new__ method is used during enum class creation; after that Enum.__new__ is swapped in, and it only does look-ups. To handle situations like these, use the _missing_ method: @classmethod def _missing_(cls, value): return cls.NONE Disclosure: I am the author of the Python stdlib Enum, the enum34 backport, and the Advanced Enumeration (aenum) library. | 3 | 4 |
76,894,477 | 2023-8-13 | https://stackoverflow.com/questions/76894477/regular-expression-for-identical-characters-despite-line-breaks-and-spaces | How can I create a regular expression in python that matches consecutive identical characters, regardless of whether line breaks or spaces are in between? The number of identical characters should be adjustable. examples (e can be any character except newline or space): match: eee, e e e, e e e no match: ebe, e b e, e e attempts: (\S)[\s\n]*\1{2} (\S)(?:\s|\n)*\1{2} | You may use this regex: \A\s*(\S)(?:\s*\1\s*)+\Z RegEx Demo RegEx Details: \A: asserts position at start of the string \s*: Match 0 or more whitespaces (\S): Match any non-whitespace character and capture in group #1 (?:\s*\1\s*)+: Match same value we captured in group #1 surrounded with 0 or more whitespaces on either side. Repeat this group 1 or more times \Z: asserts position at the end of the string | 3 | 2 |
76,893,383 | 2023-8-13 | https://stackoverflow.com/questions/76893383/python-remove-part-of-a-string-based-on-non-case-sensitive-content | I'm learning Python so I'm not an expert. I have a string like the following one (the line break is \n): test1 test2 othertext test3 test4 I would like to remove the othertext line but I can't figure out how to do it. I want this: test1 test2 test3 test4 Othertext is a text written by hand so I have to consider the possible errors: "OTherTEXT", "other text", "Othe rtext". It's always in one line, so maybe a .strip() could be enough? Any idea on how I can do this easily? Should I make a loop that "strips" every part of text between \n and the following \n? I tried to remove the text with a .replace("othertext", "") but it's a weak implementation. | Case insensitivity doesn't ignore spaces, so you have to first remove the spaces yourself (with .replace(" ", "")) and then do the comparison. For the comparison part, use .lower() to convert the text into its lowercase version. text = """test1 test2 otHeR teXT test3 test4""" def remove_line(text: str, word: str) -> str: return "\n".join( [line for line in text.splitlines() if line.replace(" ", "").lower() != word] ) print(remove_line(text, "othertext")) output: test1 test2 test3 test4 | 2 | 2 |
76,893,273 | 2023-8-13 | https://stackoverflow.com/questions/76893273/shared-variable-between-parent-and-child-in-python | I have a global configuration pp that changes at runtime and needs to be shared across all parent/child objects. class Config: pp = 'Init' def __init__(self): pass class Child(Config): def __init__(self, name): self.cc = name par = Config() print(f"Parent: {par.pp}") par.pp = "123" print(f"Parent: {par.pp}") child = Child('XYZ') print(f"Child-1: {child.pp} - {child.cc}") This prints: Parent: Init Parent: 123 Child-1: Init - XYZ The third line is expected to be Child-1: 123 - XYZ How can I implement that in a clean way? UPDATE: Currently it works with a method like: class Config: pp = 'Init' def __init__(self): pass def set_pp(self, val): type(self).pp = val | In your example, the parent of Child is not par (which is an instance of Config) but the class Config. So you could change the value directly on the class, like this: class Config: pp = 'Init' def __init__(self): pass class Child(Config): def __init__(self, name): self.cc = name print(f"Parent: {Config.pp}") Config.pp = "123" print(f"Parent: {Config.pp}") child = Child('XYZ') print(f"Child-1: {child.pp} - {child.cc}") Note that if you want to keep the same syntax, you could write: class Config: _pp = "Init" @property def pp(self): return self._pp @pp.setter def pp(self, value): self.__class__._pp = value def __init__(self): pass class Child(Config): def __init__(self, name): self.cc = name par = Config() print(f"Parent: {par.pp}") par.pp = "123" print(f"Parent: {par.pp}") child = Child('XYZ') print(f"Child-1: {child.pp} - {child.cc}") In this version, pp is set from the parent object onto the class Config. | 2 | 2 |
76,892,500 | 2023-8-13 | https://stackoverflow.com/questions/76892500/how-to-call-databricks-rest-api-to-list-jobs-run | I am currently developing a Python script to retrieve a comprehensive list of all the jobs that were executed yesterday. However, I'm encountering an issue with the script's pagination mechanism using tokens. Despite my attempts to loop through the pagination process, the resulting output remains unchanged. Here is the code import requests import pandas as pd import math import datetime import json def fetch_and_process_job_runs(base_uri, api_token, params): endpoint = '/api/2.1/jobs/runs/list' headers = {'Authorization': f'Bearer {api_token}'} all_data = [] # To store all the data from multiple pages while True: # print(params) response = requests.get(base_uri + endpoint, headers=headers, params=params) response_json = response.json() data = [] for run in response_json["runs"]: start_time_ms = run["start_time"] start_time_seconds = start_time_ms / 1000 start_time_readable = datetime.datetime.fromtimestamp(start_time_seconds).strftime('%Y-%m-%d %H:%M:%S') data.append({ "job_id": run["job_id"], "creator_user_name": run["creator_user_name"], "run_name": run["run_name"], "run_page_url": run["run_page_url"], "run_id": run["run_id"], "execution_duration_in_mins": math.ceil(int(run.get('execution_duration')) / (1000 * 60)), "result_state": run["state"].get("result_state"), "start_time": start_time_readable }) all_data.extend(data) df = pd.DataFrame(all_data) print(df) if response_json.get("has_more") == True: next_page_token = response_json.get("next_page_token") params['next_page_token'] = next_page_token else: break df = pd.DataFrame(all_data) return df # Replace with your actual values now = datetime.datetime.utcnow() yesterday = now - datetime.timedelta(days=1) start_time_from = int(yesterday.replace(hour=0, minute=0, second=0, microsecond=0).timestamp()) * 1000 start_time_to = int(yesterday.replace(hour=23, minute=59, second=59, microsecond=999999).timestamp()) * 1000 params = { # "start_time_from": start_time_from, # "start_time_to": start_time_to, "expand_tasks": True } baseURI = 'https://adb-xxxxxxxxxxxxxx.azuredatabricks.net' apiToken = 'xxxxxxxxxxxxxxxxxxxxxxxxxx' result_df = fetch_and_process_job_runs(baseURI, apiToken, params) print(result_df) Please help me. | I noticed that the value of next_token wasn't changing in API response and then figured that you have a very small error in your code. The parameter to be passed in request is page_token and not next_page_token. As per documentation at https://docs.databricks.com/api/workspace/jobs/list, page_token string Use next_page_token or prev_page_token returned from the previous request to list the next or previous page of jobs respectively. So params['next_page_token'] needs to change to params['page_token'] | 3 | 3 |
76,891,904 | 2023-8-13 | https://stackoverflow.com/questions/76891904/why-does-this-threadpoolexecutor-execute-futures-way-before-they-are-called | Why does this ThreadPoolExecutor execute futures way before they are called? import concurrent.futures import time def sleep_test(order_number): num_seconds = 0.5 print(f"Order {order_number} - Sleeping {num_seconds} seconds") time.sleep(num_seconds) print(f"Order {order_number} - Slept {num_seconds} seconds") if order_number == 4: raise Exception("Reached order #4") def main(): order_numbers = [i for i in range(10_000)] max_number_of_threads = 2 with concurrent.futures.ThreadPoolExecutor(max_workers=max_number_of_threads) as executor: futures = [] for order in order_numbers: futures.append(executor.submit(sleep_test, order_number=order)) for future in futures: if future.cancelled(): continue try: _ = future.result() except Exception: print("Caught Exception, stopping all future orders") executor.shutdown(wait=False, cancel_futures=True) if __name__ == "__main__": main() Here is a sample execution: $ python3 thread_pool_test.py Order 0 - Sleeping 0.5 seconds Order 1 - Sleeping 0.5 seconds Order 0 - Slept 0.5 seconds Order 1 - Slept 0.5 seconds Order 2 - Sleeping 0.5 seconds Order 3 - Sleeping 0.5 seconds Order 2 - Slept 0.5 seconds Order 4 - Sleeping 0.5 seconds Order 3 - Slept 0.5 seconds Order 5 - Sleeping 0.5 seconds Order 4 - Slept 0.5 seconds Order 6 - Sleeping 0.5 seconds Caught Exception, stopping all future orders Order 5 - Slept 0.5 seconds Order 4706 - Sleeping 0.5 seconds Order 6 - Slept 0.5 seconds Order 4706 - Slept 0.5 seconds All of a sudden Order 4706 is called seemingly out of nowhere which doesn't make sense to me. I expect the threads to stop at around Order 5 or 6 which is when the Exception is hit. Sometimes when I run the script it works as expected but other times it calls a function that is thousands of "futures" in the future. Why is this happening? Can I stop this from happening? | Short Answer: It appears that ThreadPoolExecutor.shutdown has no mechanism to prevent this, based on the CPython implementation. It is difficult to completely avoid this, but if you have a list of futures, you can at least avoid having them executed out of order by canceling them manually in reverse order as below. for f in futures[::-1]: # The key is to cancel in reverse order. f.cancel() In Detail: First, let's look at the ThreadPoolExecutor.submit implementation. The following is a simplified version (See the above link for the actual code). When you submit fn, it is wrapped in _WorkItem and put into the queue. def submit(self, fn, /, *args, **kwargs): f = _base.Future() w = _WorkItem(f, fn, args, kwargs) self._work_queue.put(w) return f The worker thread takes this out and runs it. def _worker(executor_reference, work_queue, initializer, initargs): while True: work_item = work_queue.get(block=True) if work_item is not None: work_item.run() Note that the worker thread is not locking anything during this operation. Instead, the _WorkItem checks the status of the future before executing the fn. If the future is canceled, execution will be aborted here. def run(self): if not self.future.set_running_or_notify_cancel(): return result = self.fn(*self.args, **self.kwargs) Finally, here is the shutdown implementation. def shutdown(self, wait=True, *, cancel_futures=False): if cancel_futures: while True: try: work_item = self._work_queue.get_nowait() except queue.Empty: break if work_item is not None: work_item.future.cancel() It takes all items from the queue and cancels them sequentially. Note that it locks something called self._shutdown_lock, but this lock does not affect the worker because the worker side does not lock anything. Taken together, if the thread that executed the shutdown (in this case, the main thread) releases the GIL for some reason in the middle of emptying the queue, the worker thread can retrieve the item in the middle. And since future is only cancelled when shutdown retrieves it, it will be executed if a worker thread retrieves it. | 4 | 2 |
76,891,209 | 2023-8-12 | https://stackoverflow.com/questions/76891209/how-do-i-create-a-new-dataframe-based-on-row-values-of-multiple-columns-in-pytho | I have multiple columns that contained only 0s or 1s. Apple Orange Pear 1 0 1 0 0 1 1 1 0 I would like to count and input the number of 0s (in "Wrong" column) and 1s (in "Correct" column) of each column in the new dataframe, and total them up into a table that looks like the following. Fruit Correct Wrong Apple 2 1 Orange 1 2 Pear 2 1 I tried a blend of value_counts(), groupby(), and pandas.pivot_table, but got stuck with the manipulation of the table. | Try this: df.apply(pd.Series.value_counts).rename(index={0:'Wrong', 1:'Correct'}).T Use pd.DataFrame.apply to "apply" pd.Series.value_counts to each column of the dataframe, then rename the index values using a dictionary for 0 and 1 to Wrong and Correct. Lastly, use T to transpose the dataframe. Output: Wrong Correct Apple 1 2 Orange 2 1 Pear 1 2 And, you can add .rename_axis('Fruit').reset_index() to get: Fruit Wrong Correct 0 Apple 1 2 1 Orange 2 1 2 Pear 1 2 | 3 | 2 |
76,888,669 | 2023-8-12 | https://stackoverflow.com/questions/76888669/401-unauthorized-from-https-test-pypi-org-legacy | I am using this cmd on Windows to upload my package to testpypi: twine upload -r testpypi dist/* but it's showing this error. So how do I upload to testpypi and pypi when 2FA is enabled? Uploading mypackage-0.1.0-py3-none-any.whl 100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 8.5/8.5 kB • 00:00 • ? WARNING Error during upload. Retry with the --verbose option for more details. ERROR HTTPError: 401 Unauthorized from https://test.pypi.org/legacy/ User has two factor auth enabled, an API Token or Trusted Publisher must be used to upload in place of password. | Create an API token from your PyPI account, then give username = __token__ and password = API token after the cmd twine upload -r testpypi dist/*, or if you are using Twine to upload your projects to PyPI, set up your $HOME/.pypirc file like this: [testpypi] username = __token__ password = API Token | 3 | 4 |
76,886,126 | 2023-8-11 | https://stackoverflow.com/questions/76886126/how-to-create-a-gantt-chart-in-python-with-plotly-including-tasks-of-a-duratio | I am trying to create a Gantt chart in Python. Some of the tasks that I have to include in the chart have a duration of 0 days, meaning they have to be completed on the same day. I've tried this code which I've found online that creates a basic Gantt chart with plotly: df = pd.DataFrame([ dict(Task="1", Start='2023-03-15', End='2023-03-15'), dict(Task="2", Start='2023-03-03', End='2023-03-10'), dict(Task="3", Start='2023-03-10', End='2023-03-15'), ]) print(df) fig = px.timeline(df, x_start="Start", x_end="End", y="Task") fig.update_yaxes(autorange="reversed") fig.show() It works fine for tasks that have a duration of at least 1 day (like Task 2 and 3). However, tasks that have to be completed on the same day, like Task 1 in the example above, are not displayed in the Gantt chart after plotting it. The resulting chart only contains Task 2 and 3. The space next to the label of Task 1 stays empty. Is there a way to display Task 1 (and other tasks that have to be completed on the same day) in the same Gantt chart as Task 2 and 3? The Gantt chart doesn't have to be necessarily created with Plotly. Could be also with Matplotlib. Whatever works best and is the easiest most useful option. Grateful for any help!! | The example below provides similar functionality using matplotlib. It is adapted from the similar case at https://stackoverflow.com/a/76836805/21896093 . When there's a task that has a duration of 0 days, a small duration is assigned (0.1 days) so that it shows up. You can adjust it as desired. Output: import pandas as pd from matplotlib import patches import matplotlib.pyplot as plt import numpy as np import matplotlib.dates as mdates # # Example data # #Original data df = pd.DataFrame( {'Task': ['1', '2', '3'], 'Start': ['2023-03-15', '2023-03-03', '2023-03-10'], 'End': ['2023-03-15', '2023-03-10', '2023-03-15'], } ) #Convert to datetime, as we'll do some simple arithmetic between dates for date_col in ['Start', 'End']: df[date_col] = pd.to_datetime(df[date_col], format='%Y-%m-%d') df # # Create plot # height = 0.9 f, ax = plt.subplots(figsize=(10, 6)) for idx in range(len(df)): y0 = (idx + 1) - height / 2 x0 = df.iloc[idx].Start width = df.iloc[idx].End - x0 if not width: width = pd.Timedelta(days=0.1) ax.add_patch( patches.Rectangle((x0, y0), width, height) ) ax.hlines(y0 + height / 2, xmin=df.Start.min(), xmax=x0, color='k', linestyles=':', linewidth=0.5) #DateFormatter required as we're building the plot using patches, #rather than supplying entire series ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d')) ax.xaxis.set_major_locator(mdates.DayLocator()) ax.set_xticklabels(ax.get_xticklabels(), rotation=30) ax.set_xlabel('Date') ax.set_ylabel('Task') ax.set_yticks(range(1, len(df) + 1)) ax.set_yticklabels(df.Task) plt.show() Update: Version with segmented bars, as per request in comments. import pandas as pd from matplotlib import patches import matplotlib.pyplot as plt import numpy as np import matplotlib.dates as mdates # # Example data # #Original data df = pd.DataFrame( {'Task': ['1', '2', '3'], 'Start': ['2023-03-15', '2023-03-03', '2023-03-10'], 'End': ['2023-03-15', '2023-03-10', '2023-03-15'], } ) #Convert to datetime, as we'll do some simple arithmetic between dates for date_col in ['Start', 'End']: df[date_col] = pd.to_datetime(df[date_col], format='%Y-%m-%d') df # # Create plot # height = 0.9 zero_width = pd.Timedelta(days=0.1) segmentation_width = pd.Timedelta(days=1) gap_between_days = pd.Timedelta(days=0.05) one_day = pd.Timedelta(days=1) f, ax = plt.subplots(figsize=(10, 6)) for idx in range(len(df)): y0 = (idx + 1) - height / 2 x0 = df.iloc[idx].Start width = df.iloc[idx].End - x0 if not width: width = pd.Timedelta(days=0.1) n_days = width // segmentation_width days_remainder = width % segmentation_width for day in range(n_days): day_td = pd.Timedelta(days=day) ax.add_patch( patches.Rectangle((x0 + day_td, y0), one_day - gap_between_days, height) ) n_days_td = pd.Timedelta(days=n_days) ax.add_patch(patches.Rectangle((x0 + n_days_td, y0), days_remainder, height)) ax.hlines(y0 + height / 2, xmin=df.Start.min(), xmax=x0, color='k', linestyles=':', linewidth=0.5) #DateFormatter required as we're building the plot using patches, #rather than supplying entire series ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d')) ax.xaxis.set_major_locator(mdates.DayLocator()) plt.xticks(rotation=30) ax.set_xlabel('Date') ax.set_ylabel('Task') ax.set_yticks(range(1, len(df) + 1)) ax.set_yticklabels(df.Task) plt.show() | 3 | 4 |
76,887,365 | 2023-8-12 | https://stackoverflow.com/questions/76887365/how-to-modify-a-behavior-of-pathlib-path | I want pathlib.Path to automatically output logs for some destructive commands such as path.rename(new_path). I made a subclass of pathlib.Path with logging functions, and replaced from pathlib import Path to from mylib import MyPath as Path. But it does not affect to the existing subclasses of pathlib.Path such as pathlib.WindowsPath, which is the actual implementation class of path instances. from pathlib import Path from mynicelib import MyPath p = MyPath('/path/to/file') isinstance(p, MyPath) # -> False isinstance(p, Path) # -> True type(p) # -> <class 'pathlib.WindowsPath'> | Just do some monkeypatching: from pathlib import Path Path.oldrename = Path.rename def rename(self,b): print("Inside my rename") self.oldrename(b) Path.rename = rename p = Path('./x.c') p.rename('y.c') | 2 | 3 |
76,887,165 | 2023-8-11 | https://stackoverflow.com/questions/76887165/elementwise-multiplication-of-dataframes-in-python | I have a dataframe which represents features of a linear regression model. df1 = pd.DataFrame({'yyyyww': ['2022-01','2022-02','2022-03', '2022-04','2022-05','2022-06','2022-07','2022-08','2022-09','2022-10'], 'feature1': [1000,2000,4000,3000,5000,2000,8000,2000,4000,3000], 'feature2': [9000,7000,3000,1000,2000,3000,6000,8000,1000,1000], 'feature3': [3000,1000,2000,5000,9000,7000,2000,3000,5000,9000]}) I run the model and calculate the coefficients which produces another dataframe, below. df2 = pd.DataFrame({'feature': ['feature1','feature2','feature3'], 'coefficient': [-1,2,0.5]}) I then want to produce a third dataframe where the contents are the product of the values from df1 and the corresponding coefficients from df2. Desired output below. df3 = pd.DataFrame({'yyyyww': ['2022-01','2022-02','2022-03', '2022-04','2022-05','2022-06','2022-07','2022-08','2022-09','2022-10'], 'feature1': [-1000,-2000,-4000,-3000,-5000,-2000,-8000,-2000,-4000,-3000], 'feature2': [18000,14000,6000,2000,4000,6000,12000,16000,2000,2000], 'feature3': [1500,500,1000,2500,4500,3500,1000,1500,2500,4500]}) I have tried to achieve this using mul and multiply in the following manner, however this does not produce the desired result. features = [['feature1', 'feature2', 'feature3']] results = pd.DataFrame() for cols in features: results[cols] = df1[cols] results = df1.mul(df2['coefficient'], axis =0) results | Try this using pandas intrinsic data alignment tenet: df1.set_index('yyyyww').mul(df2.set_index('feature')['coefficient']) Output: feature1 feature2 feature3 yyyyww 2022-01 -1000.0 18000.0 1500.0 2022-02 -2000.0 14000.0 500.0 2022-03 -4000.0 6000.0 1000.0 2022-04 -3000.0 2000.0 2500.0 2022-05 -5000.0 4000.0 4500.0 2022-06 -2000.0 6000.0 3500.0 2022-07 -8000.0 12000.0 1000.0 2022-08 -2000.0 16000.0 1500.0 2022-09 -4000.0 2000.0 2500.0 2022-10 -3000.0 2000.0 4500.0 | 3 | 3 |
76,871,981 | 2023-8-10 | https://stackoverflow.com/questions/76871981/trying-to-get-the-5-most-collinear-points-out-of-multiple-points-gives-wrong-res | Let's say I have a list of 15 different points with x and y coordinates. How can I find the 5 most collinear points? They don't need to be perfectly collinear but they should reliably be the most collinear points of all possible combinations. My current approach: Get all possible combinations using itertools.combinations Iterate over every combination of 5 points Calculate the slope from first point to every other point Calculate the absolute differences between the first slope and all other slopes Sum up all differences Compare the sums of all combinations and return the combination with the smallest sum This works sometimes but sometimes also returns garbage. Here is an example: The algorithm works with the points_0deg list. If I take the same points and rotate them 80 degrees (points_80deg) the red points are found to be the most collinear instead of the green points. import itertools points_0deg = [[818.5, 395.5], [688.5, 586.5], [556.5, 448.5], [819.5, 779.5], [657.5, 892.5], [558.5, 727.5], [658.5, 278.5], [453.5, 279.5], [426.5, 588.5], [877.5, 589.5], [458.5, 893.5], [301.5, 403.5], [296.5, 774.5], [241.5, 583.5], [558.5, 585.5]] points_80deg = [[498.5, 397.5], [665.5, 558.5], [505.5, 665.5], [356.5, 533.5], [781.5, 711.5], [876.5, 463.5], [958.5, 641.5], [619.5, 816.5], [700.5, 373.5], [925.5, 838.5], [414.5, 906.5], [322.5, 737.5], [580.5, 998.5], [776.5, 975.5], [640.5, 687.5]] def check_collinear(points): x1, y1 = points[0] slope = [] for point in points[1:]: x, y = point slope.append((y - y1) / (x - x1) if (x - x1) != 0 else float('inf')) diff = [] for val in slope[1:]: diff.append(abs(slope[0] - val)) return sum(diff) def find_most_collinear(points): combinations = list(itertools.combinations(points, 5)) collinear_points = None min_sum = float('inf') for comb in combinations: s = check_collinear(comb) if s <= min_sum: min_sum = s collinear_points = comb return collinear_points if __name__ == '__main__': print(find_most_collinear(points_0deg)) print(find_most_collinear(points_80deg)) | A simple algorithm is to find the two extreme points of the set (find the average M, then the point A most distant from M and then the point B most distant from A). Then as score use the sum of squared distances from the line passing through A and B. In code: # gg is the candidate group avg = ((sum(x for x, y in gg)/5), (sum(y for x, y in gg)/5)) ax, ay = max(gg, key=lambda p:(p[0]-avg[0])**2 + (p[1]-avg[1])**2) bx, by = max(gg, key=lambda p:(p[0]-ax)**2 + (p[1]-ay)**2) abx, aby = bx - ax, by - ay ab2 = abx**2 + aby**2 def dist2(p): t = ((p[0] - ax)*abx + (p[1] - ay)*aby) / ab2 xx = ax + t*abx yy = ay + t*aby return (p[0]-xx)**2 + (p[1]-yy)**2 score = sum(dist2(p) for p in gg) if best is None or score < best[0]: best = score, gg # lower wins Instead of just the line passing from extremes the correct thing would be to find the line with minimum score but that's a bit annoying to compute exactly and this very simple method may be is enough for your use cases | 3 | 2 |
76,885,853 | 2023-8-11 | https://stackoverflow.com/questions/76885853/apply-a-function-over-last-two-dimensions | How can I apply a function over last two dimensions? E.g. I generated an array below of (2,3,3) dimensions, the resulting array should have the same dimenstions where the function is apply to a[0,:,:] and a[1,:,:]. I understand I can go with a for loop, but might there be an in-built function specially for these type of operations? a = np.arange(18).reshape(2,3,3) c = [] for i in range(2): c.append(np.linalg.pinv(a[i,:,:])) result = np.array(c) Here I used np.linalg.pinv but assume a generic function f: (n,k)-> (n,k), e.g f = lambda x: x**3 +61 | Assuming a 3D array, you can directly map your function on the array, this will loop over the first dimension and apply the function on the remaining dimensions: out = np.array(list(map(np.linalg.pinv, a))) NB. this is not vectorized. Output: array([[[-5.55555556e-01, -1.66666667e-01, 2.22222222e-01], [-5.55555556e-02, 1.98795266e-16, 5.55555556e-02], [ 4.44444444e-01, 1.66666667e-01, -1.11111111e-01]], [[-1.30555556e+00, -1.66666667e-01, 9.72222222e-01], [-5.55555556e-02, 1.31569730e-15, 5.55555556e-02], [ 1.19444444e+00, 1.66666667e-01, -8.61111111e-01]]]) | 2 | 2 |
76,885,758 | 2023-8-11 | https://stackoverflow.com/questions/76885758/only-update-readme-for-a-package-on-pypi | I published a package on PyPI and then realised I should change a few details in the attached README.md. Is it possible to change the readme without uploading a new version of the whole package? And if not, what is the most correct way to update the README on PyPI? I am using poetry to manage the package, and when I change README and run poetry publish --build, there is an error, because PyPI does not allow re-uploading a file with an already used name. It seems like only changing the documentation should be a possibility, as, e.g., writing some extra help or changing a contact should not require a whole round of changing version and testing that everything still works? | Unfortunately, no it isn't, because when you go to publish, the versions are immutable. This is by design. My suggestion would be to increment a minor version z on x.y.z in order to publish the latest version with updates to the README. So something like: 0.0.9 -> 0.0.10 or 2.1.0 -> 2.1.1 | 2 | 4 |
76,885,099 | 2023-8-11 | https://stackoverflow.com/questions/76885099/should-dataclass-use-fields-for-attributes-with-only-defaults | When a python dataclass has a simple attribute that only needs a default value, it can be defined either of these ways. from dataclasses import dataclass, field @dataclass class ExampleClass: x: int = 5 @dataclass class AnotherClass: x: int = field(default=5) I don't see any advantage of one or the other in terms of functionality, and so would go with the less verbose version. Of course field offers other bells and whistles, but I don't need them yet and could easily refactor to use field later. Is there any advantage to using field for a simple default over just a type hint? | No, if all you need is a field with a default value and no other special behavior, assigning the value directly to the class variable is equivalent to a field with only a default parameter. x: int = field(default=5) x: int = 5 In fact, Python goes way out of its way to make sure the two behave equivalently. From PEP 557, If the default value of a field is specified by a call to field(), then the class attribute for this field will be replaced by the specified default value. If no default is provided, then the class attribute will be deleted. The intent is that after the dataclass decorator runs, the class attributes will all contain the default values for the fields, just as if the default value itself were specified. So whether or not you assign a field, at runtime, the value of both of the x's above on the class (as opposed to an instance) will be the number 5. For the sake of completeness, the reasons you would want to call field rather than writing simply a type annotation include: Providing a default_factory, which is a callable that runs every time an instance is constructed (i.e. x: list = [] will be the same list every time, whereas x: list = field(default_factory=list) will be a new list for each instance) Removing the field from the parameters from consideration in some or all of the dataclass-generated functionality. You can remove the field from __init__ params, from the printed __repr__, from the comparison method, and from the generated __hash__. Adding third party metadata. This doesn't affect dataclasses but can be used to share arbitrary information about a dataclass field with a third party library. If none of the above apply to you, stick to the simple syntax and don't call field. | 3 | 3 |
76,878,564 | 2023-8-10 | https://stackoverflow.com/questions/76878564/is-there-a-way-to-multithread-or-batch-rest-api-calls-in-python | I've got a very long list of keys, and I am calling a REST API with each key to GET some metadata about it. The API can only accept one key at a time, but I wondered if there was a way I could batch or multi-thread the calls from my side? | The other reply to this looks like ChatGPT so it should be ignored. I did, however, use its code as a base to write a function that does what I want. import requests from concurrent.futures import ThreadPoolExecutor API_ENDPOINT = 'https://api.example.com/metadata' def get_metadata_for_key(key): url = f"{API_ENDPOINT}/{key}" response = requests.get(url) if response.status_code == 200: return response.json() else: return None def get_save_metadata(keys, workers): results = {} batches = [keys[i : i + workers] for i in range(0, len(keys), workers)] with ThreadPoolExecutor(max_workers=workers) as executor: for batch in tqdm(batches): #tqdm shows a progress bar futures = {key: executor.submit(get_metadata_for_key, key) for key in batch} futures_clean = {k: v.result() for k, v in futures.items() if v is not None} results.update({k: xmltodict.parse(v) for k, v in futures_clean.items()}) return results | 3 | 3 |
76,882,047 | 2023-8-11 | https://stackoverflow.com/questions/76882047/drop-duplicates-in-a-dataframe-and-keep-the-one-with-a-specific-column-value | I am having a dataframe df: columnA columnB columnC columnD columnE A B 10 C C A B 10 D A B C 20 A A B A 20 D A B A 20 D C I want to drop the duplicates if there are duplicates entries for columnA, columnB, columnC in my case the duplicates are: columnA columnB columnC columnD columnE A B 10 C C A B 10 D A B A 20 D A B A 20 D C How can I keep the one of the duplicate rows, where columnE is equal to C ? So that the output for the full dataframe is: columnA columnB columnC columnD columnE A B 10 C C B C 20 A A B A 20 D C | You can use DataFrame.sort_values for prefer C values first with DataFrame.drop_duplicates and or original order add DataFrame.sort_index: out = (df.sort_values('columnE', key=lambda x: x.ne('C')) .drop_duplicates(['columnA','columnB','columnC']) .sort_index()) print (out) columnA columnB columnC columnD columnE 0 A B 10 C C 2 B C 20 A A 4 B A 20 D C Or use DataFrameGroupBy.idxmax for indices with prefer C with DataFrame.loc for select rows and Series.sort_values for original ordering: idx = df['columnE'].eq('C').groupby([df['columnA'],df['columnB'],df['columnC']]).idxmax() out = df.loc[idx.sort_values()] print (out) columnA columnB columnC columnD columnE 0 A B 10 C C 2 B C 20 A A 4 B A 20 D C | 3 | 3 |
76,879,923 | 2023-8-10 | https://stackoverflow.com/questions/76879923/why-is-name-startswitha-returning-true-on-the-name-barry | I am trying to learn how to use the .startswith() method and also the filter() function. For some reason I am not getting the result I expected. I don't know if I am misunderstanding 'startswith' or 'filter'. here is the code i used names = ['aaron','anthony','tom','henry','barry'] def start_a(names): for name in names: if name.startswith('a'): return True print(list(filter(start_a, names))) i was expecting to get ['aaron', 'anthony'] however i got ['aaron', 'anthony', 'barry'] does anyone know where i went wrong? thanks | start_a() is looping over the characters in the name, because it just receives one list element as its parameter. So it's actually checking whether the name contains a, not whether it starts with a. filter() does the looping over the list for you, you don't need another loop in the function. def start_a(name): return name.startswith('a') | 2 | 3 |
76,879,889 | 2023-8-10 | https://stackoverflow.com/questions/76879889/conda-package-not-found-how-to-install-conda-packages-on-apple-m1-m2-chips-whi | Let's say I want to install pybox2d (but this applies to other packages as well), and I can see on the Anaconda website that this package obviously exists, but it cannot be found when trying to install it on my new Macbook (one of the ones with the new Apple M1 or M2 CPUs). What should I do? conda search pybox2d -c conda-forge Loading channels: done No match found for: pybox2d. Search: *pybox2d* PackagesNotFoundError: The following packages are not available from current channels: - pybox2d Current channels: - https://conda.anaconda.org/conda-forge/osx-arm64 - https://conda.anaconda.org/conda-forge/noarch - https://repo.anaconda.com/pkgs/main/osx-arm64 - https://repo.anaconda.com/pkgs/main/noarch - https://repo.anaconda.com/pkgs/r/osx-arm64 - https://repo.anaconda.com/pkgs/r/noarch To search for alternate channels that may provide the conda package you're looking for, navigate to https://anaconda.org and use the search bar at the top of the page. Note: There are other Stack Overflow questions which relate to this but don't quite answer the question as I have asked it here, so I wanted to make it more direct. How to specify the architecture or platform for a new conda environment? (Apple Silicon) How to set up a conda osx-64 environment on ARM mac? Cannot install Python 3.7 on osx-arm64 | If you look at the linked webpage above (at the time of writing), you can see that osx-arm64 is not listed under the "Installers". However, in the output above we can see that we are only searching in osx-arm64, and not in osx-64. The explanation for this is that this package is not built for our new Apple architecture (the new M1 and M2 chips are ARM-based, not Intel-based). However, it is possible to run programs which were compiled for Apple Intel using some kind of translation called Rosetta. We can have some python environments which are built off of the x86 instruction set. To do this, just add CONDA_SUBDIR=osx-64 to your env creation. You will also want to configure that environment so that it always installs this type of package. This way, your other environments can still be osx-arm64 by default, and you can just configure an environment as osx-64 only when needed (only when you need an old package that hasn't been cross-compiled to the new architecture). $ CONDA_SUBDIR=osx-64 conda create -n my_intel_env --file environment.yml $ conda activate my_intel_env $ conda config --env --set subdir osx-64 The downside of using the old architecture, as far as I know, is increased startup costs for the program, when the program needs to be translated from one instruction set to another. I'm not sure if this happens only once or more often. See Also If you need to install Tensorflow, see this blog. If you are using PyTorch, you may also need to downgrade mkl for now. How to specify the architecture or platform for a new conda environment? (Apple Silicon) How to set up a conda osx-64 environment on ARM mac? | 3 | 7 |
76,877,041 | 2023-8-10 | https://stackoverflow.com/questions/76877041/how-does-python3-11s-strenums-mro-work-differently-for-str-and-repr | Python3.11 introduced StrEnum and IntEnum which inherit str or int respectively, and also inherit ReprEnum, which in turn inherits Enum. ReprEnum's implementation is actually empty. >>> print(inspect.getsource(ReprEnum)) class ReprEnum(Enum): """ Only changes the repr(), leaving str() and format() to the mixed-in type. """ If I create a StrEnum and check the MRO, I can see that str comes first. class Strings(StrEnum): A = "a" >>> Strings.__mro__ (<enum 'Strings'>, <enum 'StrEnum'>, <class 'str'>, <enum 'ReprEnum'>, <enum 'Enum'>, <class 'object'>) Both str and Enum define a __str__ and a __repr__ >>> str.__repr__ <slot wrapper '__repr__' of 'str' objects> >>> str.__str__ <slot wrapper '__str__' of 'str' objects> >>> Enum.__repr__ <function Enum.__repr__ at 0x7ffff69f72e0> >>> Enum.__str__ <function Enum.__str__ at 0x7ffff69f7380> How then does __repr__ get inherited from Enum and __str__ get inherited from str? | The __repr__ method comes the normal way, inherited from Enum (via StrEnum) >>> Strings.__repr__ is StrEnum.__repr__ is Enum.__repr__ True For the __str__ method, the metaclass EnumType checks for the presence of ReprEnum and "hoists up" the str and format handling of the mixed-in data type into the class namespace at class definition time here: class EnumType(type): ... def __new__(metacls, cls, bases, classdict, *, boundary=None, _simple=False, **kwds): ... # Also, special handling for ReprEnum if ReprEnum is not None and ReprEnum in bases: if member_type is object: raise TypeError( 'ReprEnum subclasses must be mixed with a data type (i.e.' ' int, str, float, etc.)' ) if '__format__' not in classdict: enum_class.__format__ = member_type.__format__ classdict['__format__'] = enum_class.__format__ if '__str__' not in classdict: method = member_type.__str__ if method is object.__str__: # if member_type does not define __str__, object.__str__ will use # its __repr__ instead, so we'll also use its __repr__ method = member_type.__repr__ enum_class.__str__ = method classdict['__str__'] = enum_class.__str__ ... Now that a Strings.__str__ method may be found directly in the class namespace, the MRO needn't be traversed. | 5 | 4 |
76,876,323 | 2023-8-10 | https://stackoverflow.com/questions/76876323/can-regex-identify-characters-interspersed-with-a-limit | I am new to using regex but I feel my pattern may be too complex. I am looking for a pattern of a minimum number of brackets with a maximum number of dots interspersed. I can't see a way for regex to count the numbers of dots in the overall pattern instead of sequentially. For example: ...((((((((.(((..((..((((.(((((((.(..(((((.(((.(((...))).))).)))))..)..))))))).))))..))..))).))))))))(((.((.(((((...((........))))))))))))............ If I want to identify a run of at least 25 (s with a maximum of 15 .s interspersed from the first ( to the last: ...((((((((.(((..((..((((.(((((((.(..(((((.(((.(((...))).))).)))))..)..))))))).))))..))..))).))))))))(((.((.(((((...((........))))))))))))............ My regex is currently searching for a a sequence with a maximum of 15 consecutive .s instead. Is this possible? If not should I be using an alternative (i.e. pyparsing) This is what I have so far: (\.{0,15}\(){25,} | I think you can use a combination of regex and string manipulation in python like this for example : import re #sample data text = "...((((((((.(((..((..((((.(((((((.(..(((((.(((.(((...))).))).)))))..)..))))))).))))..))..))).))))))))(((.((.(((((...((........))))))))))))............" #to match everything between the outer brackets pattern = r"\([^()]*\)" #find all matches of the pattern matches = re.findall(pattern, text) #iterate through the matches for match in matches: dot_count = match.count(".") if dot_count <= 15: print("Pattern matched!") break else: print("Pattern not matched.") Output: Pattern matched! | 2 | 1 |
76,872,744 | 2023-8-10 | https://stackoverflow.com/questions/76872744/connect-to-gmail-using-email-address-and-password-with-python | I am trying to connect to my Gmail account using Python. I want to connect to it using both SMTP and IMAP. I am aware that it is possible to use an app password to make this connection, but is there a way to use the actual email password instead? I have been reading this particular article, https://support.google.com/accounts/answer/6010255#zippy=%2Cif-less-secure-app-access-is-on-for-your-account%2Cif-less-secure-app-access-is-off-for-your-account The warning given at the top tells me that it is not possible to do so, even if 'Less secure app access' is allowed. Do I have that right? Given below is the code I am using to send and search emails. Like I said though, this only works with an app password, import smtplib import imaplib import email from email.mime.multipart import MIMEMultipart from email.mime.text import MIMEText import pandas as pd class EmailClient: def __init__(self, email, password, smtp_server='smtp.gmail.com', smtp_port=587, imap_server="imap.gmail.com"): self.email = email self.password = password self.smtp_server = smtplib.SMTP(smtp_server, smtp_port) self.imap_server = imaplib.IMAP4_SSL(imap_server) def send_email(self, to_addr, subject, body): msg = MIMEMultipart() msg['From'] = self.email msg['To'] = to_addr msg['Subject'] = subject msg.attach(MIMEText(body, 'plain')) self.smtp_server.starttls() self.smtp_server.login(self.email, self.password) self.smtp_server.send_message(msg) self.smtp_server.quit() def search_email(self, mailbox="INBOX", subject=None, to=None, from_=None, since_date=None, until_date=None, since_emailid=None): self.imap_server.login(self.email, self.password) self.imap_server.select(mailbox) query_parts = [] if subject is not None: query_parts.append(f'(SUBJECT "{subject}")') if to is not None: query_parts.append(f'(TO "{to}")') if from_ is not None: query_parts.append(f'(FROM "{from_}")') if since_date is not None: since_date_str = since_date.strftime("%d-%b-%Y") query_parts.append(f'(SINCE "{since_date_str}")') if until_date is not None: until_date_str = until_date.strftime("%d-%b-%Y") query_parts.append(f'(BEFORE "{until_date_str}")') if since_emailid is not None: query_parts.append(f'(UID {since_emailid}:*)') query = ' '.join(query_parts) ret = [] resp, items = self.imap_server.uid('search', None, query) items = items[0].split() for emailid in items[::-1]: resp, data = self.imap_server.uid('fetch', emailid, "(BODY[HEADER.FIELDS (SUBJECT TO FROM DATE)])") try: raw_email = data[0][1].decode("utf-8") except UnicodeDecodeError: ValueError(f"Could not decode email with id {emailid}") email_message = email.message_from_string(raw_email) email_line = {} email_line['id'] = emailid email_line["to"] = email_message['To'] email_line["from"] = email_message['From'] email_line["subject"] = str(email_message['Subject']) email_line["created_at"] = email_message['Date'] resp, email_data = self.imap_server.uid('fetch', emailid, '(BODY[TEXT])') email_line["body"] = email_data[0][1].decode('utf-8') ret.append(email_line) self.imap_server.logout() return pd.DataFrame(ret) | I am aware that it is possible to use an app password to make this connection, but is there a way to use the actual email password instead? No there is not, you need to do one of two things: enable 2FA on the account and create an app password, or use Xoauth2 and request authorization of the user to access their account. Your code looks fine to me, just create an app password. Xoauth sample: from __future__ import print_function import base64 import os.path import smtplib from email.mime.text import MIMEText from google.auth.transport.requests import Request from google.oauth2.credentials import Credentials from google_auth_oauthlib.flow import InstalledAppFlow # If modifying these scopes, delete the file token.json. SCOPES = ['https://mail.google.com/'] # user token storage USER_TOKENS = 'token.json' # application credentials CREDENTIALS = 'C:\YouTube\dev\credentials.json' def getToken() -> str: creds = None # The file token.json stores the user's access and refresh tokens, and is # created automatically when the authorization flow completes for the first # time. if os.path.exists(USER_TOKENS): creds = Credentials.from_authorized_user_file(USER_TOKENS, SCOPES) creds.refresh(Request()) # If there are no (valid) credentials available, let the user log in. if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file(CREDENTIALS, SCOPES) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open(USER_TOKENS, 'w') as token: token.write(creds.to_json()) return creds.token def generate_oauth2_string(username, access_token) -> str: auth_string = 'user=' + username + '\1auth=Bearer ' + access_token + '\1\1' return base64.b64encode(auth_string.encode('ascii')).decode('ascii') def send_email(host, port, subject, msg, sender, recipients): access_token = getToken() auth_string = generate_oauth2_string(sender, access_token) msg = MIMEText(msg) msg['Subject'] = subject msg['From'] = sender msg['To'] = ', '.join(recipients) server = smtplib.SMTP(host, port) server.starttls() server.docmd('AUTH', 'XOAUTH2 ' + auth_string) server.sendmail(sender, recipients, msg.as_string()) server.quit() def main(): host = "smtp.gmail.com" port = 587 user = "[email protected]" recipient = "[email protected]" subject = "Test email Oauth2" msg = "Hello world" sender = user recipients = [recipient] send_email(host, port, subject, msg, sender, recipients) if __name__ == '__main__': main() | 3 | 2 |
76,871,473 | 2023-8-9 | https://stackoverflow.com/questions/76871473/transposing-each-row-data-to-column-for-each-id-in-dataframe | My Dataframe looks like this. id age Gender snapshot_1 performance_13 snapshot_5 performance_17 snapshot_7 performance_19 1 34 M 80 30 40 30 2 42 F 65 55 60 15 25 45 ALL Id's data need to be grouped for snapshot/performance with ID repetition like below. For Snapshot and its corresponding performance window could be fetched from the number after underscore from 1st dataframe. ID Age Gender Snapshot_Window Performance_window Snapshot_Value Performance_Value 1 34 M 1 13 2 42 F 1 13 65 55 1 34 M 5 17 80 30 2 42 F 5 17 60 15 1 34 M 7 19 40 30 2 42 F 7 19 25 45 | Using a MultiIndex to reshape, to handle an arbitrary number of categories: tmp = df.set_index(['id', 'age', 'Gender']) idx = (tmp.columns.to_series().str.split('_', n=1, expand=True) .assign(n=lambda x: x.groupby(0).cumcount()) ) out = (tmp .set_axis(pd.MultiIndex.from_frame(idx[[0, 'n']]), axis=1) .stack(dropna=False).add_suffix('_value') .reset_index(tmp.index.names) .join(idx.pivot(index='n', columns=0, values=1) .add_suffix('_window') ) .rename_axis(index=None, columns=None) ) Output: id age Gender performance_value snapshot_value performance_window snapshot_window 0 1 34 M NaN NaN 13 1 0 2 42 F 55.0 65.0 13 1 1 1 34 M 30.0 80.0 17 5 1 2 42 F 15.0 60.0 17 5 2 1 34 M 30.0 40.0 19 7 2 2 42 F 45.0 25.0 19 7 | 3 | 2 |
76,867,554 | 2023-8-9 | https://stackoverflow.com/questions/76867554/fastapi-how-to-access-bearer-token | I'm using FastAPI to create a simple api for automating my emails. I want to protect certain routes and I'm using this class: import time import jwt from fastapi import HTTPException, Request from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer #if you need to try it out just swap the following from exchange_api.auth.jwt_handler import JWT_ALGORITHM, JWT_SECRET class JWTBearer(HTTPBearer): def __init__(self, auto_error: bool = True): super(JWTBearer, self).__init__(auto_error=auto_error) async def __call__(self, request: Request): credentials: HTTPAuthorizationCredentials = await super(JWTBearer, self).__call__(request) if credentials: if not credentials.scheme == "Bearer": raise HTTPException(status_code=403, detail="Invalid authentication scheme.") if not self.verify_jwt(credentials.credentials): raise HTTPException(status_code=403, detail="Invalid token or expired token.") return credentials.credentials else: raise HTTPException(status_code=403, detail="Invalid authorization code.") def verify_jwt(self, jwtoken: str) -> bool: isTokenValid: bool = False try: payload = decodeJWT(jwtoken) except: payload = None if payload: isTokenValid = True return isTokenValid def decodeJWT(token: str) -> dict: try: decoded_token = jwt.decode(token, JWT_SECRET, algorithms=[JWT_ALGORITHM]) return decoded_token if decoded_token["expiration"] >= time.time() else None except: return {} Then I wanted to protect certain routes: from fastapi import APIRouter, Depends, HTTPException, Request, Header from exchange_api.auth.jwt_bearer import JWTBearer @auth_router.get("/protected", dependencies=[Depends(JWTBearer())], tags=["auth test"]) def get_user_data(): #my issue: I'd like to access the token and its payload return {} I have no issues with my code so far. But I'd like to periodically refresh the token and I need its payload anyway to do things inside my functions. How can I do that? I don't want my users to get kicked out in the middle of their session because their token expired. It works fine without depends and using the token as the body of the various routes but I don't think that's the right way to do this. fastapi-jwt-auth is too old and it generates dependencies conflicts with my already installed libraries. EDIT: What I mean is that I want to access the bearer token that my users submitted to authenticate to do various things and also because I plan to substitute it with a new one every time the users call a protected route but WITHOUT the need to authenticate again. EDIT: I managed to get the token: @auth_router.get("/protected", dependencies=[Depends(JWTBearer())], tags=["auth test"]) def get_user_data(request: Request): token = request.headers["authorization"] return {token} | I was going to suggest returning the token within your auth function, but then I saw you figured it out. I'd also recommend using FastAPI-Another-JWT-Auth. It speeds things up, declutters everything, and has most of the functions you might need built in. It's a fork of the old main one which is deprecated and no longer maintained. async def some_function(auth: AuthJWT = Depends()): auth.access_token_required() If you need the token subject, you just do: user_id = auth.get_jwt_subject() If you want to set a new jwt each time, you just do: new_access_token = auth.create_access_token() auth.set_access_token(new_access_token) | 4 | 1 |
76,869,803 | 2023-8-9 | https://stackoverflow.com/questions/76869803/get-longest-distance-from-series-containing-comma-separated-strings-with-multipl | I'll preface by saying I have a solution, but it feels overly complicated and inefficient and I'm looking to improve on it. I have a series inside a dataframe that looks like: >>> df = pd.DataFrame({'Distance':[None,None,'13',None,'38.12','5NW','1N,3SW,8.3E',None,'2,7']}) >>> df Distance 0 None 1 None 2 13 3 None 4 38.12 5 5NW 6 1N,3,8.3E 7 None 8 2,7 The non-None values are always strings, and each string is a comma-separated list of distances, each of which might have a fractional part and might not be followed by a cardinal direction. Cardinal directions are always represented as one of the following: N E S W NE NW SE SW My current goal is to split this column into three: A column for the greatest distance, represented as a float (I want to do math on it later) A column for the cardinal direction associated with the greatest value, if present A column for any remaining distances/directions, represented as a list of strings So, for the input above, the desired output would be something like: Distance Direction Additional_Values 0 NaN None None 1 NaN None None 2 13.00 None None 3 NaN None None 4 38.12 None None 5 5.00 NW None 6 8.30 E [1N, 3SW] 7 NaN None None 8 7.00 None [2] So far I'm doing it like this: >>> helper = df['Distance'].str.extractall(r'(?P<distance>[\d\.]+)(?P<direction>[NESW]*)') #Get each distance + possible direction and explode to one row per value / value pair >>> helper['distance'] = helper['distance'].astype(float) #Convert distances to float >>> helper.index.set_names('orig_id',level=0,inplace=True) #Give the level 0 index a name for convenience >>> helper distance direction orig_id match 2 0 13.00 NaN 4 0 38.12 NaN 5 0 5.00 NW 6 0 1.00 N 1 8.30 E 2 3.00 SW 8 0 2.00 NaN 1 7.00 NaN >>> helperMax = helper.sort_values('distance').reset_index().drop_duplicates('orig_id',keep='last') #Get maximum distance for each index from the original df >>> helperMax = helperMax.sort_values('orig_id').set_index(['orig_id','match']) #Convert columns back to MultiIndex >>> helperMax distance direction orig_id match 2 0 13.00 NaN 4 0 38.12 NaN 5 0 5.00 NW 6 1 8.30 E 8 1 7.00 NaN >>> helperRemaining = helper.drop(index=helperMax.index) #Get remaining value pairs by dropping indices of max values >>> helperRemaining['concat'] = helperRemaining['distance'].astype(str) + helperRemaining['direction'].fillna('') #Concat value pairs back into strings >>> helperRemaining = helperRemaining.groupby('orig_id')['concat'].apply(list) #Group strings into lists >>> helperRemaining = helperRemaining.reindex(df.index) #Make index match the starting df >>> helperRemaining 0 NaN 1 NaN 2 NaN 3 NaN 4 NaN 5 NaN 6 [1.0N, 3.0SW] 7 NaN 8 [2.0] >>> helperMax = helperMax.droplevel('match').reindex(df.index) #Make helperMax index match the starting df >>> df['Distance'] = helperMax['distance'] #Replace original df column with max values >>> # Insert Direction column (df actually has many columns and order matters) >>> df.insert( >>> df.columns.get_loc('Distance') + 1, >>> 'Direction', >>> helperMax['direction'] >>> ) >>> # Insert Additional_Values column >>> df.insert( >>> df.columns.get_loc('Direction') + 1, >>> 'Additional_Values', >>> helperRemaining >>> ) >>> df Distance Direction Additional_Values 0 NaN NaN NaN 1 NaN NaN NaN 2 13.00 NaN NaN 3 NaN NaN NaN 4 38.12 NaN NaN 5 5.00 NW NaN 6 8.30 E [1.0N, 3.0SW] 7 NaN NaN NaN 8 7.00 NaN [2.0] This gets an acceptable result, but it's a lot of steps to get there. What can I do to condense this into something less complicated, and hopefully speed it up along the way? Sidenote: I know the first three lines of my process immediately produce output in First Normal Form, and thus they'd better fit with data management best practices. Unfortunately, for compatibility reasons I can only have one table. Additionally: A column must be a single data type, and can contain lists of varying length as long as all elements across all lists are of the same data type. If there's a better way to format my table that fits those constraints and allows me to do math on the max distance, I'm all ears. Edit to clarify: When the max value occurs more than once, I want to get exactly one of them and send the rest to Additional_Values. Direction doesn't matter for this selection (aside from keeping any directions paired with their associated distances); I'm okay with e.g. sorting purely by distance and getting whichever one happens to be last. | Try: import re df = pd.DataFrame( {"Distance": [None, None, "13", None, "38.12", "5NW", "1N,3SW,8.3E", None, "2,7"]} ) pat = re.compile(r"^\d+\.?\d*") mask = df["Distance"].isna() x = ( df.loc[~mask, "Distance"] .apply( lambda x: sorted( x.split(","), key=lambda k: float(pat.search(k).group(0)), reverse=True, ), ) .to_frame() ) x["Distance2"] = x["Distance"].str[0].str.extract(r"^(\d+\.?\d*)") x["Direction"] = x["Distance"].str[0].str.extract(r"([A-Z]*)$").replace("", np.nan) x["Additional_Values"] = x["Distance"].str[1:].apply(lambda x: x if x else np.nan) x = x.reindex(df.index) print(x) Prints: Distance Distance2 Direction Additional_Values 0 NaN NaN NaN NaN 1 NaN NaN NaN NaN 2 [13] 13 NaN NaN 3 NaN NaN NaN NaN 4 [38.12] 38.12 NaN NaN 5 [5NW] 5 NW NaN 6 [8.3E, 3SW, 1N] 8.3 E [3SW, 1N] 7 NaN NaN NaN NaN 8 [7, 2] 7 NaN [2] | 3 | 3 |
76,864,397 | 2023-8-9 | https://stackoverflow.com/questions/76864397/how-to-replicate-a-conda-environment | I have successfully installed some python code on a Win10 machine using Anaconda and conda environments and would like to install exactly the same environment on another computer, also Win10. This page indicates that you can save a file that contains the environment info on computer 1, then recall it on computer 2, using: computer 1: conda list --explicit > spec-file.txt copy the file from the working directory of computer 1 to the working directory of computer 2 computer 2: conda create --name myenv --file spec-file.txt Step 1 works fine for me but step 3 fails with a ResolvePackageNotFound error, listing basically all the packages that are in the text file. Am I missing something? Is there a way to semi-automatically install the packages from that text file instead? Edit: summary from the answers and comments (thank you!): if conda install was used to install packages in computer 1, then what is written above is the best way to do an exact replication. if pip install was used instead for packages installation (as was the case for me), the chosen solution below is the most appropriate one (conda env export and conda env create) | try using export -- Computer 1 - conda env export > spec-file.yml Computer 2 - conda env create -f spec-file.yml | 4 | 1 |
76,866,138 | 2023-8-9 | https://stackoverflow.com/questions/76866138/how-to-effectively-use-pandas-to-change-values-based-unique-identifier-and-one-c | I have the following data frame: identifier loan_identifier cashflow_date cashflow_type amount 0 1 a111 15/07/2023 funding -195.71 1 2 a111 01/07/2023 interest_repayment 3.11 2 3 a111 15/07/2023 interest_repayment 0.04 3 4 a111 20/07/2023 interest_repayment 0.04 4 5 a111 11/06/2023 principal_repayment 195.33 5 6 b222 10/07/2023 funding -3915.45 6 13 b222 10/07/2023 interest_repayment 0.73 7 14 b222 10/07/2023 interest_repayment 0.73 8 15 b222 13/06/2023 principal_repayment 3906.50 cashflow_date of funding cashflow_type should be the earliest for each loan_identifier. Each row with cashflow_type != funding whose date is earlier than funding row of that loan_identifer should be set to funding cashflow_date. So the above dataframe should look like this: identifier loan_identifier cashflow_date cashflow_type amount 0 1 a111 15/07/2023 funding -195.71 1 2 a111 15/07/2023 interest_repayment 3.11 2 3 a111 15/07/2023 interest_repayment 0.04 3 4 a111 20/07/2023 interest_repayment 0.04 4 5 a111 15/07/2023 principal_repayment 195.33 5 6 b222 10/07/2023 funding -3915.45 6 13 b222 10/07/2023 interest_repayment 0.73 7 14 b222 10/07/2023 interest_repayment 0.73 8 15 b222 10/07/2023 principal_repayment 3906.50 Here's the data as a dictionary: data = { 'identifier': {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 13, 7: 14, 8: 15}, 'loan_identifier': {0: 'a111', 1: 'a111', 2: 'a111', 3: 'a111', 4: 'a111', 5: 'b222', 6: 'b222', 7: 'b222', 8: 'b222'}, 'cashflow_date': {0: '15/07/2023', 1: '01/07/2023', 2: '15/07/2023', 3: '20/07/2023', 4: '11/06/2023', 5: '10/07/2023', 6: '10/07/2023', 7: '10/07/2023', 8: '13/06/2023'}, 'cashflow_type': {0: 'funding', 1: 'interest_repayment', 2: 'interest_repayment', 3: 'interest_repayment', 4: 'principal_repayment', 5: 'funding', 6: 'interest_repayment', 7: 'interest_repayment', 8: 'principal_repayment'}, 'amount': {0: -195.71, 1: 3.11, 2: 0.04, 3: 0.04, 4: 195.33, 5: -3915.45, 6: 0.73, 7: 0.73, 8: 3906.5} } How can I do this transformation using full advantage of pandas? | You can use boolean indexing: # ensure datetime df['cashflow_date'] = pd.to_datetime(df['cashflow_date'], dayfirst=True) # get rows with "funding" # NB. if more than one per ID, you need to drop duplicates m1 = df['cashflow_type'].eq('funding') # get reference date per ID ref = df['loan_identifier'].map(df.loc[m1].set_index('loan_identifier') ['cashflow_date']) # change if lower than reference m2 = df['cashflow_date'].lt(ref) df.loc[m2, 'cashflow_date'] = ref[m2] Another approach with a custom groupby.apply and clip: df['cashflow_date'] = pd.to_datetime(df['cashflow_date'], dayfirst=True) df['cashflow_date'] = (df.groupby('loan_identifier', group_keys=False) .apply(lambda g: g['cashflow_date'] .clip(lower=g.loc[g['cashflow_type'].eq('funding'), 'cashflow_date'].min()) ) ) Output: identifier loan_identifier cashflow_date cashflow_type amount 0 1 a111 2023-07-15 funding -195.71 1 2 a111 2023-07-15 interest_repayment 3.11 2 3 a111 2023-07-15 interest_repayment 0.04 3 4 a111 2023-07-20 interest_repayment 0.04 4 5 a111 2023-07-15 principal_repayment 195.33 5 6 b222 2023-07-10 funding -3915.45 6 13 b222 2023-07-10 interest_repayment 0.73 7 14 b222 2023-07-10 interest_repayment 0.73 8 15 b222 2023-07-10 principal_repayment 3906.50 | 2 | 4 |
76,817,818 | 2023-8-2 | https://stackoverflow.com/questions/76817818/how-can-i-perform-operations-between-a-list-and-scalar-column-in-polars | In python polars, I was wondering if it will be possible to use .eval() to perform an operation between an element and a column. For example, given the following dataframe: import polars as pl df = pl.DataFrame({"list": [[2, 2, 2], [3, 3, 3]], "scalar": [1, 2]}) Is it possible to subtract each element of the list column by the value of scalar column? i.e. from this shape: (2, 2) βββββββββββββ¬βββββββββ β list β scalar β β --- β --- β β list[i64] β i64 β βββββββββββββͺβββββββββ‘ β [2, 2, 2] β 1 β β [3, 3, 3] β 2 β βββββββββββββ΄βββββββββ to this shape: (2, 3) βββββββββββββ¬βββββββββ¬ββββββββββββ β list β scalar β diff β β --- β --- β --- β β list[i64] β i64 β list[i64] β βββββββββββββͺβββββββββͺββββββββββββ‘ β [2, 2, 2] β 1 β [1, 1, 1] β β [3, 3, 3] β 2 β [1, 1, 1] β βββββββββββββ΄βββββββββ΄ββββββββββββ | I think that native functionality for this is on the roadmap (see this github issue https://github.com/pola-rs/polars/issues/8006) but you can do this as follows: df.with_row_index().pipe( lambda df: df.join( df.explode("list") .with_columns(sub=pl.col("list") - pl.col("scalar")) .group_by("index") .agg(pl.col("sub")), on="index", ) ) Basically, I add an index column to have a unique ID for each row. Then I pipe so I can use this index column in further operations. I do a join to add the arithmetic column. In the join I explode the list column to get it as rows, do the arithmetic then do a group_by to gather things back into a list for each row and join this new column back to the df. I'm sure there are other ways to do it but this should get you going | 5 | 3 |
76,835,610 | 2023-8-4 | https://stackoverflow.com/questions/76835610/tiktok-oauth-api-authorization-code-request-always-expired | I'm trying to use to oAuth of the TikTok API. From what I got from their website, you first need to send the user to a particular link, with your client_key and redirect_uri, then the user need to log in. After he's logged in, there will be a code in the return URL that you can use to get the access_token of the user. Problem is, I always got the error: {'error': 'invalid_grant', 'error_description': 'Authorization code is expired.', 'log_id': 'XXXXXXXX'} Here is the link that explains everything I did : To allow the user to connect and get the code : https://developers.tiktok.com/doc/login-kit-web?enter_method=left_navigation To get user access token : https://developers.tiktok.com/doc/oauth-user-access-token-management/ I already have an authorized application on their developer website with a client key, secret, and the Login kit that allows the user to log in. Here is the Python code I did to test this : import requests import urllib import secrets import time # Constants CLIENT_KEY = "xxxxx" CLIENT_SECRET = "yyyyy" REDIRECT_URI = "https://oxyfoo.com/pierre/tiktok-automation/" AUTHORIZE_URL = 'https://www.tiktok.com/v2/auth/authorize/' TOKEN_URL = 'https://open.tiktokapis.com/v2/oauth/token/' # Function to generate authorization URL def generate_auth_url(): csrf_state = secrets.token_hex(16) params = { 'client_key': CLIENT_KEY, 'scope': 'user.info.basic', 'response_type': 'code', 'redirect_uri': REDIRECT_URI, 'state': csrf_state, } url = AUTHORIZE_URL + '?' + urllib.parse.urlencode(params) return url, csrf_state # Function to get the access token def get_access_token(authorization_code): data = { 'client_key': CLIENT_KEY, 'client_secret': CLIENT_SECRET, 'code': authorization_code, 'grant_type': 'authorization_code', 'redirect_uri': REDIRECT_URI } headers = { 'Content-Type': 'application/x-www-form-urlencoded' } response = requests.post(TOKEN_URL, headers=headers, data=urllib.parse.urlencode(data)) if response.status_code == 200: return response.json() else: print(f"Error: Received status code {response.status_code}") print(f"Response content: {response.content.decode()}") return None # Testing def manual_test(): # Generate authorization URL print("Generating authorization URL...") url, state = generate_auth_url() print("URL:", url) # I go there and log in print("State:", state) # Prompt user for the redirect URL input_ = input("Paste the URL you were redirected to: ") # I put the url of the page that start with my redirect_uri with ?code, scopes, etc here # Extract authorization code from redirect URL code = input_.split("code=")[1].split("&")[0] print("Code:", code) # Fetch access token without delay print("Fetching access token...") token_info = get_access_token(code) print(token_info) # Here, I always have the error invalid grant authorization code is expired. if __name__ == "__main__": manual_test() I tried to really follow the documentation and I thought I did but I can't understand why I do have this error. I'm not familiar with using oAuth so maybe it's a basic error but I just can't seem to solve it. | As Michal pointed out, you need to decode the code you get before using it. 
Here is how you can do it (Edit of you manual_test() function) : def manual_test(): # Generate authorization URL print("Generating authorization URL...") url, state = generate_auth_url() print("URL:", url) print("State:", state) # Prompt user for the redirect URL input_ = input("Paste the URL you were redirected to: ") # Extract authorization code from redirect URL code = input_.split("code=")[1].split("&")[0] # Decode the code (THAT'S THE EDITING PART) decoded_code = urllib.parse.unquote(code) print("Code:", code) print("Decoded Code:", decoded_code) # Fetch access token without delay print("Fetching access token...") token_info = get_access_token(decoded_code) print(token_info) | 3 | 2 |
76,854,735 | 2023-8-7 | https://stackoverflow.com/questions/76854735/how-to-increase-row-output-limit-in-duckdb-in-python | I'm working with DuckDB in Python (in a Jupyter Notebook). How can I force DuckDB to print all rows in the output rather than truncating rows? I've already increased output limits in the Jupyter Notebook. This would be the equivalent of setting .maxrows in the CLI, but I can't find how to do this in Python. | You can use show: duckdb.sql("select * from range(100)").show(max_rows=100) | 3 | 5 |
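A related option to the accepted answer above: if the goal is to inspect every row inside a notebook rather than just widen DuckDB's console preview, the relation can be materialised as a pandas DataFrame with .df() and displayed through pandas. A minimal sketch, assuming pandas is available in the notebook environment:

import duckdb
import pandas as pd

# Let pandas render every row instead of truncating the display
pd.set_option("display.max_rows", None)

# .df() materialises the DuckDB relation as a pandas DataFrame
df = duckdb.sql("select * from range(100)").df()
print(df)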
76,861,309 | 2023-8-8 | https://stackoverflow.com/questions/76861309/find-number-of-months-between-today-and-a-dataframe-date-field-in-python-polars | I'd like to find the number of months between today and a date field in a Python Polars DataFrame. How do you do that? (datetime.today() - pl.col('mydate')).dt.days() returns the difference only in days. | I figured out a way to do it using the pl.date_ranges() function. This mimics the Excel DATEDIF formula. Essentially, you create a list of the unique months between the start date and the end date, then take the length of that list to get the number of months. df.with_columns(diffInMonths=(pl.date_ranges(pl.col('Start Date'),pl.col('End Date'),'1mo',closed='right')).list.len()) This was the smoothest way for me to calculate the difference between two dates in months. Be aware: pl.date_range() and pl.date_ranges() do different things, so be sure to use the plural version. | 2 | 2 |
76,855,616 | 2023-8-7 | https://stackoverflow.com/questions/76855616/how-can-i-get-hover-info-in-vs-code-for-google-sphinx-style-python-class-attribu | I'm trying to document instance variables in my Python classes so they show up in VS Code when I hover over them. I've found that this works: class TsCity: def __init__(self) -> None: self.name: str = "" """The city name.""" But this is pretty ugly. I would ideally like to use the Google-style docstring instead: self.name: str = "" #: Doc comment *inline* with attribute But this doesn't show up in VS Code properly. Is there a way to make this type of docstring work in VS Code? | This (#: lorem ipsum) is the Sphinx/Google style of attribute documentation (as opposed to the pep-0224 style (""" lorem ipsum """)) I asked the maintainers of the Python extension about whether there's an existing feature request issue ticket asking for Sphinx-style attribute documentation at https://github.com/microsoft/pylance-release/issues/1576#issuecomment-1668718385. One of them (Rich Chiodo) replied: I don't believe so. But if it's not part of standard doc strings, it would probably take a lot of up votes for us to add it. I take that to mean that- no, there is no way to get hover info for this style of attribute doc comments at the time of this writing- at least not with Microsoft's Python extension (the most popular Python extension at the time of this writing). Perhaps there's an extension that adds support for this, but I don't know of it. Consider making a feature-request issue ticket to the Microsoft Python repo for this feature. | 3 | 3 |
76,849,092 | 2023-8-7 | https://stackoverflow.com/questions/76849092/pyscript-always-download-pyodide-everytime-html-page-is-refreshed-leading-to-slo | I'm trying to run PyScript in a webpage, where I intend it to run some Python code and write the output to a div element. I've succeeded in doing this. However, I noticed it shows the "Downloading Pyodide Python Startup" splash screen every time the page is refreshed, which adds a significant wait. Is there a way to cache the Pyodide installation and make loading significantly faster? I'm deploying this webpage to GitHub Pages here https://reza-nugraha32.github.io/geophysics-wiki/ . I'm aware that hiding the splash screen entirely isn't a good idea. Is there also a way to show this splash screen inside a div container, or to import PyScript only when a "Run" button is clicked? For instance, Educative.io here made it so it displays "Loading" only when the "Run" button is clicked. I've tried removing the defer attribute from the PyScript import script, but that gave me errors. | Sorry, Reza, but that's just the way it works. The whole environment has to be downloaded before any code runs. PyScript Next (2023.11.1) may improve your situation, especially if you can use MicroPython. Check out Jeff Glass's blog - https://jeff.glass/post/whats-new-pyscript-2023-11-1/. | 2 | 1 |
76,837,908 | 2023-8-4 | https://stackoverflow.com/questions/76837908/azure-function-v2-python-deployed-functions-are-not-showing | Locally the functions debug just fine, if I deploy via vscode to my azure function I get No HTTP Triggers found and the devops pipeline does not deploy triggers either. I have "AzureWebJobsFeatureFlags": "EnableWorkerIndexing" set locally and as a function app setting. Code is appropriately decorated @app.route(route="functionname", auth_level=func.AuthLevel.FUNCTION) def functioname(req: func.HttpRequest) -> func.HttpResponse: Deployments succeed both ways but no functions show Azure Pipeline shows correct files: Azure function app files show function_app.py at the root folder test function app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS) @app.function_name("personas") @app.route(route="character-managment/personas") def personas(req: func.HttpRequest) -> func.HttpResponse: logging.info('Python HTTP trigger function processed a request.') return func.HttpResponse("ok", status_code=200) Folder structure Works locally | I tried to reproduce the same in my environment. I could see the deployed function in Azure function app. Created a test default Python v2 function and deployed to Azure function app. Local: Make sure you added the setting "AzureWebJobsFeatureFlags": "EnableWorkerIndexing" in local.settings.json: { "IsEncrypted": false, "Values": { "FUNCTIONS_WORKER_RUNTIME": "python", "AzureWebJobsFeatureFlags": "EnableWorkerIndexing", "AzureWebJobsStorage": "<storage_connection_string>" } } Create and activate a new virtual environment using the commands: py -m venv .venv .\.venv\Scripts\activate Include all the required packages in requirements.txt file. Install all the packages by running the command pip install -r requirements.txt in the terminal. Created a Python function app with Consumption plan and version 3.10. Deployed the function with the command func azure functionapp publish <function_app_name>: Portal: I could also deploy the function with custom packages to Azure Function App. requirements.txt: azure-functions numpy pandas Code(function_app.py): import azure.functions as func import logging import numpy as np import pandas as pd app = func.FunctionApp() @app.route(route="HttpTrigger", auth_level=func.AuthLevel.ANONYMOUS) def HttpTrigger(req: func.HttpRequest) -> func.HttpResponse: logging.info('Python HTTP trigger function processed a request.') arr = np.array([1, 2, 3]) df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}) name = req.params.get('name') if not name: try: req_body = req.get_json() except ValueError: pass else: name = req_body.get('name') if name: return func.HttpResponse(f"Hello, {name}. This HTTP triggered function executed successfully with {arr} and {df}.") else: return func.HttpResponse( "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.", status_code=200 ) Local: Portal: References: Create a Python function from the command line You can also refer GitHub ticket raised on the same issue. | 6 | 3 |
76,825,089 | 2023-8-3 | https://stackoverflow.com/questions/76825089/loss-increasing-to-extremely-high-numbers-during-training | I'm trying to fit my Tensorflow model on my Macbook Pro (M1). This model works completely fine on my other system, running Ubuntu on WSL2 with the same python version, where the loss steadily decreases to around 0.05, but somehow when I run it on my Mac, the loss numbers increase to ridiculously large numbers, at one point reaching 200 trillion. This is not specific to this model; I tried it on another model as well with the same result. Here's the code for my model: model = tf.keras.models.Sequential([tf.keras.layers.Conv2D(16,(6,6),activation = 'relu', input_shape=(212,212,3)), tf.keras.layers.MaxPool2D(2,2), tf.keras.layers.Conv2D(32,(5,5),activation = 'relu'), tf.keras.layers.MaxPool2D(2,2), tf.keras.layers.Dropout(0.1), tf.keras.layers.Conv2D(64,(3,3),activation = 'relu'), tf.keras.layers.MaxPool2D(2,2), tf.keras.layers.Conv2D(128,(3,3),activation = 'relu'), tf.keras.layers.MaxPool2D(2,2), tf.keras.layers.Dropout(0.1), tf.keras.layers.Flatten(), tf.keras.layers.Dense(512, activation='relu'), tf.keras.layers.Dense(256, activation='relu'), tf.keras.layers.Dense(2, activation='softmax') ]) summary = model.summary() tf.keras.utils.plot_model(model, to_file="model_plot3.png", show_shapes=True, show_layer_names=True) checkpoint = ModelCheckpoint("best_model.h5", monitor = "val_loss", verbose = 0, save_best_only = True, mode = "auto") model.compile(loss="categorical_crossentropy", optimizer=Adam(learning_rate=0.001), metrics=["accuracy"]) hist = model.fit(train_generator, epochs = 30, validation_data=valid_generator, callbacks = [checkpoint]) Here's the output: WARNING:absl:At this time, the v2.11+ optimizer `tf.keras.optimizers.Adam` runs slowly on M1/M2 Macs, please use the legacy Keras optimizer instead, located at `tf.keras.optimizers.legacy.Adam`. WARNING:absl:There is a known slowdown when using v2.11+ Keras optimizers on M1/M2 Macs. Falling back to the legacy Keras optimizer, i.e., `tf.keras.optimizers.legacy.Adam`. Epoch 1/30 2023-08-02 15:01:30.003906: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:114] Plugin optimizer for device_type GPU is enabled. 400/400 [==============================] - ETA: 0s - loss: 0.4036 - accuracy: 0.81422023-08-02 15:02:18.532815: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:114] Plugin optimizer for device_type GPU is enabled. /Users/williamli/miniconda3/envs/kg2/lib/python3.11/site-packages/keras/src/engine/training.py:3000: UserWarning: You are saving your model as an HDF5 file via `model.save()`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')`. 
saving_api.save_model( 400/400 [==============================] - 65s 161ms/step - loss: 0.4036 - accuracy: 0.8142 - val_loss: 0.6669 - val_accuracy: 0.7453 Epoch 2/30 400/400 [==============================] - 70s 174ms/step - loss: 0.8683 - accuracy: 0.8273 - val_loss: 2.7992 - val_accuracy: 0.7753 Epoch 3/30 400/400 [==============================] - 57s 144ms/step - loss: 609.9819 - accuracy: 0.7615 - val_loss: 2737.7434 - val_accuracy: 0.8334 Epoch 4/30 400/400 [==============================] - 82s 206ms/step - loss: 1593462.3750 - accuracy: 0.6877 - val_loss: 8623920.0000 - val_accuracy: 0.4119 Epoch 5/30 400/400 [==============================] - 83s 206ms/step - loss: 122928104.0000 - accuracy: 0.6355 - val_loss: 40522244.0000 - val_accuracy: 0.8847 Epoch 6/30 400/400 [==============================] - 74s 184ms/step - loss: 1330869888.0000 - accuracy: 0.6237 - val_loss: 982521344.0000 - val_accuracy: 0.8397 Epoch 7/30 400/400 [==============================] - 80s 200ms/step - loss: 7602212864.0000 - accuracy: 0.6055 - val_loss: 7827622912.0000 - val_accuracy: 0.5456 Epoch 8/30 400/400 [==============================] - 81s 202ms/step - loss: 26032189440.0000 - accuracy: 0.6040 - val_loss: 42498854912.0000 - val_accuracy: 0.4313 Epoch 9/30 400/400 [==============================] - 66s 165ms/step - loss: 47032242176.0000 - accuracy: 0.6094 - val_loss: 9544238080.0000 - val_accuracy: 0.8656 Epoch 10/30 400/400 [==============================] - 67s 168ms/step - loss: 128274128896.0000 - accuracy: 0.6000 - val_loss: 208268017664.0000 - val_accuracy: 0.2744 Epoch 11/30 400/400 [==============================] - 59s 146ms/step - loss: 240008052736.0000 - accuracy: 0.5925 - val_loss: 75725373440.0000 - val_accuracy: 0.7259 Epoch 12/30 400/400 [==============================] - 42s 104ms/step - loss: 301967835136.0000 - accuracy: 0.6059 - val_loss: 407859658752.0000 - val_accuracy: 0.4909 Epoch 13/30 400/400 [==============================] - 55s 137ms/step - loss: 486205947904.0000 - accuracy: 0.6050 - val_loss: 1384417722368.0000 - val_accuracy: 0.4391 Epoch 14/30 400/400 [==============================] - 43s 107ms/step - loss: 1128794685440.0000 - accuracy: 0.5770 - val_loss: 735326240768.0000 - val_accuracy: 0.5809 Epoch 15/30 400/400 [==============================] - 44s 109ms/step - loss: 1682992136192.0000 - accuracy: 0.5792 - val_loss: 269883539456.0000 - val_accuracy: 0.8350 Epoch 16/30 400/400 [==============================] - 42s 106ms/step - loss: 1772778946560.0000 - accuracy: 0.5947 - val_loss: 345572409344.0000 - val_accuracy: 0.8394 Epoch 17/30 400/400 [==============================] - 41s 104ms/step - loss: 1817638862848.0000 - accuracy: 0.5984 - val_loss: 3161192660992.0000 - val_accuracy: 0.5013 Epoch 18/30 400/400 [==============================] - 42s 104ms/step - loss: 3075231186944.0000 - accuracy: 0.5902 - val_loss: 367501869056.0000 - val_accuracy: 0.8988 Epoch 19/30 400/400 [==============================] - 42s 104ms/step - loss: 3854084079616.0000 - accuracy: 0.5852 - val_loss: 2003185041408.0000 - val_accuracy: 0.5578 Epoch 20/30 400/400 [==============================] - 42s 104ms/step - loss: 5827094118400.0000 - accuracy: 0.5778 - val_loss: 1107054166016.0000 - val_accuracy: 0.8131 Epoch 21/30 400/400 [==============================] - 42s 104ms/step - loss: 7197869211648.0000 - accuracy: 0.5852 - val_loss: 14864055533568.0000 - val_accuracy: 0.1506 Epoch 22/30 400/400 [==============================] - 42s 104ms/step - loss: 10607804809216.0000 - 
accuracy: 0.5769 - val_loss: 4783043248128.0000 - val_accuracy: 0.6831 Epoch 23/30 400/400 [==============================] - 42s 104ms/step - loss: 14316861390848.0000 - accuracy: 0.5797 - val_loss: 4704773865472.0000 - val_accuracy: 0.7094 Epoch 24/30 400/400 [==============================] - 42s 104ms/step - loss: 18032501981184.0000 - accuracy: 0.5738 - val_loss: 34486202925056.0000 - val_accuracy: 0.0291 Epoch 25/30 400/400 [==============================] - 42s 104ms/step - loss: 17980863807488.0000 - accuracy: 0.5898 - val_loss: 5609672409088.0000 - val_accuracy: 0.7353 Epoch 26/30 400/400 [==============================] - 42s 104ms/step - loss: 29986731851776.0000 - accuracy: 0.5722 - val_loss: 85421698580480.0000 - val_accuracy: 0.0000e+00 Epoch 27/30 400/400 [==============================] - 42s 104ms/step - loss: 42101488222208.0000 - accuracy: 0.5700 - val_loss: 61102549368832.0000 - val_accuracy: 0.2134 Epoch 28/30 400/400 [==============================] - 42s 104ms/step - loss: 38928790847488.0000 - accuracy: 0.5879 - val_loss: 36724038172672.0000 - val_accuracy: 0.2400 Epoch 29/30 400/400 [==============================] - 42s 104ms/step - loss: 59314614042624.0000 - accuracy: 0.5813 - val_loss: 205399596728320.0000 - val_accuracy: 0.3913 Epoch 30/30 400/400 [==============================] - 42s 104ms/step - loss: 72781760823296.0000 - accuracy: 0.5737 - val_loss: 191339199201280.0000 - val_accuracy: 0.2197 (8141, 39) (132, 39) Found 260 validated image filenames belonging to 2 classes. Test loss: 87099814445056.0 Test accuracy: 0.5230769515037537 I've tried restarting my computer and reinstalling tensorflow. I also made a pretty simple model that runs on the CIFAR-10 dataset, where the same thing happens, although not to this level. Does anyone know why this is happening? I'm on Python 3.11.4, tensorflow 2.13.0, and tensorflow-metal 1.0.1. I saw this SO post, but I do have MaxPool2D layers, and I'm working on an image classification problem. | I have a similar issue on my Apple Silicon M2 Max. The Keras folks confirmed this is a problem. It will be fixed in TensorFlow 2.15 For now, you can use tf-nightly which doesn't have the same issue. pip install tf-nightly https://github.com/keras-team/keras/issues/18370 | 3 | 1 |
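As an aside to the accepted answer above, the warning quoted in the training log points at tf.keras.optimizers.legacy.Adam on M1/M2 Macs. A minimal sketch of compiling the question's model with that legacy optimizer, offered only as the substitution the warning suggests (it is not confirmed to cure the diverging loss, which the Keras issue attributes to a bug fixed in TensorFlow 2.15 / tf-nightly):

import tensorflow as tf

# Reuses the Sequential `model` defined in the question
model.compile(
    loss="categorical_crossentropy",
    optimizer=tf.keras.optimizers.legacy.Adam(learning_rate=0.001),  # legacy implementation named by the M1/M2 warning
    metrics=["accuracy"],
)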
76,857,722 | 2023-8-8 | https://stackoverflow.com/questions/76857722/huggingface-sft-for-completion-only-not-working | I have a project where I am trying to finetune Llama-2-7b on a dataset for Parameter extraction, which is linked here: <GalaktischeGurke/parameter_extraction_1500_mail_contract_invoice>. The problem with the dataset is that the context for a response is very big, meaning that training on the entire dataset with context, not only on the response results in a huge loss of performance. To fix this issue, I wanted to use SFT_trainer together with the DataCollatorForCompletionOnlyLM, which allows finetuning only for response. Now, before adjusting my training loop, I wanted to try the examples given here: https://huggingface.co/docs/trl/main/en/sft_trainer. Specifically, I used this code from the page: from transformers import AutoModelForCausalLM, AutoTokenizer from datasets import load_dataset from trl import SFTTrainer, DataCollatorForCompletionOnlyLM dataset = load_dataset("timdettmers/openassistant-guanaco", split="train") output_dir = "./results" model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m") tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m") instruction_template = "### Human:" response_template = "### Assistant:" collator = DataCollatorForCompletionOnlyLM(instruction_template=instruction_template, response_template=response_template, tokenizer=tokenizer, mlm=False) trainer = SFTTrainer( model, train_dataset=dataset, dataset_text_field="text", data_collator=collator, ) trainer.train() import os output_dir = os.path.join(output_dir, "final_checkpoint") trainer.model.save_pretrained(output_dir) The training loop did not crash, but it never seemed to train at all - There was no train/loss curve on wandb and the model saved didnt seem to have changed. These are the things I tried: -Using the other code with preformat function -setting packing=False on the trainer -implementing it with my own loop, which yielded the same results -trying to find documentation on the collator, however it is not in the official docs at https://huggingface.co/docs/transformers/main_classes/data_collator Does anyone know what the issue is here? | I have a similar issue. I think you're forgetting to add formatting_func function. Also, by default setting dataset_text_field overrides the use of the collator, so try without that argument. Here's how I call it. It runs and stores things to wandb, but my problem is my loss is always NaN. Lemme know if you found the issue! trainer = SFTTrainer( model, train_dataset=vanilla_data_set, eval_dataset=vanilla_data_set, args=training_args, # dataset_text_field="gpt-4", # torch_dtype=torch.bfloat16, peft_config=peft_config, max_seq_length=512, formatting_func=formatting_prompts_func, data_collator=collator ) | 3 | 1 |
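For reference, the answer above passes formatting_prompts_func without showing it; a minimal sketch of what such a function can look like for the ### Human: / ### Assistant: template from the question (the "prompt" and "completion" field names are assumptions, not the questioner's actual dataset columns):

def formatting_prompts_func(examples):
    # SFTTrainer calls this with a batch and expects a list of fully formatted strings back
    output_texts = []
    for i in range(len(examples["prompt"])):
        text = f"### Human: {examples['prompt'][i]}\n### Assistant: {examples['completion'][i]}"
        output_texts.append(text)
    return output_texts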
76,824,794 | 2023-8-3 | https://stackoverflow.com/questions/76824794/how-can-i-get-an-arrow-to-center-align-with-text-in-matplotlib | I have a chart that I'm annotating some text and would like to draw an arrow to a point, however I cannot get the arrow to center align with the text. It always aligns to the left of the text. I came across this article which shows an arrow tail center aligned with the text and points to the area of concern. This is what I'd like on my chart. import matplotlib.pyplot as plt import numpy as np plt.rcParams['figure.figsize']=(10,6) x= np.arange(0,2*np.pi,0.01) y= np.sin(x) plt.axhline(y=0, color='k',linewidth=1) # Simple arrow plt.annotate('Maximum value of sin(x)',xy=(1.55,0.98),xytext=(1,-0.25), arrowprops={"width":6,"headwidth":20,'headlength':20}, horizontalalignment='center',fontsize=15) plt.plot(x,y,label='sine curve',color='r') plt.legend() plt.grid() plt.show() The result is this: Where the arrow tail is center aligned with the text. However, when I try to use this snippet of code, mine is aligned to the left of the text. See below. import matplotlib.pyplot as plt import numpy as np x = np.arange(0, 200.25, 0.25) x2 = x * 1.852 y1 = (np.sqrt(x2**2 + 72227335.111 + (2*x2)*(8498.66667)*(np.sin(0.5*0.0174533))) - 8498.66667) * 3280.84 y15 = (np.sqrt(x2**2 + 72227335.111 + (2*x2)*(8498.66667)*(np.sin(0.025*0.0174533))) - 8498.66667) * 3280.84 y29 = (np.sqrt(x2**2 + 72227335.111 + (2*x2)*(8498.66667)*(np.sin(0.975*0.0174533))) - 8498.66667) * 3280.84 fig, ax = plt.subplots(figsize = (14,9)) ax.plot(x,y1,lw = 2, color = 'b') ax.plot(x,y15, lw = 0.5, color = 'b', alpha = 0.0) ax.plot(x,y29, lw = 0.5, color = 'b', alpha = 0.0) ax.set_ylim(ymin = 0) ax.set_ylim(ymax = 60000) ax.set_xlim(xmin = 0) ax.set_xlim(xmax = 200) ax.fill_between(x, y15,y29,color='b',alpha=.2) ax.grid(which='major', linestyle='-', linewidth='0.5', color='gray') ax.grid(which='minor', linestyle=':', linewidth='0.5', color='gray') ax.tick_params(axis = 'both', which = 'both', top = False, bottom = False, left = False, right = False, labelsize = 12) plt.annotate('5045 feet', xy = (50, 1500), xytext = (41.8, 7800), fontsize = 12, fontweight = 'bold', arrowprops = dict(arrowstyle = '<->', color = 'red', lw = 2)) plt.annotate('10085 feet', xy = (100, 6500), xytext = (90.5, 18000), fontsize = 12, fontweight = 'bold', arrowprops = dict(arrowstyle = '<->', color = 'red', lw = 2)) plt.annotate('15110 feet', xy = (150, 15000), xytext = (140.5, 31000), fontsize = 12, fontweight = 'bold', arrowprops = dict(arrowstyle = '<->', color = 'red', lw = 2)) plt.annotate(' 20200 feet', xy = (200, 26500), xytext = (190, 48000), fontsize = 12, fontweight = 'bold', arrowprops = dict(arrowstyle = '<->', color = 'red', lw = 2)) plt.annotate('Test Text Goes Here',xy=(21,3000),xytext=(33,20000), arrowprops={"width":6,"headwidth":20,'headlength':20}, horizontalalignment='center',fontsize=15) plt.scatter(20, 1300, s = 200, marker = '*', color = 'purple', zorder = 10) plt.scatter(98, 11600, s = 200, marker = '*', color = 'purple', zorder = 10) plt.show() Resulting in something like this: Is there a way to get this arrow center-aligned like in the top image? | Set xycoords='data' and xytext=(0, y)* and both the horizontal & vertical alignment parameters to "center" for always text-centered [arrow] annotations *Where y corresponds to the apparent heightΒΉ of the arrow. ΒΉ (In this example, measured in units of "offset points" - which will be dependent on the set DPI. 
[default: 72]) E.g., the following modification: labels = [ "Labels", "Composed Of", "[Test Text Goes Here]", "Varying Lengths Increasingly", "Getting Longer With Each Subsequent Label", ][::-1] # Center-aligned text-arrow annotations for x, y in zip(range(21, 21 * 11, 21 * 2), range(3000, 3000 * 20, 3000 * 3)): plt.annotate( labels.pop(), xy=(x, y), xycoords="data", xytext=(0, 140), textcoords="offset points", arrowprops={"width": 6, "headwidth": 20, "headlength": 20}, horizontalalignment="center", verticalalignment="center", fontsize=15, ) plt.scatter(x, y, s=200, marker="*", color="purple", zorder=10) produces: | 3 | 1 |
76,856,317 | 2023-8-8 | https://stackoverflow.com/questions/76856317/creating-an-iceberg-table-on-s3-using-pyiceberg-and-glue-catalog | I am attempting to create an Iceberg Table on S3 using the Glue Catalog and the PyIceberg library. My goal is to define a schema, partitioning specifications, and then create a table using PyIceberg. However, despite multiple attempts, I haven't been able to achieve this successfully and keep encountering an error related to empty path components in metadata paths. Here's a simplified version of the code I'm using: import boto3 from pyiceberg.catalog import load_catalog from pyiceberg.schema import Schema from pyiceberg.types import TimestampType, DoubleType, StringType, NestedField from pyiceberg.partitioning import PartitionSpec, PartitionField from pyiceberg.transforms import YearTransform, MonthTransform, DayTransform def create_iceberg_table(): # Replace with your S3 bucket and table names s3_bucket = "my-bucket-name" table_name = "my-table-name" database_name = "iceberg_catalog" # Define the table schema schema = Schema( NestedField(field_id=1, name="field1", field_type=DoubleType(), required=False), NestedField(field_id=2, name="field2", field_type=StringType(), required=False), # ... more fields ... ) # Define the partitioning specification with transformations partition_spec = PartitionSpec( PartitionField(field_id=3, source_id=3, transform=YearTransform(), name="year"), PartitionField(field_id=3, source_id=3, transform=MonthTransform(), name="month"), # ... more partition fields ... ) # Create the Glue client glue_client = boto3.client("glue") # Specify the catalog URI where Glue should store the metadata catalog_uri = f"s3://{s3_bucket}/catalog" # Load the Glue catalog for the specified database catalog = load_catalog("test", client=glue_client, uri=catalog_uri, type="GLUE") # Create the Iceberg table in the Glue Catalog catalog.create_table( identifier=f"{database_name}.{table_name}", schema=schema, partition_spec=partition_spec, location=f"s3://{s3_bucket}/{table_name}/" ) print("Iceberg table created successfully!") if __name__ == "__main__": create_iceberg_table() My understanding is that the PyIceberg library interacts with the Glue Catalog to manage metadata, schema, and partitions, but I seem to be missing a crucial step or misconfiguring something. How can I properly generate an Iceberg Table on S3 using the Glue Catalog and PyIceberg? 
Traceback: Traceback (most recent call last): File "/home/workspaceuser/app/create_iceberg_tbl.py", line 72, in <module> create_iceberg_table() File "/home/workspaceuser/app/create_iceberg_tbl.py", line 62, in create_iceberg_table catalog.create_table( File "/home/workspaceuser/layers/paketo-buildpacks_cpython/cpython/lib/python3.8/site-packages/pyiceberg/catalog/glue.py", line 220, in create_table self._write_metadata(metadata, io, metadata_location) File "/home/workspaceuser/layers/paketo-buildpacks_cpython/cpython/lib/python3.8/site-packages/pyiceberg/catalog/__init__.py", line 544, in _write_metadata ToOutputFile.table_metadata(metadata, io.new_output(metadata_path)) File "/home/workspaceuser/layers/paketo-buildpacks_cpython/cpython/lib/python3.8/site-packages/pyiceberg/serializers.py", line 71, in table_metadata with output_file.create(overwrite=overwrite) as output_stream: File "/home/workspaceuser/layers/paketo-buildpacks_cpython/cpython/lib/python3.8/site-packages/pyiceberg/io/pyarrow.py", line 256, in create if not overwrite and self.exists() is True: File "/home/workspaceuser/layers/paketo-buildpacks_cpython/cpython/lib/python3.8/site-packages/pyiceberg/io/pyarrow.py", line 200, in exists self._file_info() # raises FileNotFoundError if it does not exist File "/home/workspaceuser/layers/paketo-buildpacks_cpython/cpython/lib/python3.8/site-packages/pyiceberg/io/pyarrow.py", line 182, in _file_info file_info = self._filesystem.get_file_info(self._path) File "pyarrow/_fs.pyx", line 571, in pyarrow._fs.FileSystem.get_file_info File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Empty path component in path ua-weather-data/hourly_forecasts//metadata/00000-232e3e60-1c1a-4eb8-959e-6940b563acd4.metadata.json | I came across this post in LinkedIn that had an example of how to accomplish this - thanks dipankar mazumdar!!! Removed the boto3 library, instantiated the glue catalog with the proper syntax, and created a properly formed catalog.create_table command. 
Here is the adjusted working code: from pyiceberg.catalog import load_catalog from pyiceberg.table import Table from pyiceberg.schema import Schema from pyiceberg.types import DoubleType, StringType, TimestampType, NestedField from pyiceberg.partitioning import PartitionSpec, PartitionField from pyiceberg.transforms import YearTransform, MonthTransform, DayTransform from pyiceberg.table.sorting import SortOrder, SortField from pyiceberg.transforms import IdentityTransform def create_iceberg_table(): # Specify the Glue Catalog database name and URI glue_database_name = "iceberg_catalog" glue_catalog_uri = "s3://ua-weather-data/catalog" # Replace with your Glue Catalog URI # Instantiate glue catalog catalog = load_catalog("glue", **{"type": "glue"}) #catalog = load_catalog(catalog_impl="org.apache.iceberg.aws.glue.GlueCatalog", name=glue_database_name, uri=glue_catalog_uri) # Define the Iceberg schema schema = Schema( NestedField(field_id=1, name="cloudCover", field_type=DoubleType(), required=False), NestedField(field_id=2, name="dayOfWeek", field_type=StringType(), required=False), NestedField(field_id=3, name="dayOrNight", field_type=StringType(), required=False), NestedField(field_id=4, name="expirationTimeUtc", field_type=TimestampType(), required=False), NestedField(field_id=5, name="iconCode", field_type=DoubleType(), required=False), NestedField(field_id=6, name="iconCodeExtend", field_type=DoubleType(), required=False), NestedField(field_id=7, name="precipChance", field_type=DoubleType(), required=False), NestedField(field_id=8, name="precipType", field_type=StringType(), required=False), NestedField(field_id=9, name="pressureMeanSeaLevel", field_type=DoubleType(), required=False), NestedField(field_id=10, name="qpf", field_type=DoubleType(), required=False), NestedField(field_id=11, name="qpfSnow", field_type=DoubleType(), required=False), NestedField(field_id=12, name="relativeHumidity", field_type=DoubleType(), required=False), NestedField(field_id=13, name="temperature", field_type=DoubleType(), required=False), NestedField(field_id=14, name="temperatureFeelsLike", field_type=DoubleType(), required=False), NestedField(field_id=15, name="temperatureHeatIndex", field_type=DoubleType(), required=False), NestedField(field_id=16, name="temperatureWindChill", field_type=DoubleType(), required=False), NestedField(field_id=17, name="uvDescription", field_type=StringType(), required=False), NestedField(field_id=18, name="uvIndex", field_type=DoubleType(), required=False), NestedField(field_id=19, name="validTimeLocal", field_type=TimestampType(), required=True), NestedField(field_id=20, name="validTimeUtc", field_type=DoubleType(), required=False), NestedField(field_id=21, name="visibility", field_type=DoubleType(), required=False), NestedField(field_id=22, name="windDirection", field_type=DoubleType(), required=False), NestedField(field_id=23, name="windDirectionCardinal", field_type=StringType(), required=False), NestedField(field_id=24, name="windGust", field_type=DoubleType(), required=False), NestedField(field_id=25, name="windSpeed", field_type=DoubleType(), required=False), NestedField(field_id=26, name="wxPhraseLong", field_type=StringType(), required=False), NestedField(field_id=27, name="wxPhraseShort", field_type=StringType(), required=False), NestedField(field_id=28, name="wxSeverity", field_type=DoubleType(), required=False), NestedField(field_id=29, name="data_origin", field_type=StringType(), required=True) ) # Define the partitioning specification with year, month, and 
day partition_spec = PartitionSpec( PartitionField(field_id=19, source_id=19, transform=YearTransform(), name="validTimeLocal_year"), PartitionField(field_id=19, source_id=19, transform=MonthTransform(), name="validTimeLocal_month"), PartitionField(field_id=19, source_id=19, transform=DayTransform(), name="validTimeLocal_day") ) # Define the sorting order using validTimeUtc field sort_order = SortOrder(SortField(source_id=20, transform=IdentityTransform())) # Create the Iceberg table using the Iceberg catalog table_name = "iceberg_catalog.hourly_forecasts" catalog.create_table( identifier=table_name, location="s3://ua-weather-data/catalog", schema=schema, partition_spec=partition_spec, sort_order=sort_order ) print("Iceberg table created using AWS Glue Catalog.") if __name__ == "__main__": create_iceberg_table() | 6 | 4 |
76,853,836 | 2023-8-7 | https://stackoverflow.com/questions/76853836/match-a-row-with-the-rows-of-another-table-to-be-able-to-classify-the-row-in-dat | How can I Classify the values of the Clients table with the values of the rows of the Combinations table? I decide to create a combinations table to develop all combinations from main row (Clients Table). I am planning to check that the row of the customers coincides with a row of the combinations table to classify it as sector B (Combinations Table). I have this flow but Dtabricks returns error: for i,j in select_df.iterrows(): for u,v in dfCombinacionesDias.iterrows(): if ( (select_df["MONDAY"][i] == registro["LUNES"][u]) and (select_df["TUESDAY"][i] == registro["MARTES"][u]) and (select_df["WEDNESDAY"][i] == registro["MIERCOLES"][u]) and (select_df["THURSDAY"][i] == registro["JUEVES"][u]) and (select_df["FRIDAY"][i] == registro["VIERNES"][u]) and (select_df["SATURDAY"][i] == registro["SABADO"][u]) and (select_df["SUNDAY"][i] == registro["DOMINGO"][u]) ): Sector = "B" else: Sector = "A" vSubSeq = "('{}','{}')".format(select_df["IDClient"][i],Sector) sqlInsertSequence = "Insert into {0}.{1} values {2}".format(dSCHEMA, Table, vSubSeq,vdataDeltaPath) print(sqlInsertSequence) dfTables = spark.sql(sqlInsertSequence) I add the image with the different tables (Clients, Combinations and Sector) I think that I need a for to loop a table row by row (Combinations table) to compare with a row in clients table if there is a match I save this value in a new table (sector table) and obviously will exist other for to loop the clients table. But I would like to know an algorithm that helps look tables to compare? | Idea I assume that "x" in the posted data example works like a boolean trigger. So why not to replace it with True and empty space with False? After that, we can apply logical operators directly to data. For example, what does it mean that the client's days do not fit in the "Sector B" pattern? Schematically it means any(client_days and not sector_b) is True, as in the following model: import pandas as pd week_days = 'mon tue wed thu fri sat sun'.split() client_days = pd.Series([0,1,0,0,1,0,0], index=week_days) sector_b = pd.Series([1,0,1,0,1,0,0], index=week_days) assert any(client_days & ~sector_b) How to implement this in Pandas pandas 1.5.1 Let's model this idea in Pandas, as if we could apply toPandas to the original data: import pandas as pd week_days = 'mon tue wed thu fri sat sun'.split() data = [ [0,1,0,0,1,0,0], [1,0,1,0,1,0,0], [1,0,1,0,0,0,0], [1,0,0,0,0,0,0], [1,0,0,0,1,0,0], [0,0,1,0,1,0,0], [0,0,0,0,1,0,0], [0,0,1,0,0,0,0], [1,1,1,1,1,1,1], [1,0,1,0,0,0,0], ] clients = pd.DataFrame( data, index=1 + pd.Index(range(len(data)), name='Client'), columns=week_days, dtype=bool ) sectors = pd.DataFrame( data=[[1,0,1,0,1,0,0]], index=pd.Index(['Sector B'], name='sector'), columns=week_days, dtype=bool, ) In this case we could use dot operator, i.e. scalar product, keeping in mind that addition and multiplication correspond to the or/and operations in the case of boolean data: answer = (clients @ ~sectors.loc['Sector B']).map({True: 'A', False: 'B'}) Implementation on PySpark pyspark 3.4.1 Suppose that for some reason we can't use toPandas. 
Let's reorganize data as if they are PySpark DataFrame: from pyspark.sql import SparkSession spark = SparkSession.builder.getOrCreate() clients_sdf = spark.createDataFrame(clients.reset_index()) sectors_sdf = spark.createDataFrame(sectors.reset_index()) How would we implement the idea being restricted to this data type? First, sector's data are small and we can extract them in some sequence (e.g. list). Next, we can apply map for logical AND, then reduce for logical OR, which gives us True for "Sector A" cases and "False" otherwise. After that we apply when from pyspark.sql.functions to map values: from pyspark.sql.functions import lit, when from functools import reduce client_data = clients_sdf[week_days] sector_b = [*sectors_sdf.where('sector == "Sector B"')[week_days].first()] not_in_B = map(lambda x, y: x & lit(not y), client_data, sector_b) is_in_sector_A = reduce(lambda x, y: x | y, not_in_B) client_sector = when(is_in_sector_A, 'A').otherwise('B') answer = clients_sdf.withColumn('Sector', client_sector).select('Client', 'Sector') Output: >>> answer.show() +------+------+ |Client|Sector| +------+------+ | 1| A| | 2| B| | 3| B| | 4| B| | 5| B| | 6| B| | 7| B| | 8| B| | 9| A| | 10| B| +------+------+ General case This is just a fantasy on what it might look like in the general case. Suppose we have these data: import pandas as pd week_days = 'mon tue wed thu fri sat sun'.split() data = [ [0,1,0,1,0,0,0], # changed to fit a new Sector A [1,0,1,0,1,0,0], [1,0,1,0,0,0,0], [1,0,0,0,0,0,0], [1,0,0,0,1,0,0], [0,0,1,0,1,0,0], [0,0,0,0,1,0,0], [0,0,1,0,0,0,0], [1,1,1,1,1,1,1], # fit Sector C [1,0,1,0,0,0,0], ] clients = pd.DataFrame( data, index=1 + pd.Index(range(len(data)), name='Client'), columns=week_days, dtype=bool ) sectors = pd.DataFrame( # add Sector A, Sector C data=[[0,1,0,1,0,1,0], [1,0,1,0,1,0,0], [1,1,1,1,1,1,1]], index=pd.Index(['Sector A', 'Sector B', 'Sector C'], name='sector'), columns=week_days, dtype=bool, ) We can see here 3 sectors presumably arranged in descending order of their priority, which we might want to represent in the final frame by their last letter. Let's do it in Pandas: isin_sector = ~(clients @ ~sectors.T) answer = ( isin_sector .apply(lambda column: column.map({True: column.name[-1]})) .agg(lambda row: row.dropna()[0], axis='columns') ) display(answer) Now in PySpark, trying to avoid Pandas API. Here, when applying coalesce, I rely on the fact that dictionaries in Python preserve the order in which items are added: from pyspark.sql import SparkSession from pyspark.sql.functions import lit, coalesce from functools import reduce spark = SparkSession.builder.getOrCreate() clients_sdf = spark.createDataFrame(clients.reset_index()) sectors_sdf = spark.createDataFrame(sectors.reset_index()) client_data = clients_sdf[week_days] def is_in_sector(sector): '''sector : a boolean sequence''' return ~reduce(lambda x, y: x | y, map(lambda x, y: x & lit(not y), client_data, sector)) sectors = { (rest:=rec.asDict()).pop('sector')[-1]: is_in_sector(rest.values()) for rec in sectors_sdf.collect() } client_sector = coalesce( *(when(is_in_sec, sec_name) for sec_name, is_in_sec in sectors.items()) ) answer = clients_sdf.withColumn('Sector', client_sector).select('Client', 'Sector') answer.show() | 4 | 0 |
76,862,713 | 2023-8-8 | https://stackoverflow.com/questions/76862713/sqlalchemy-2-0-orm-filter-show-wrong-type-in-pycharm | I'm using PyCharm to develop an app with SQLAlchemy 2.0. When I attempt to query a table using the ORM approach, PyCharm always displays a type error on the filter expression. For example, in the code snippet below: with Session(engine) as session: session.scalars(select(Albums.AlbumId).where(Albums.Id > user_last_sync_id)) ^^^ shows the wrong type I get the following message: Expected type 'ColumnElement[bool] | _HasClauseElement | SQLCoreOperations[bool] | ExpressionElementRole[bool] | () -> ColumnElement[bool] | LambdaElement', got 'bool' instead Even though it indicates a type error, the script still executes (and returns the correct data) without showing any error messages. What could be causing this issue? Is there a way to make the code more "correct" so that PyCharm doesn't display the type error? | PyCharm assumes that expressions of the form a > b evaluate to bool when no other type information is available. Most likely, SQLAlchemy isn't providing rich enough type hints and/or stub files for PyCharm to correctly infer the type of that expression. To resolve that warning, you can inform PyCharm about the true type of the expression using typing.cast: .where( cast("ColumnElement[bool]", Albums.Id > user_last_sync_id) ) Alternatively, you can suppress the warning by adding # type: ignore to the end of the line. | 7 | 17 |
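For convenience, a slightly fuller version of the cast workaround from the answer, with the imports spelled out (engine, Albums and user_last_sync_id are the objects from the question; importing ColumnElement helps PyCharm resolve the string annotation):

from typing import cast

from sqlalchemy import select
from sqlalchemy.orm import Session
from sqlalchemy.sql.expression import ColumnElement  # imported so the "ColumnElement[bool]" forward reference resolves

with Session(engine) as session:
    stmt = select(Albums.AlbumId).where(
        # typing.cast only affects the static type; the expression is unchanged at runtime
        cast("ColumnElement[bool]", Albums.Id > user_last_sync_id)
    )
    results = session.scalars(stmt).all()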
76,860,320 | 2023-8-8 | https://stackoverflow.com/questions/76860320/plotting-a-pandas-dataframe-with-rgb-values-and-coordinates | I have a pandas DataFrame with the columns ["x", "y", "r", "g", "b"] where x and y denote the coordinates of a pixel and r, g, b denote its RGB value. The rows contain entries for each coordinate of a grid of pixels and are unique. How can I display this DataFrame using matplotlibs's imshow()? This requires reshaping the data into a array of shape (M, N, 3). My usual approach of using plt.imshow(df.pivot(columns="x", index="y", values="i"), interpolation="nearest") does only work for greyscale images. Placing ["r", "g", "b"] as the values argument yields a DataFrame with a MultiIndex as columns. However I fail to convert this into a correct image. Simply calling .reshape(M, N, 3) creates a wrong image. I also had the idea of creating a new column with df["rgb"] = list(zip(df.r, df.g, df.b)) However I'm not sure on how to convert the resulting tuples into a new axis for the ndarray. | There exists an easy way to do this. First, you make sure the DataFrame is sorted by x- and y-values using df = df.sort_values(by=['x', 'y']). Next, you select only the three columns for r, g and b from the DataFrame by calling df[['r', 'g', 'b']]. You convert the values into a numpy array by calling df[['r', 'g', 'b']].values, which will return an array of the shape (M*N, 3), assuming that M and N are the width and height of your image. Now, reshape that array into the shape (M, N, 3) and you are done. df = df.sort_values(by=['x', 'y']) values = df[['r', 'g', 'b']].values image = values.reshape(df['x'].max() + 1 , df['y'].max() + 1, 3) I'm assuming here that your x and y values in the DataFrame start at 0, therefore I add 1 for the dimensions. If your x and y values start at 1, the reshaping can be done like this (df['x'].max(), df['y'].max(), 3). Depending on what you consider the x and y dimensions of your image, you might have to transpose the array in the end. | 2 | 6 |
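Putting the accepted answer's steps together with the imshow call, a small end-to-end sketch; pixels stands in for the questioner's DataFrame with x, y, r, g, b columns, and it assumes 0-based integer coordinates and 0-255 RGB values. The final transpose makes y the row axis, which may or may not be needed depending on how x and y were defined (as the answer notes):

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

def df_to_image(df: pd.DataFrame) -> np.ndarray:
    df = df.sort_values(by=["x", "y"])
    width = df["x"].max() + 1
    height = df["y"].max() + 1
    # rows are x-major after the sort, so the first reshape axis is x
    image = df[["r", "g", "b"]].values.reshape(width, height, 3)
    # swap the first two axes so rows correspond to y and columns to x
    return image.transpose(1, 0, 2).astype(np.uint8)

plt.imshow(df_to_image(pixels), interpolation="nearest")
plt.show()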
76,860,119 | 2023-8-8 | https://stackoverflow.com/questions/76860119/append-first-item-to-end-of-iterable-in-python | I need to append the first item of a (general) iterable as the final item of that iterable (thus "closing the loop"). I've come up with the following: from collections.abc import Iterable from itertools import chain def close_loop(iterable: Iterable) -> Iterable: iterator = iter(iterable) first = next(iterator) return chain([first], iterator, [first]) # Examples: list(close_loop(range(5))) # [0, 1, 2, 3, 4, 0] ''.join(close_loop('abc')) # 'abca' Which works, but seems a bit "clumsy". I was wondering if there's a more straightforward approach using the magic of itertools. Solutions using more_itertools are also highly welcome. | Here is a version which doesn't use itertools def close_loop(iterable): iterator = iter(iterable) first = next(iterator) yield first yield from iterator yield first | 3 | 6 |
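One behavioural difference between the question's chain-based helper and the answer's generator is worth a quick sketch: the chain version fails eagerly for an empty iterable, while the generator is lazy and only fails once iterated, where PEP 479 turns the internal StopIteration into a RuntimeError (close_loop_chain below is just the question's version renamed so both stay in scope):

# chain-based version from the question (renamed close_loop_chain): fails at call time
try:
    close_loop_chain([])
except StopIteration:
    print("empty input rejected immediately")

# generator version from the answer: fails on first iteration
gen = close_loop([])
try:
    next(gen)
except RuntimeError as exc:  # PEP 479 wraps the inner StopIteration
    print(f"empty input rejected lazily: {exc}")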
76,859,963 | 2023-8-8 | https://stackoverflow.com/questions/76859963/how-can-i-log-sql-queries-in-the-sqlmodel | How can I see/log the queries that SQLModel sends to the database? | Since SQLModel uses SQLAlchemy as its backend ORM engine, we can use the standard SQLAlchemy logging approach, as in the answer here: import logging logging.basicConfig() logger = logging.getLogger('sqlalchemy.engine') logger.setLevel(logging.DEBUG) # run sqlmodel code after this You should be able to see the SQL queries in the console. | 2 | 4 |
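An alternative to configuring the logger directly: SQLModel re-exports SQLAlchemy's create_engine, so passing echo=True when building the engine makes it log every SQL statement it emits. A minimal sketch (the SQLite URL is just an example):

from sqlmodel import create_engine

# echo=True logs each emitted SQL statement and its parameters
engine = create_engine("sqlite:///database.db", echo=True)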
76,858,406 | 2023-8-8 | https://stackoverflow.com/questions/76858406/relationship-between-python-asyncio-loop-and-executor | I generally understand the concept of async vs threads/processes, I am just a bit confused when I am reading about the event loop of asyncio. When you use asyncio.run() I presume it creates an event loop? Does this event loop use an executor? The link above says the event loop will use the default executor, which after a brief google search it appears to be ThreadPoolExecutor. I am a bit confused if I am writing coroutines why would that use ThreadPoolExecutor since I don't want to execute my async code in threads. | When you use asyncio.run() I presume it creates an event loop? Yes, from https://docs.python.org/3/library/asyncio-runner.html#asyncio.run: This function always creates a new event loop and closes it at the end. Does this event loop use an executor? The event loop provides single-threaded concurrency for IO-bound work. For CPU-bound work, which would otherwise block the event loop, asyncio lets you execute code outside the event loop in a separate thread or process, by using an executor. This is done by calling run_in_executor. It wraps the executor API and returns an awaitable which lets you use it transparently in async/await code, whereas if you used an executor directly, you would have to deal with futures and callbacks.1 The event loop has a default executor which will be used by run_in_executor if no other executor is provided as argument. A custom default executor can be provided by calling set_default_executor. If you don't call run_in_executor yourself, the code you pass to asyncio.run will not be executed in an executor, but asyncio might still use the default executor internally for some of its own code, I guess. asyncio.run will automatically shut down the default executor at the end by calling shutdown_default_executor internally (since Python 3.9).2 1 To illustrate the differences, suppose you want to do some CPU-intensive calculation in a separate thread and then report on the result. Using ThreadPoolExecutor and Future.add_done_callback: from concurrent.futures import ThreadPoolExecutor from time import sleep def calculate(): print("Calculating...") sleep(1) return 42 def report(future): print(f"The result is {future.result()}") with ThreadPoolExecutor() as executor: future = executor.submit(calculate) future.add_done_callback(report) Using ThreadPoolExecutor and concurrent.futures.as_completed: from concurrent.futures import ThreadPoolExecutor, as_completed from time import sleep def calculate(): print("Calculating...") sleep(1) return 42 with ThreadPoolExecutor() as executor: futures = [executor.submit(calculate)] for future in as_completed(futures): print(f"The result is {future.result()}") Using asyncio: import asyncio from time import sleep def calculate(): print("Calculating...") sleep(1) return 42 async def main(): loop = asyncio.get_event_loop() result = await loop.run_in_executor(None, calculate) print(f"The result is {result}") asyncio.run(main()) 2 I'm not sure if the remark about the change in Python 3.9 in https://docs.python.org/3/library/asyncio-runner.html#asyncio.run means that in earlier versions it did not shut down the executor at all, or if it did it by other means than calling shutdown_default_executor: This function runs the passed coroutine, taking care of managing [...], and closing the threadpool. [...] Changed in version 3.9: Updated to use loop.shutdown_default_executor(). | 2 | 3 |
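The answer above mentions set_default_executor without showing it; a short sketch of installing a custom default executor that run_in_executor(None, ...) will then pick up (the pool size is arbitrary, and current Python versions expect the default executor to be a ThreadPoolExecutor):

import asyncio
from concurrent.futures import ThreadPoolExecutor
from time import sleep

def blocking_work() -> int:
    sleep(1)  # stands in for blocking code that would otherwise stall the event loop
    return 42

async def main() -> None:
    loop = asyncio.get_running_loop()
    # becomes the executor used whenever None is passed to run_in_executor
    loop.set_default_executor(ThreadPoolExecutor(max_workers=4))
    result = await loop.run_in_executor(None, blocking_work)
    print(f"The result is {result}")

asyncio.run(main())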
76,858,513 | 2023-8-8 | https://stackoverflow.com/questions/76858513/python-pytz-zoneinfo-and-daylight-savings-time | I am currently attempting to migrate a code base from using pytz to using Python's zoneinfo library. I've run into an issue with how zoneinfo handles daylight saving time transitions when compared to how pytz handles them. Suppose I have a naive datetime object: >>> import datetime >>> start = datetime.datetime(2021, 10, 30, 23, 0) The approach used for making this object timezone-aware using pytz was to use localize, as follows: >>> local_timezone = pytz.timezone('Europe/Copenhagen') >>> start_cet = local_timezone.localize(start) >>> start_cet datetime.datetime(2021, 10, 30, 23, 0, tzinfo=<DstTzInfo 'Europe/Copenhagen' CEST+2:00:00 DST>) Notice that start_cet is very close to the end of CEST 2021. If I add, say, 5 hours onto this time, I get the following: >>> end = start_cet + datetime.timedelta(hours=5) >>> end datetime.datetime(2021, 10, 31, 4, 0, tzinfo=<DstTzInfo 'Europe/Copenhagen' CEST+2:00:00 DST>) Now, in reality, adding 5 hours on to start_cet should in fact land us on 3:00 on 2021/10/31, since leaving DST takes us back one hour, moving from CEST to CET. We can get the correct value by applying astimezone with our pytz timezone: >>> end.astimezone(local_timezone) datetime.datetime(2021, 10, 31, 3, 0, tzinfo=<DstTzInfo 'Europe/Copenhagen' CET+1:00:00 STD>) Question: What is the equivalent method of finding the actual time when moving through DST when using zoneinfo? Here's what I've tried. zoneinfo has no localize method, instead we are recommended to simply use replace: >>> local_timezone = zoneinfo.ZoneInfo('Europe/Copenhagen') >>> start_cet = start.replace(tzinfo=local_timezone) >>> start_cet datetime.datetime(2021, 10, 30, 23, 0, tzinfo=backports.zoneinfo.ZoneInfo(key='Europe/Copenhagen')) >>> start_cet.utcoffset() datetime.timedelta(seconds=7200) zoneinfo has the benefit over pytz that adding a timedelta will successfully transition us into the correct timezone (as can be seen by the utcoffset result: >>> end = start_cet + datetime.timedelta(hours=5) >>> end datetime.datetime(2021, 10, 31, 4, 0, tzinfo=backports.zoneinfo.ZoneInfo(key='Europe/Copenhagen')) >>> end.utcoffset() datetime.timedelta(seconds=3600) Despite this, with zoneinfo timezones there seems to be no way to get what the actual time would have been when adding 5 hours onto start_cet. As shown above, with pytz I can use astimezone, but with zoneinfo, this does not change anything: >>> end.astimezone(tz=local_timezone) datetime.datetime(2021, 10, 31, 4, 0, tzinfo=backports.zoneinfo.ZoneInfo(key='Europe/Copenhagen')) | In addition to @deceze's comment (CET specifies a UTC offset, not a time zone), note that timedelta arithmetic in Python is wall time arithmetic. from datetime import datetime, timedelta from zoneinfo import ZoneInfo eu_berlin_tz = ZoneInfo("Europe/Berlin") start = datetime(2021, 10, 30, 23, 0, tzinfo=eu_berlin_tz) dur = timedelta(hours=5) print(start) # 2021-10-30 23:00:00+02:00 => CEST print(start + dur) # 2021-10-31 04:00:00+01:00 => CET 2021-10-30T23:00:00+02:00 to 2021-10-31T04:00:00+01:00 is 5 hours on a wall clock, although a duration of 6 hours passed in reality due to DST becoming inactive. 
If you add the duration in UTC and then convert back to the specified time zone, the result is more what you expect IIUC: def aware_add(dt, duration): return (dt.astimezone(ZoneInfo("UTC")) + duration).astimezone(dt.tzinfo) print(aware_add(start, dur)) # 2021-10-31 03:00:00+01:00 2021-10-30T23:00:00+02:00 to 2021-10-31T03:00:00+01:00 is 4 hours on the wall clock but 5 hours in reality. | 3 | 3 |
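A small check that follows from the answer above: the UTC offsets before and after the wall-clock addition differ, and comparing the two stamps on the UTC timeline recovers the real elapsed duration. A sketch using the same Europe/Berlin example:
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

tz = ZoneInfo("Europe/Berlin")
start = datetime(2021, 10, 30, 23, 0, tzinfo=tz)
end = start + timedelta(hours=5)             # wall-clock result: 2021-10-31 04:00+01:00

print(start.utcoffset(), end.utcoffset())    # 2:00:00 1:00:00 -> DST ended in between

# real elapsed time between the two instants: compare them in UTC
elapsed = end.astimezone(timezone.utc) - start.astimezone(timezone.utc)
print(elapsed)                               # 6:00:00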
76,858,143 | 2023-8-8 | https://stackoverflow.com/questions/76858143/is-there-a-simple-way-to-extract-variable-values-given-a-formatted-f-string-and | Assume we have an f-string-style template: "{kid} ate {number} {fruit}" and a formatted version of that: "Jack ate 42 apples" How could I, very generally, extract "Jack", "42", and "apples" from this string based on the fact that they match kid, number, and fruit respectively? Assume that the f-string could be anything and contain any number of simple string or integer variables (no formatting instructions), and that the string for extraction has already been generated from that f-string, and so matches it perfectly. This also means that I can't use regex rather than f-string-style formatting, unless there's a way to generate a regex based on the f-string. I already have something that works for my case, but it's not general, and feels a little hacky to me. I feel like there must be something simpler that I just can't find the right words to google:
template = "{kid} ate {number} {fruit}"
example = "Jack ate 42 apples"
extracted = {"kid": "", "number": "", "fruit": ""}

for variable in extracted.keys():
    extracted[variable] = example
    for chunk in template.split(f"{{{variable}}}"):
        extracted[variable] = extracted[variable].replace(chunk, "")
If it's not possible to get both more simple and more general, then more simple with the same level of generality would be great too :) | You can do this with regex and named match groups like this: To create a named group: (?P<name>...) To match any characters an unlimited number of times: .+
import re

pattern = r"(?P<name>.+) ate (?P<number>.+) (?P<fruit>.+)"
text = "John ate 3 apples"

match = re.match(pattern, text)

if match:
    name = match.group('name')
    number = match.group('number')
    fruit = match.group('fruit')
    print(f"{name} ate {number} {fruit}")
else:
    print("Pattern not found in the text.")
I hope this is what you want | 2 | 2
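A sketch of the "generate a regex from the f-string template" idea the question asks about, building on the answer above; the extract helper name is made up for illustration, and the third-party parse package on PyPI does essentially the same job if a dependency is acceptable.
import re

def extract(template: str, formatted: str) -> dict:
    # escape the literal text, then turn each {name} placeholder into a named group
    pattern = re.sub(r'\\\{(\w+)\\\}', r'(?P<\1>.+?)', re.escape(template))
    match = re.fullmatch(pattern, formatted)
    return match.groupdict() if match else {}

print(extract("{kid} ate {number} {fruit}", "Jack ate 42 apples"))
# {'kid': 'Jack', 'number': '42', 'fruit': 'apples'}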
76,858,073 | 2023-8-8 | https://stackoverflow.com/questions/76858073/printing-slowly-to-mimic-typing | I'm trying to make a text based game and I want the text to print out slowly to simulate typing. I also want it to be an input. Is it possible to do this? I saw someone else use this code(bolded is what i added): import sys,time def sinput(str): for c in str + '\n': sys.stdout.write(c) sys.stdout.flush() time.sleep(4./90) slowtest=sinput('Does it work? ') **if slowtest == 'yes': print('Amazing!')** And it was able to print out the code slowly just like I wanted, unfortunately I can not answer the question ("Does it work?"). | You can assign input() to slowtest. sinput() function is not returning anything. Once you print slowly you can use input() for slowtest variable then check the if condition. import sys,time def sinput(str): for c in str + '\n': sys.stdout.write(c) sys.stdout.flush() time.sleep(4./90) sinput('Does it work? ') slowtest = input() if slowtest == 'yes': print('Amazing!') | 3 | 3 |
76,855,796 | 2023-8-8 | https://stackoverflow.com/questions/76855796/how-can-i-slice-a-numpy-array-into-another-numpy-array-of-different-size | I'm trying to broadcast the contents of one array into another array like this: A = np.array([[1, 3], [2, 4]]) A_broadcast = np.array([[1, 0, 3, 0], [0, 2, 0, 4], [1, 2, 3, 4]]) My current approach is by initializing A_broadcast with np.zeros((3, 4)) and slicing the contents of A into A_broadcast one line at a time like this: A_broadcast[::2][0] = A[0] A_broadcast[1::2][1] = A[1] A_broadcast[::2][2] = A[0] A_broadcast[1::2][2] = A[1] But I get this error: ValueError: could not broadcast input array from shape (2,) into shape (4,) This approach works in Matlab so I thought something similar would work here. Is there a way this approach can work? If not, what could I do to get a similar effect? | Your 2 arrays: In [86]: A = np.array([[1, 3], [2, 4]]) ...: A_broadcast = np.array([[1, 0, 3, 0], [0, 2, 0, 4], [1, 2, 3, 4]]) In [88]: A Out[88]: array([[1, 3], [2, 4]]) In [89]: A_broadcast Out[89]: array([[1, 0, 3, 0], [0, 2, 0, 4], [1, 2, 3, 4]]) the blank: In [87]: res = np.zeros((3,4),int) The first row of A goes into a square that alternates in both directions: In [90]: res[::2,::2]=A[0] In [91]: res Out[91]: array([[1, 0, 3, 0], [0, 0, 0, 0], [1, 0, 3, 0]]) Actually I wasn't sure if A[0] was right its (2,1) version. I could have analzed the case, but instead just tried it. Sometimes in an interactive session it's easier to try a few alternatives than to carefully work out the details in my head before hand. It's easy to experient in an interactive session with modules like numpy. Now it's easy to do the same with the 2nd row, In [92]: res[1:,1::2]=A[1] In [93]: res Out[93]: array([[1, 0, 3, 0], [0, 2, 0, 4], [1, 2, 3, 4]]) Nothing wrong with indexing the target one row at a time as you attempt: In [94]: res[0,::2] Out[94]: array([1, 3]) In [95]: res[2,1::2] Out[95]: array([2, 4]) Both rows could also be indexed with advanced indexing - we just have to get the broadcasting right, using a (2,1), and (2,) arrays (or list equivalents): In [96]: res[[[0],[2]],[0,2]] Out[96]: array([[1, 3], [1, 3]]) In [97]: res[[[1],[2]],[1,3]] Out[97]: array([[2, 4], [2, 4]]) | 2 | 1 |
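For the concrete 3x4 target in the question, the same idea condenses to a short sketch; note the last row is just A read column by column, which is one reading of the example, so treat that line as an assumption about the intended pattern.
import numpy as np

A = np.array([[1, 3], [2, 4]])

res = np.zeros((3, 4), dtype=A.dtype)
res[0, ::2] = A[0]        # [1, 0, 3, 0]
res[1, 1::2] = A[1]       # [0, 2, 0, 4]
res[2] = A.T.ravel()      # [1, 2, 3, 4]
print(res)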
76,855,609 | 2023-8-7 | https://stackoverflow.com/questions/76855609/automatically-send-all-elements-in-list-inside-of-a-for-loop-list-comprehension | I access a function via this list comprehension and have made it work by explicitly creating a variable for each element of the list_of_lists. I want a better way to access the elements in the function in a list comprehension. Example: list_of_lists = [[0, 1, 2, 3], [0, 1, 2, 3], ...] [function(i, j, k, l) for i, j, k, l in list_of_lists] It's a very annoying syntax as I need to update (i, j, k, l) for i, j, k, l if the number of elements in a sub-list of list_of_lists changes. E.g., a change to [[0, 1, 2, 3, 4, 5, 6], ...] needs the syntax to be (i, j, k, l, m, n) for i, j, k, l, m, n and I need to be sure I do not miscount. This gets worse for more elements per sub-list and if the function changes during coding. Is there a way to say something like: [function(*) for * in list_of_lists] So my woes are ameliorated? I tried searching for something like this but it's clear I don't know the right words to be able to search this. | You're looking for * to unpack the list https://peps.python.org/pep-3132/ [function(*sublist) for sublist in list_of_lists] >>> def foo(a, b, c, d): ... return a + b + c + d ... >>> lst = [1, 2, 3, 4] >>> foo(*lst) # iterable unpack 10 >>> d = {'a':1, 'b':2, 'c':3, 'd':4} >>> foo(**d) # dict unpack 10 | 2 | 4 |
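A standard-library alternative worth knowing next to the * unpacking above: itertools.starmap does the per-row unpacking for you, which reads nicely when the function call is the whole comprehension.
from itertools import starmap

def foo(a, b, c, d):
    return a + b + c + d

list_of_lists = [[0, 1, 2, 3], [4, 5, 6, 7]]

# starmap(foo, rows) calls foo(*row) for each row
print(list(starmap(foo, list_of_lists)))   # [6, 22]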
76,849,633 | 2023-8-7 | https://stackoverflow.com/questions/76849633/selenium-4-11-2-with-chromedriver-and-chrome | I'm trying to run this simple code.. But I get a error that I can not fix. Can someone help me ? Chrome driver is installed I check: pi@Rpi:~ $ chromedriver --version ChromeDriver 92.0.4515.98 (564abd8de2c05f45308eec14f9110a10aff40ad9-refs/branch-heads/4515@{#1501}) Code: from selenium import webdriver driver=webdriver.Chrome() driver.get("https://www.google.com/") Error: Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/selenium/webdriver/common/selenium_manager.py", line 123, in run completed_proc = subprocess.run(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE) File "/usr/lib/python3.7/subprocess.py", line 472, in run with Popen(*popenargs, **kwargs) as process: File "/usr/lib/python3.7/subprocess.py", line 775, in __init__ restore_signals, start_new_session) File "/usr/lib/python3.7/subprocess.py", line 1522, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) OSError: [Errno 8] Exec format error: '/usr/local/lib/python3.7/dist-packages/selenium/webdriver/common/linux/selenium-manager' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/selenium/webdriver/common/driver_finder.py", line 38, in get_path path = SeleniumManager().driver_location(options) if path is None else path File "/usr/local/lib/python3.7/dist-packages/selenium/webdriver/common/selenium_manager.py", line 90, in driver_location output = self.run(args) File "/usr/local/lib/python3.7/dist-packages/selenium/webdriver/common/selenium_manager.py", line 129, in run raise WebDriverException(f"Unsuccessful command executed: {command}") from err selenium.common.exceptions.WebDriverException: Message: Unsuccessful command executed: /usr/local/lib/python3.7/dist-packages/selenium/webdriver/common/linux/selenium-manager --browser chrome --output json The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/pi/src/test.py", line 3, in <module> driver=webdriver.Chrome() File "/usr/local/lib/python3.7/dist-packages/selenium/webdriver/chrome/webdriver.py", line 50, in __init__ keep_alive, File "/usr/local/lib/python3.7/dist-packages/selenium/webdriver/chromium/webdriver.py", line 51, in __init__ self.service.path = DriverFinder.get_path(self.service, options) File "/usr/local/lib/python3.7/dist-packages/selenium/webdriver/common/driver_finder.py", line 41, in get_path raise NoSuchDriverException(msg) from err selenium.common.exceptions.NoSuchDriverException: Message: Unable to obtain driver for chrome using Selenium Manager.; For documentation on this error, please visit: https://www.selenium.dev/documentation/webdriver/troubleshooting/errors/driver_location I tried reinstalling selenium couple of times. I also tried reinstalling webdriver. Tried to locate the path. But I read it is not necessary anymore. I put the Chromedriver in the Same directory as the Python project Snapshot of project + Chromedriver: | I put the Chromedriver in the Same directory as the Python project As you are using Selenium v4.11.2 you don't need to explicitly download ChromeDriver, GeckoDriver or any browser drivers or even need use webdriver_manager any more. You just need to ensure that the desired browser client i.e. google-chrome, firefox or microsoft-edge is installed. 
Selenium Manager Selenium Manager is the new tool integrated with selenium4 that would help to get a working environment to run Selenium out of the box. Beta 1 of Selenium Manager will configure the browser drivers for Chrome, Firefox, Edge, etc, browser clients, if they are not present on the PATH. Solution As a solution you can simply do: from selenium import webdriver driver = webdriver.Chrome() driver.get("https://www.google.com/") tl; dr Chrome for Testing: reliable downloads for browser automation | 3 | 1 |
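A slightly fuller sketch of what the answer recommends — still no driver path anywhere, since Selenium Manager (Selenium 4.6+) resolves a matching chromedriver itself; the headless flag is optional and only included as an example.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")   # optional: run without a visible window

# no executable_path / Service path needed; Selenium Manager fetches the driver
driver = webdriver.Chrome(options=options)
driver.get("https://www.google.com/")
print(driver.title)
driver.quit()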
76,853,872 | 2023-8-7 | https://stackoverflow.com/questions/76853872/slicestart-stop-none-vs-slicestart-stop-1 | I was surprised to read here that The start and step arguments default to None since it also says: slice(start, stop, step=1) Return a slice object representing the set of indices specified by range(start, stop, step). So I expected the default argument value for the step parameter to be 1. I know that slice(a, b, None) == slice(a, b, 1) returns False, but I am curious if slice(a, b, None) always returns the same slice as slice(a, b, 1), or if there is some example that I haven't been able to think of for which they will return different slices. I couldn't find anything about this in the extensive post on slicing here | Slice's step indeed defaults to None, but using step 1 and None should be equivalent for all practical purposes. That's because in the C code where the step is actually used, there are checks which transform None into 1 anyway: int PySlice_GetIndices(PyObject *_r, Py_ssize_t length, Py_ssize_t *start, Py_ssize_t *stop, Py_ssize_t *step) { PySliceObject *r = (PySliceObject*)_r; if (r->step == Py_None) { *step = 1; } ... } And: int PySlice_Unpack(PyObject *_r, Py_ssize_t *start, Py_ssize_t *stop, Py_ssize_t *step) { PySliceObject *r = (PySliceObject*)_r; ... if (r->step == Py_None) { *step = 1; } ... } If you're wondering why they don't just default to 1 instead, perhaps it's because users may still want to slice using None explicitly (e.g. L[1:2:None]), and/or to give 3rd-party types the opportunity to handle 1 and None differently in their __getitem__/__setitem__/__delitem__ implementation. | 10 | 7 |
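The equivalence can also be observed from Python without reading the C source: slice.indices() normalizes both forms to the same concrete (start, stop, step) triple, and slicing with either gives the same result. A quick check:
L = list(range(10))

a, b = 2, 8
print(slice(a, b, None).indices(len(L)))   # (2, 8, 1)
print(slice(a, b, 1).indices(len(L)))      # (2, 8, 1)

print(L[slice(a, b, None)] == L[slice(a, b, 1)])   # True
print(L[a:b:None] == L[a:b:1])                     # True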
76,851,676 | 2023-8-7 | https://stackoverflow.com/questions/76851676/using-the-same-template-recursively-in-jinja2 | Suppose I have a tree-like data structure in Python:
from __future__ import annotations
from typing import Iterable, Optional

class Node:
    def __init__(self, name: str, neighbors: Optional[Iterable[Node]] = None) -> None:
        self.name = name
        self.neighbors = neighbors or []

grand_child1 = Node("grand_child1")
grand_child2 = Node("grand_child2")
child = Node("child", [grand_child1, grand_child2])
root = Node("root", [child])
Now I want to render this structure into a ul in HTML. I want the output to be something like this:
<ul>
  <li>root</li>
  <ul>
    <li>child</li>
    <ul>
      <li>grand_child1</li>
      <li>grand_child2</li>
    </ul>
  </ul>
</ul>
To do this I want to use a Jinja template. What I want to extract from each node is the same, it's just that I want to capture the tree structure and avoid duplicate code. Is there a way to use a template recursively? Something like this:
<ul>
{% for root in roots %}
    <li>{{ root.name }}</li>
    {# this template applied for the neighbors #}
{% endfor %}
</ul> | Turns out you can define macros in jinja which work like functions. Something like this will do the trick:
{% macro make_ul(roots) -%}
<ul>
    {% for root in roots %}
    <li>{{ root.name }}</li>
    {{ make_ul(root.neighbors) }}
    {% endfor %}
</ul>
{%- endmacro %}

{{ make_ul(roots) }}
This way the macro will call itself recursively till it hits the leaves of the tree. | 3 | 3
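Jinja also has this built in: a for loop marked recursive exposes a loop() callable, so the macro can be avoided. A sketch assuming the question's Node class (with the attribute spelled neighbors) and the root object built there:
from jinja2 import Template

tmpl = Template("""
<ul>
{%- for node in roots recursive %}
  <li>{{ node.name }}</li>
  {%- if node.neighbors %}
  <ul>{{ loop(node.neighbors) }}</ul>
  {%- endif %}
{%- endfor %}
</ul>
""")

print(tmpl.render(roots=[root]))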
76,835,264 | 2023-8-4 | https://stackoverflow.com/questions/76835264/memory-leak-in-multithreaded-python-project | I have a small project of mine (please keep in mind that I'm just a python beginner). This project consists of few smaller .py files. First there is main.py that looks like this: from Controller import Controller import config as cfg if __name__ == "__main__": for path in cfg.paths.values(): if not os.path.exists(path): os.system(f"mkdir {path} -p") Con = Controller() Con.start() so this program just creates some directories, creates Controller object and run its method. Controller.py loks like this: import multiprocessing import watchdog import watchdog.events import watchdog.observers import watchdog.utils.dirsnapshot import concurrent.futures from AM import AM import config as cfg m = multiprocessing.Manager() q = m.Queue() class Handler(watchdog.events.PatternMatchingEventHandler): def __init__(self): # Set the patterns for PatternMatchingEventHandler watchdog.events.PatternMatchingEventHandler.__init__(self, patterns=['TEST*'], ignore_directories=True, case_sensitive=False) def on_created(self, event): logging.info("AM Watchdog received created event - % s." % event.src_path) q.put(event.src_path) def on_moved(self, event): logging.info("AM Watchdog received modified event - % s." % event.src_path) q.put(event.src_path) class Controller: def __init__(self): pass def _start_func(self, newFname): try: res = AM(f"{newFname}").start() return res except: return 1 def start(self): event_handler = Handler() observer = watchdog.observers.Observer() observer.schedule(event_handler, path=cfg.paths["ipath"], recursive=True) observer.start() try: while True: time.sleep(1) with concurrent.futures.ThreadPoolExecutor(max_workers=cfg.workers) as executor: futures = {} while not q.empty(): newFname = q.get() futures_to_work = executor.submit(self._start_func, newFname) futures[futures_to_work] = newFname for future in concurrent.futures.as_completed(futures): name = futures.pop(future) print(f"{name} completed") except KeyboardInterrupt: observer.stop() observer.join() This program is more complex than last one (and there is probalby few things wrong with it). Its purpose is to observe a directory (cfg.paths["ipath"]) and wait for TEST* file to appear. When it does, its name is added to queue. When queue is not empty a new future from concurrent.futures.ThreadPoolExecutor is created, the name is passed to _start_func method. This methods create a new object from AM.py and run it. Thought process behind it was that I wanted a program that wait for a TEST* file to appear and then do some operations on it, while being able to work on multiple files at once and working on them in the order they appear. AM.py looks like this: import subprocess class AM(): def __init__(self, fname): pass def test_func(self, fname): c = f"some_faulty_unix_program {fname}".split(" ") p = subprocess.run(c, capture_output=True, text = True) out, err = p.stdout, p.stderr if out: print(out) if err: print(err) return 1 return 0 def start(self, fname): res = self.test_func(fname) return res This program is running some unix program (on file detected in Controller.py) in new process. This program will produce errors more often than not (due to TEST* files not always being valid). I don't think it matters what this program is, but just in case this program is solve-field from astrometry.net and TEST* files are images of sky. 
This whole project is being ran as a service, like this: [Unit] Description = test astrometry service After = network.target [Service] Type = simple ExecStart = /bin/bash -c "/home/project/main.py" Restart = always RestartSec = 2 TimeoutStartSec = infinity User = root Group = users PrivateTmp = true Environment = "PATH=/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/home/project" [Install] WantedBy = multi-user.target When I enable this service and check it with systemctl status my_project.service it takes about 76.0M memory. I tend to leave this working for whole night (I have a system that takes a night-sky photo every minute, this project is for calculating astrometry of this night-sky photo). Next morning, when I test with systemctl status memory is around ~200-300M if there was no errors and ~3.5G if something went wrong (for example, I moved a config file used by this unix program so it produces an error right at the start). Why memory increases like that? Is there something wrong in my code that casues it, or there is something wrong with this unix program? | It's not clear to me where the memory leak is occurring. If it's in "some_faulty_unix_program" being run in AM.test_func, then you would need to find or create a replacement for it. But I believe there are some simplifications/optimizations that could be made to the code to reduce the likelihood of a memory leak if it is occurring elsewhere. First, I don't think that you need to be re-creating the multithreading pool over and over. It also appears that watchdog uses multithreading so you could use the more efficient queue.Queue instance instead of a managed queue. But ultimately I would think that with some refactoring of the Controller.py code so that your handler submits tasks to the multithreading pool you could eliminate an explicit queue altogether. Would the following work? import concurrent.futures from threading import Event import watchdog import watchdog.events import watchdog.observers import watchdog.utils.dirsnapshot from AM import AM import config as cfg class Handler(watchdog.events.PatternMatchingEventHandler): def __init__(self): # Create multithreading pool just once: self._executor = concurrent.futures.ThreadPoolExecutor(max_workers=cfg.workers) # Set the patterns for PatternMatchingEventHandler watchdog.events.PatternMatchingEventHandler.__init__(self, patterns=['TEST*'], ignore_directories=True, case_sensitive=False) def on_created(self, event): logging.info("AM Watchdog received created event - %s.", event.src_path) self._run_start_func(event.src_path) def on_moved(self, event): logging.info("AM Watchdog received modified event - %s.", event.src_path) self._run_start_func(event.src_path) def _run_start_func(self, newFname): future = self._executor.submit(self._start_func, newFname) future.result() # Wait for completion print(f"{newFname} completed") def _start_func(self, newFname): try: res = AM(newFname).start() return res except: return 1 class Controller: def __init__(self): pass def start(self): event_handler = Handler() observer = watchdog.observers.Observer() observer.schedule(event_handler, path=cfg.paths["ipath"], recursive=True) observer.start() event = Event() try: # Block until keyboard interrupt: event.wait() except KeyboardInterrupt: observer.stop() observer.join() | 5 | 1 |
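Independent of the refactor above, it can help to confirm where the growth actually comes from before changing code. A minimal tracemalloc sketch (stdlib only) that could be dropped into main.py; note it only sees Python-level allocations, so growth inside the external program or a C library will not show up here.
import tracemalloc

tracemalloc.start(10)               # keep 10 frames of traceback per allocation
baseline = tracemalloc.take_snapshot()

# ... let the service run for a while, then periodically:
current = tracemalloc.take_snapshot()
for stat in current.compare_to(baseline, "lineno")[:10]:
    print(stat)                     # top source lines by memory growth since the baseline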
76,846,418 | 2023-8-6 | https://stackoverflow.com/questions/76846418/twitter-x-login-using-selenium-triggers-anti-bot-detection | I am currently working on automating the login process for my Twitter account using Python and Selenium. However, I'm facing an issue where Twitter's anti-bot measures seem to detect the automation and immediately redirect me to the homepage when clicking the next button. I have attempted to use send_keys and ActionChains to create more human-like interactions, but the problem persists. Here's a simplified code snippet that illustrates my current approach: # imports... driver.get(URLS.login) username_input = driver.find_element(By.NAME, 'text') username_input.send_keys(username) next_button = driver.find_element(By.XPATH, '//div[@role="button"]') # These attempts all failed and return to the homepage next_button.click() next_button.send_keys(Keys.ENTER) ActionChains(driver).move_to_element(next_button).click().perform() What's weird is that besides manually clicking the next button, execute a click in console also works. I suspect that my automation attempts are still being detected by Twitter's security mechanisms, but I'm unsure about the root cause or how to bypass it successfully. | You may try this to log in to Twitter: import time from selenium import webdriver from selenium.webdriver import ChromeOptions, Keys from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.support.wait import WebDriverWait options = ChromeOptions() options.add_argument("--start-maximized") options.add_experimental_option("excludeSwitches", ["enable-automation"]) driver = webdriver.Chrome(options=options) url = "https://twitter.com/i/flow/login" driver.get(url) username = WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, 'input[autocomplete="username"]'))) username.send_keys("your_username") username.send_keys(Keys.ENTER) password = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.CSS_SELECTOR, 'input[name="password"]'))) password.send_keys("your_password") password.send_keys(Keys.ENTER) time.sleep(10) reference: https://github.com/ajeet214/Web_Scraping_with_Selenium/blob/main/twitter_login.py | 2 | 3 |
76,845,743 | 2023-8-6 | https://stackoverflow.com/questions/76845743/numpy-polynomial-fit-returns-unrealistic-values | I would like to fit a polynomial to a series of x/y-datapoints and evaluate it at arbitrary x-values. I am using the new numpy polynomial API: import matplotlib.pyplot as plt import numpy as np x = [50.0, 150.0, 250.0, 400.0, 600.0, 850.0, 1000.0] y = [3.2, 10.1, 16.3, 43.7, 69.1, 45.2, 10.8] mypol = np.polynomial.polynomial.Polynomial.fit( x = x, y = y, deg = 3, ) np.polyval(mypol.coef, x) Unfortunately, this returns values which are far greater than the original y-values: array([7.34024294e+06, 1.96122504e+08, 9.06027339e+08, 3.70658027e+09, 1.25012342e+10, 3.55289563e+10, 5.78446450e+10]) I played around with the order of the polynomial, but wasn't able to get anything useful from the function. What am I doing wrong? | Numpy's polyval should be evaluated at the x-points, not the y-points. On top of that, the coefficients should be given in order of highest to lowest degree. That said, mypol.coef does not necessarily return the expanded polynomial coefficients, so mypol.coef may be useless for np.polyval. Instead, the numpy.polynomial.polynomial.Polynomial object returned by numpy.polynomial.Polynomial.fit can be evaluated at the x-points of interest, i.e. you can do mypol(x). Just note that x here should be a numpy array. Also note that the fit polynomial is a least-squares fit, meaning it is not an exact interpolation. import matplotlib.pyplot as plt import numpy as np from numpy.polynomial import Polynomial x = np.array([50.0, 150.0, 250.0, 400.0, 600.0, 850.0, 1000.0]) y = np.array([3.2, 10.1, 16.3, 43.7, 69.1, 45.2, 10.8]) mypol = Polynomial.fit(x, y, 3) x_fine = np.linspace(x.min(), x.max(), 100) y_fine = mypol(x_fine) fig, ax = plt.subplots() ax.scatter(x, y, label="data") ax.plot(x_fine, y_fine, "tab:orange", label="fit") ax.set(xlabel="x", ylabel="y", title="NumPy Polynomial Fit") ax.legend() | 2 | 2 |
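Following on from the answer: if the expanded coefficients are needed after all (for example to use with polyval), convert() maps the internally scaled fit back to the data's own domain. A short sketch:
import numpy as np
from numpy.polynomial import Polynomial

x = np.array([50.0, 150.0, 250.0, 400.0, 600.0, 850.0, 1000.0])
y = np.array([3.2, 10.1, 16.3, 43.7, 69.1, 45.2, 10.8])

mypol = Polynomial.fit(x, y, 3)

p = mypol.convert()                 # same cubic, expressed directly in the x variable
print(p.coef)                       # coefficients in increasing order of degree

# evaluates identically to calling mypol(x)
print(np.polynomial.polynomial.polyval(x, p.coef))
print(mypol(x))
# the legacy np.polyval expects the reversed (highest-degree-first) order: p.coef[::-1]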
76,847,159 | 2023-8-6 | https://stackoverflow.com/questions/76847159/is-it-possible-to-open-browser-with-webbrowser-open-and-somehow-take-over-by-sel | I'm wondering if it's possible to use the webbrowser.open() method to open the browser, get its handle (?), and then use it in the driver = webdriver.Chrome() command somehow (webdriver is from selenium)? Is it feasible at all? I'm using Python. | No, webbrowser.open() is from another package unrelated to Selenium. To interact through Selenium you have to use a Selenium-driven, WebDriver-initiated browsing context. | 2 | 3
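If the underlying goal is to drive a browser window that was opened outside Selenium, the closest workable pattern is attaching to a Chrome instance that was started with remote debugging enabled (not via webbrowser.open). A hedged sketch; the port and profile directory are arbitrary examples.
# First start Chrome yourself, e.g.:
#   chrome --remote-debugging-port=9222 --user-data-dir=/tmp/chrome-profile
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_experimental_option("debuggerAddress", "127.0.0.1:9222")

driver = webdriver.Chrome(options=options)   # attaches to the already-running Chrome
print(driver.current_url)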
76,822,673 | 2023-8-2 | https://stackoverflow.com/questions/76822673/langchain-querying-a-document-and-getting-structured-output-using-pydantic-with | I am trying to get a LangChain application to query a document that contains different types of information. To facilitate my application, I want to get a response in a specific format, so I am using Pydantic to structure the data as I need, but I am running into an issue. Sometimes ChatGPT doesn't respect the format from my Pydantic structure, and so I get an exception raised and my program stops. Sure, I can handle the exception, but I much rather that ChatGPT respects the format, and I wonder if I am doing something wrong. More specifically: The date is not formatted in the right way from ChatGPT since it returns the date from the document as it found it, and not in a datetime.date format. The Enum Field from Pydantic also doesn't work well, as sometimes the documents have Lastname, and not Surname, and ChatGPT formats it as Lastname and it doesn't transform it to Surname. Lastly, I do not know if I am using the chains correctly because I keep getting confused with all the different examples in the LangChain documentation. After loading all the necessary packages, this is the code I have: FILE_PATH = 'foo.pdf' class NameEnum(Enum): Name = 'Name' Surname = 'Surname' class DocumentSchema(BaseModel): date: datetime.date = Field(..., description='The date of the doc') name: NameEnum = Field(..., description='Is it name or surname?') def main(): loader = PyPDFLoader(FILE_PATH) data = loader.load() text_splitter = RecursiveCharacterTextSplitter(chunk_size=200, chunk_overlap=10) all_splits = text_splitter.split_documents(data) vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings()) llm = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0) question = """What is the date on the document? Is it about a name or surname? """ doc_prompt = PromptTemplate( template="Content: {page_content}\nSource: {source}", input_variables=["page_content", "source"], ) prompt_messages = [ SystemMessage( content=( "You are a world class algorithm for extracting information in structured formats." ) ), HumanMessage(content="Answer the questions using the following context"), HumanMessagePromptTemplate.from_template("{context}"), HumanMessagePromptTemplate.from_template("Question: {question}"), HumanMessage( content="Tips: Make sure to answer in the correct format" ), ] chain_prompt = ChatPromptTemplate(messages=prompt_messages) chain = create_structured_output_chain(output_schema=DocumentSchema, llm=llm, prompt=chain_prompt) final_qa_chain_pydantic = StuffDocumentsChain( llm_chain=chain, document_variable_name="context", document_prompt=doc_prompt, ) retrieval_qa_pydantic = RetrievalQA( retriever=vectorstore.as_retriever(), combine_documents_chain=final_qa_chain_pydantic ) data = retrieval_qa_pydantic.run(question) Depending on the file that I am checking, executing the script will raise an error because of the formats from Pydantic not being respected by the return of ChatGPT. What am I missing here? Thank you! | I managed to solve my issue and here is what I did to solve them. try/except block First, I added a try/except block around the chain execution code to catch those naughty errors without stopping my execution. Cleaning vectorstore I also noticed that the vectorstore variable was not getting "cleaned" on each run I would do for different documents on the same execution so that I would have old data on new documents. 
I realized that I needed to clean the vectorstore on each run: try: # Retrieve the data data = retrieval_qa_pydantic.run(question) # Delete the embeddings for the next run vectorstore.delete() except error_wrappers.ValidationError as e: log.error(f'Error parsing file: {e}') else: return data return None Tips about formatting Then, I noticed I needed to be more explicit with the data formatting. I modified the instructions to fit my requirements with extra help like this: HumanMessage( content="Tips: Make sure to answer in the correct format. Dates should be in the format YYYY-MM-DD." ), The key was the Tips part of the message. From that moment on, I had no more formatting problems regarding the date. Enum with None To solve the issue of the Enum, I modified the class to account for a None value, meaning when the LLM cannot find the info I need, it sets the variable to None. This is how I fixed it: class NameEnum(Enum): Name = 'Name' Surname = 'Surname' NON = None Last but not least, I noticed that I was getting a lot of wrong information from my documents, so I had to tweak some extra things: Bigger splits and gpt-4 I increased the splits to 500 instead of 200, and to improve the accuracy of my task, I used gpt-4 as a model and not gpt-3.5-turbo anymore. By changing the size of the chunks and using gpt-4, I removed any inconsistencies, and the data extraction works almost flawlessly. text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=10) all_splits = text_splitter.split_documents(data) vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings()) llm = ChatOpenAI(model_name='gpt-4', temperature=0) I hope these tips help someone in the future. | 4 | 1 |
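One more option for the formatting problems described above (date strings in odd formats, Lastname vs Surname): normalize the values on the Pydantic side instead of relying only on the prompt. A sketch assuming pydantic v1-style validators; the accepted date formats and synonym list are illustrative assumptions.
import datetime
from enum import Enum
from pydantic import BaseModel, Field, validator

class NameEnum(Enum):
    Name = 'Name'
    Surname = 'Surname'

class DocumentSchema(BaseModel):
    date: datetime.date = Field(..., description='The date of the doc')
    name: NameEnum = Field(..., description='Is it name or surname?')

    @validator('date', pre=True)
    def parse_date(cls, v):
        # accept a few common spellings before pydantic's own parsing kicks in
        if isinstance(v, str):
            for fmt in ('%Y-%m-%d', '%d/%m/%Y', '%d %B %Y'):
                try:
                    return datetime.datetime.strptime(v, fmt).date()
                except ValueError:
                    pass
        return v

    @validator('name', pre=True)
    def normalize_name(cls, v):
        # map synonyms the model tends to produce onto the enum value
        if isinstance(v, str) and v.strip().lower() in ('surname', 'lastname', 'last name'):
            return 'Surname'
        return v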
76,845,690 | 2023-8-6 | https://stackoverflow.com/questions/76845690/how-to-expand-columns-from-web-scraping | I am trying to web scrape a site with tis code: import requests import pandas as pd import numpy as np from bs4 import BeautifulSoup import json import numpy as np url = 'https://www.quantalys.com/Categories' soup = BeautifulSoup(requests.get(url).content, "html.parser") data = soup.select_one("#DataCatsWithPerfs")["value"] data = json.loads(data) df = pd.DataFrame(data) df but when I tried to print the df (just the last 2 columns), I get: dtRetMonth perfCalendaire 0 2023-07-01 [{'nYear': 2015, 'nRet': 0.1204}, {'nYear': 20... 1 2023-07-01 [{'nYear': 2015, 'nRet': 0.143}, {'nYear': 201... 2 2023-07-01 [{'nYear': 2015, 'nRet': 0.2041}, {'nYear': 20... 3 2023-07-01 [{'nYear': 2015, 'nRet': 0.0155}, {'nYear': 20... 4 2023-07-01 [{'nYear': 2015, 'nRet': -0.1601}, {'nYear': 2... How to expand the perfCalendaire columns into '2015', '2016',...columns ? I would like to expand columns after web scraping operation. | import requests import pandas as pd from bs4 import BeautifulSoup import json url = 'https://www.quantalys.com/Categories' soup = BeautifulSoup(requests.get(url).content, "html.parser") data = soup.select_one("#DataCatsWithPerfs")["value"] data = json.loads(data) df = pd.DataFrame(data) for index, row in df.iterrows(): for item in row['perfCalendaire']: year = str(item['nYear']) df.loc[index, year] = item['nRet'] df = df.drop(columns=['perfCalendaire']) df = df.dropna(axis=1, how='all') print(df) years_columns = [str(year) for year in range(2015, 2022)] print(df[years_columns]) | 3 | 2 |
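A vectorized alternative to the iterrows loop above, building all the year columns in one go; the tiny sample frame stands in for the scraped df.
import pandas as pd

df = pd.DataFrame({
    'dtRetMonth': ['2023-07-01', '2023-07-01'],
    'perfCalendaire': [
        [{'nYear': 2015, 'nRet': 0.1204}, {'nYear': 2016, 'nRet': 0.05}],
        [{'nYear': 2015, 'nRet': 0.143}],
    ],
})

years = (
    df['perfCalendaire']
    .apply(lambda perfs: {str(p['nYear']): p['nRet'] for p in perfs})
    .apply(pd.Series)          # one column per year, NaN where a year is missing
)
df = pd.concat([df.drop(columns=['perfCalendaire']), years], axis=1)
print(df)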
76,843,614 | 2023-8-5 | https://stackoverflow.com/questions/76843614/python-move-parts-of-rows-to-previous-row-based-on-matching-rows-from-another-da | I have two data frames df1: Name Month Amount Status 0 Bill Apr 0 1 Bill May 0 2 Bill Jun 100 member 3 Sally Apr 0 4 Sally May 0 5 Sally Jun 200 member 6 Tom Apr 0 7 Tom May 300 member 8 Tom Jun 0 and df2: Name Month 0 Bill Jun 1 Tom May I am looking to update df1 whenever there is a match in df2 on Name and Month and move the Amount and Status to the previous Month. It can be assumed there is always a previous Month to move to for each person and that the rows are already in the correct order by Name and Month. The intended result would be as follows: Name Month Amount Status 0 Bill Apr 0 1 Bill May 100 member 2 Bill Jun 0 3 Sally Apr 0 4 Sally May 0 5 Sally Jun 200 member 6 Tom Apr 300 member 7 Tom May 0 8 Tom Jun 0 I believe there is a way to do this with itterows() but is there a more direct approach to use here? | Try: # merge to find rows with Name, Month in df2 exist = df1[['Name','Month']].merge(df2.assign(exist=1), how='left')['exist'].notna() # find the previous rows prev_rows = exist.groupby(df1['Name']).shift(-1, fill_value=False) # fill the previous rows df1.loc[prev_rows, ['Amount','Status']] = df1.loc[exist, ['Amount','Status']].to_numpy() # remove the given rows df1.loc[exist, ['Amount','Status']] = [0, None] Output: Name Month Amount Status 0 Bill Apr 0 None 1 Bill May 100 member 2 Bill Jun 0 None 3 Sally Apr 0 None 4 Sally May 0 None 5 Sally Jun 200 member 6 Tom Apr 300 member 7 Tom May 0 None 8 Tom Jun 0 None | 2 | 2 |
76,842,824 | 2023-8-5 | https://stackoverflow.com/questions/76842824/auxilary-space-complexity-across-iterations | Suppose we have the function below: def ReverseStr(s, k): """ s: list of characters (length n) k: integer """ for i in range(0, len(s), 2*k): s = s[:i] + s[i:i+k][::-1] + s[i+k:] return s In this function, each iteration involves creating four distinct sublists. First, we create a sub-list of length i. Then, we create a sub-list of size k; we create another sub-list of size k when we reverse this sub-list of size k. Finally, we create a sub-list of size n - (i + k). So, in total, we create sub-lists with a total space of i + k + k + n - (i + k) or, simplified, n + k. Space complexity is defined as the maximum amount of space that a function will occupy at any given point. While this definition makes sense to me, I am struggling to understand what happens on successive iterations. For instance, suppose we are on the fourth iteration. Are the sub-lists that were created on the first, second, and third iterations still being stored in memory somewhere? If so, then our space complexity analysis must factor in memory accumulation over successive iterations. If not, then we know that the maximum space consumed at any given point occurs during a single, arbitrary iteration; O(n+k). | each iteration involves creating four distinct sublists. In fact, there are also the lists that are created by executing the + operator (list concatenation) So, in total, we create sub-lists with a total space of i + k + k + n - (i + k) or, simplified, n + k. When determining auxiliary space complexity, it is common practice to discard the memory that can be garbage collected, even though in practice the garbage collector may not free the memory immediately. We start out with the memory for s, which has n elements. But this does not belong to auxiliary space, so we can discard it. Also the memory for the list that will be returned at the end could be excluded from what we call auxiliary memory (it depends how you define it). The order of execution is like this: s[:i] This allocates memory for a list of length i s[i:i+k] This allocates memory for a list of length k ...[::-1] This allocates memory for another list of length k, after this allocation, the original list (of the previous step) is no longer referenced, so it can be garbage collected (after a peak of 2k). We could therefore conclude this is a break-even operation that does not increase the memory used by referenced objects. ... + ... The first two terms are concatenated, which creates a list of length i + k. But also here, the operands of this concatenation are no longer referenced, so after a temporary peak in terms of memory this is rather a break-even operation (ignoring the overhead for a list -- we went from two lists to just one). s[i+k:] This allocates memory for a list of length n - i - k. So we are at a total auxiliary memory of n now -- in terms of number of referenced objects by these lists, again ignoring the overhead for having 2 active lists. ... + ... This is the second concatenation. Once allocated, we have a peak of 2n, and then the memory for the operands will become garbage collectable, and only the resulting list of n elements remains usable. 
Then finally the assignment to s kicks in, which disposes of the original value of s (except when it was the original version that the caller has -- in the first iteration), reducing the "active" memory to n object references, which arguably stop being "auxiliary", as they now are referenced by s which eventually will be the returned list. So the peak is at 2n object references of auxiliary memory, excluding the lists overhead for at most 2 lists. The auxiliary memory complexity is thus O(2n) = O(n).
Improvement
You could reduce the auxiliary memory complexity to O(k) as follows:
def ReverseStr(s, k):
    for i in range(0, len(s), 2*k):
        s[i:i+k] = s[i+k-1:i-1 if i else None:-1]
    return s
Here s is modified in place. The reversed slice is created in one operation, requiring O(k) auxiliary memory, and once the assignment is completed this auxiliary memory is discarded again. If it is required that the caller's list is not mutated, then just start by making a copy which will distinguish the returned list from the input list (both not counted as auxiliary memory):
    s = s[:]
You can bring down the auxiliary memory usage even more by copying values one by one, not in slices:
def ReverseStr(s, k):
    for i in range(0, len(s), 2*k):
        k = min(k, len(s) - i)
        for j in range(k//2):
            s[i+k-1-j], s[i+j] = s[i+j], s[i+k-1-j]  # swap
    return s
This has an auxiliary memory complexity of O(1). But this is only interesting in theory: it will run slower than the above implementation because it performs the inner iteration in Python code instead of the slicing performed by compiled code. | 3 | 3
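The peak figures above can be checked empirically with tracemalloc's peak counter; expect proportionality rather than exact byte counts, since allocator overhead is included. The sketch assumes the original ReverseStr and the first in-place version above have been given distinct names (reverse_str_copying and reverse_str_inplace, say).
import tracemalloc

def measure_peak(func, *args):
    tracemalloc.start()
    func(*args)
    _, peak = tracemalloc.get_traced_memory()   # (current, peak) in bytes
    tracemalloc.stop()
    return peak

s = list(range(100_000))
k = 1_000

print(measure_peak(reverse_str_copying, s[:], k))   # peak grows with n
print(measure_peak(reverse_str_inplace, s[:], k))   # peak grows with k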
76,838,859 | 2023-8-4 | https://stackoverflow.com/questions/76838859/dataframe-column-with-quoted-csv-to-named-dataframe-columns | I am pulling some JSON formatted log data out of my SEIM and into a pandas dataframe. I am able to easily convert the JSON into multiple columns within the dataframe, but there is a "message" field in the JSON that contains a quoted CSV, like this. # dummy data dfMyData = pd.DataFrame({"_raw": [\ """{"timestamp":1691096387000,"message":"20230803 20:59:47,ip-123-123-123-123,mickey,321.321.321.321,111111,10673010,type,,'I am a, quoted, string, with commas,',0,,","logstream":"Blah1","loggroup":"group 1"}""", """{"timestamp":1691096386000,"message":"20230803 21:00:47,ip-456-456-456-456,mouse,654.654.654.654,222222,10673010,type,,'I am another quoted string',0,,","logstream":"Blah2","loggroup":"group 2"}""" ]}) # Column names for the _raw.message field that is generated. MessageColumnNames = ["Timestamp","dest_host","username","src_ip","port","number","type","who_knows","message_string","another_number","who_knows2","who_knows3"] # Convert column to json object/dict dfMyData['_raw'] = dfMyData['_raw'].map(json.loads) # convert JSON into columns within the dataframe dfMyData = pd.json_normalize(dfMyData.to_dict(orient='records')) I've seen this done before with str.split() to split on columns and then concat it back to the original dataframe, however the str.split method doesn't handle quoted values within the CSV. pd.read_csv can handle the quoted CSV correctly, but I can't figure out how to apply it across the dataframe and expand the output of that into new dataframe columns. Additionally, when I split dfMyData['_raw.message'] out into new columns, I'd also like to supply a list of column names for the data and have the new columns be created with those names. Anyone know of an easy way to split a quoted CSV string in a dataframe column into new named columns within the dataframe? | Update Actually all you need is to pass your data into a csv-reader, which in turn is an appropriate data type for pandas.DataFrame: pd.DataFrame(csv.reader(dfMyData['_raw.message'], quotechar="'"), columns=columns) Previous answer We can try to convert the data into a csv and read them back with appropriate parameters: import csv from tempfile import TemporaryFile seq = dfMyData.iloc[:,1] # column of interest in the original data columns = [*'ABCDEFGHIJKL'] # custom names of future data columns with TemporaryFile() as file: seq.to_csv( file, sep='\N{unit separator}', header=False, index=False, quoting=csv.QUOTE_NONE ) file.seek(0) # read data from the start df = pd.read_csv( file, header=None, names=columns, quotechar="\'" ) print(df) Notes: quoting=csv.QUOTE_NONE to avoid \" at the ends of each line sep='\N{unit separator}' to avoid confusion with commas quotechar="\'" when reading back because of a specific quoting inside the lines since we are dumping a sequence without indexes, the '\N{unit separator}' delimiter will never make it into the final data Transformed data: | 3 | 1 |
76,825,015 | 2023-8-3 | https://stackoverflow.com/questions/76825015/cython-execution-speed-vs-msvc-and-gcc-versions | Intro I have a fairly simple Cython module - which I simplified even further for the specific tests I have carried on. This module has only one class, which has only one interesting method (named run): this method accepts as an input a Fortran-ordered 2D NumPy array and two 1D NumPy arrays, and does some very, very simple things on those (see below for code). For the sake of benchmarking, I have compiled the exact same module with MSVC, GCC 8, GCC 11, GCC 12 and GCC 13 on Windows 10 64 bit, Python 3.9.10 64 bit, Cython 0.29.32. All the GCC compilers I have obtained from the excellent Brecht Sanders GitHub page (https://github.com/brechtsanders). Main Question The overarching question of this very long post is: I am just curious to know if anyone has any explanation regarding why GCC12 and GCC13 are so much slower than GCC11 (which is the fastest of all). Looks like performances are going down at each release of GCC, rather than getting better... Benchmarks In the benchmarking, I simply vary the array dimensions of the 2D and 1D arrays (m and n) and the number on nonzero entries in the 2D and 1D arrays. I repeat the run method 20 times per compiler version, per set of m and n and nonzero entries. Optimization settings I am using: MVSC MSVC_EXTRA_COMPILE_ARGS = ['/O2', '/GS-', '/fp:fast', '/Ob2', '/nologo', '/arch:AVX512', '/Ot', '/GL'] GCC GCC_EXTRA_COMPILE_ARGS = ['-Ofast', '-funroll-loops', '-flto', '-ftree-vectorize', '-march=native', '-fno-asynchronous-unwind-tables'] GCC_EXTRA_LINK_ARGS = ['-flto'] + GCC_EXTRA_COMPILE_ARGS What I am observing is the following: MSVC is by far the slowest at executing the benchmark (why would that be on Windows?) 
The progression GCC8 -> GCC11 is promising, as GCC11 is faster than GCC8 GCC12 and GCC13 are both significantly slower than GCC11, with GCC13 being the worst (twice as slow as GCC11 and much worse than GCC12) Table of Results: Runtimes are in milliseconds (ms) Graph (NOTE: Logarithmic Y axis!!): Runtimes are in milliseconds (ms) Code Cython file: ############################################################################### import numpy as np cimport numpy as np import cython from cython.view cimport array as cvarray from libc.float cimport FLT_MAX DTYPE_float = np.float32 ctypedef np.float32_t DTYPE_float_t cdef float SMALL = 1e-10 cdef int MAXSIZE = 1000000 cdef extern from "math.h" nogil: cdef float fabsf(float x) ############################################################################### @cython.final cdef class CythonLoops: cdef int m, n cdef int [:] col_starts cdef int [:] row_indices cdef double [:] x def __cinit__(self): self.m = 0 self.n = 0 self.col_starts = cvarray(shape=(MAXSIZE,), itemsize=sizeof(int), format='i') self.row_indices = cvarray(shape=(MAXSIZE,), itemsize=sizeof(int), format='i') self.x = cvarray(shape=(MAXSIZE,), itemsize=sizeof(double), format='d') @cython.boundscheck(False) # turn off bounds-checking for entire function @cython.wraparound(False) # turn off negative index wrapping for entire function @cython.nonecheck(False) @cython.initializedcheck(False) @cython.cdivision(True) cpdef run(self, DTYPE_float_t[::1, :] matrix, DTYPE_float_t[:] ub_values, DTYPE_float_t[:] priority): cdef Py_ssize_t i, j, m, n cdef int nza, collen cdef double too_large, ok, obj cdef float ub, element cdef int [:] col_starts = self.col_starts cdef int [:] row_indices = self.row_indices cdef double [:] x = self.x m = matrix.shape[0] n = matrix.shape[1] self.m = m self.n = n nza = 0 collen = 0 for i in range(n): for j in range(m+1): if j == 0: element = priority[i] else: element = matrix[j-1, i] if fabsf(element) < SMALL: continue if j == 0: obj = <double>element # Do action 1 with external library else: collen = nza + 1 col_starts[collen] = i+1 row_indices[collen] = j x[collen] = <double>element nza += 1 ub = ub_values[i] if ub > FLT_MAX: too_large = 0.0 # Do action 2 with external library elif ub > SMALL: ok = <double>ub # Do action 3 with external library # Use x, row_indices and col_starts in the external library Setup file: I use the following to compile it: python setup.py build_ext --inplace --compiler=mingw32 gcc13 Where the last argument is the compiler I want to test #!/usr/bin/env python from setuptools import setup from setuptools import Extension from Cython.Build import cythonize from Cython.Distutils import build_ext import numpy as np import os import shutil import sys import getpass MODULE = 'loop_cython_%s' GCC_EXTRA_COMPILE_ARGS = ['-Ofast', '-funroll-loops', '-flto', '-ftree-vectorize', '-march=native', '-fno-asynchronous-unwind-tables'] GCC_EXTRA_LINK_ARGS = ['-flto'] + GCC_EXTRA_COMPILE_ARGS MSVC_EXTRA_COMPILE_ARGS = ['/O2', '/GS-', '/fp:fast', '/Ob2', '/nologo', '/arch:AVX512', '/Ot', '/GL'] MSVC_EXTRA_LINK_ARGS = MSVC_EXTRA_COMPILE_ARGS def remove_builds(kind): for folder in ['build', 'bin']: if os.path.isdir(folder): if folder == 'bin': continue shutil.rmtree(folder, ignore_errors=True) if os.path.isfile(MODULE + '_%s.c'%kind): os.remove(MODULE + '_%s.c'%kind) def setenv(extra_args, doset=True, path=None, kind='gcc8'): flags = '' if doset: flags = ' '.join(extra_args) for key in ['CFLAGS', 'FFLAGS', 'CPPFLAGS']: os.environ[key] = flags user = 
getpass.getuser() if doset: path = os.environ['PATH'] if kind == 'gcc8': os.environ['PATH'] = r'C:\Users\%s\Tools\MinGW64_8.0\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\Users\%s\WinPython39\WPy64-39100\python-3.9.10.amd64;'%(user, user) elif kind == 'gcc11': os.environ['PATH'] = r'C:\Users\%s\Tools\MinGW64\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\Users\%s\WinPython39\WPy64-39100\python-3.9.10.amd64;'%(user, user) elif kind == 'gcc12': os.environ['PATH'] = r'C:\Users\%s\Tools\MinGW64_12.2.0\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\Users\%s\WinPython39\WPy64-39100\python-3.9.10.amd64;'%(user, user) elif kind == 'gcc13': os.environ['PATH'] = r'C:\Users\%s\Tools\MinGW64_13.2.0\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\Users\%s\WinPython39\WPy64-39100\python-3.9.10.amd64;'%(user, user) elif kind == 'msvc': os.environ['PATH'] = r'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.35.32215\bin\Hostx64\x64;C:\WINDOWS\system32;C:\WINDOWS;C:\Users\J0514162\WinPython39\WPy64-39100\python-3.9.10.amd64;C:\Program Files (x86)\Windows Kits\10\bin\10.0.22000.0\x64' os.environ['LIB'] = r'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.35.32215\lib\x64;C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\um\x64;C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\ucrt\x64' os.environ["DISTUTILS_USE_SDK"] = '1' os.environ["MSSdk"] = '1' else: os.environ['PATH'] = path return path class CustomBuildExt(build_ext): def build_extensions(self): # Override the compiler executables. Importantly, this # removes the "default" compiler flags that would # otherwise get passed on to to the compiler, i.e., # distutils.sysconfig.get_var("CFLAGS"). self.compiler.set_executable("compiler_so", "gcc -mdll -O -Wall -DMS_WIN64") self.compiler.set_executable("compiler_cxx", "g++ -O -Wall -DMS_WIN64") self.compiler.set_executable("linker_so", "gcc -shared -static") self.compiler.dll_libraries = [] build_ext.build_extensions(self) if __name__ == '__main__': os.system('cls') kind = None for arg in sys.argv: if arg.strip() in ['gcc8', 'gcc11', 'gcc12', 'gcc13', 'msvc']: kind = arg sys.argv.remove(arg) break base_file = os.path.join(os.getcwd(), MODULE[0:-3]) source = base_file + '.pyx' target = base_file + '_%s.pyx'%kind shutil.copyfile(source, target) if kind == 'msvc': extra_compile_args = MSVC_EXTRA_COMPILE_ARGS[:] extra_link_args = MSVC_EXTRA_LINK_ARGS[:] + ['/MANIFEST'] else: extra_compile_args = GCC_EXTRA_COMPILE_ARGS[:] extra_link_args = GCC_EXTRA_LINK_ARGS[:] path = setenv(extra_compile_args, kind=kind) remove_builds(kind) define_macros = [('WIN32', 1)] nname = MODULE%kind include_dirs = [np.get_include()] if kind == 'msvc': include_dirs += [r'C:\Program Files (x86)\Windows Kits\10\Include\10.0.22000.0\ucrt', r'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.35.32215\include', r'C:\Program Files (x86)\Windows Kits\10\Include\10.0.22000.0\shared'] extensions = [ Extension(nname, [nname + '.pyx'], extra_compile_args=extra_compile_args, extra_link_args=extra_link_args, include_dirs=include_dirs, define_macros=define_macros)] # build the core extension(s) setup_kwargs = {'ext_modules': cythonize(extensions, compiler_directives={'embedsignature' : False, 'boundscheck' : False, 'wraparound' : False, 'initializedcheck': False, 'cdivision' : True, 'language_level' : '3str', 'nonecheck' : False}, force=True, cache=False, quiet=False)} if kind != 'msvc': setup_kwargs['cmdclass'] = {'build_ext': CustomBuildExt} setup(**setup_kwargs) setenv([], False, path) 
remove_builds(kind) Test code: import os import numpy import time import loop_cython_msvc as msvc import loop_cython_gcc8 as gcc8 import loop_cython_gcc11 as gcc11 import loop_cython_gcc12 as gcc12 import loop_cython_gcc13 as gcc13 # M N NNZ(matrix) NNZ(priority) NNZ(ub) DIMENSIONS = [(1661 , 2608 , 3560 , 375 , 2488 ), (2828 , 3512 , 4333 , 413 , 2973 ), (780 , 985 , 646 , 23 , 984 ), (799 , 1558 , 1883 , 301 , 1116 ), (399 , 540 , 388 , 44 , 517 ), (10545, 10486, 14799 , 1053 , 10041), (3369 , 3684 , 3684 , 256 , 3242 ), (2052 , 5513 , 4772 , 1269 , 3319 ), (224 , 628 , 1345 , 396 , 594 ), (553 , 1475 , 1315 , 231 , 705 )] def RunTest(): print('M N NNZ MSVC GCC 8 GCC 11 GCC 12 GCC 13') for m, n, nnz_mat, nnz_priority, nnz_ub in DIMENSIONS: print('%-6d %-6d %-8d'%(m, n, nnz_mat), end='') for solver, label in zip([msvc, gcc8, gcc11, gcc12, gcc13], ['MSVC', 'GCC 8', 'GCC 11', 'GCC 12', 'GCC 13']): numpy.random.seed(123456) size = m*n idxes = numpy.arange(size) matrix = numpy.zeros((size, ), dtype=numpy.float32) idx_mat = numpy.random.choice(idxes, nnz_mat) matrix[idx_mat] = numpy.random.uniform(0, 1000, size=(nnz_mat, )) matrix = numpy.asfortranarray(matrix.reshape((m, n))) idxes = numpy.arange(m) priority = numpy.zeros((m, ), dtype=numpy.float32) idx_pri = numpy.random.choice(idxes, nnz_priority) priority[idx_pri] = numpy.random.uniform(0, 1000, size=(nnz_priority, )) idxes = numpy.arange(n) ub_values = numpy.inf*numpy.ones((n, ), dtype=numpy.float32) idx_ub = numpy.random.choice(idxes, nnz_ub) ub_values[idx_ub] = numpy.random.uniform(0, 1000, size=(nnz_ub, )) solver = solver.CythonLoops() time_tot = [] for i in range(20): start = time.perf_counter() solver.run(matrix, ub_values, priority) elapsed = time.perf_counter() - start time_tot.append(elapsed*1e3) print('%-8.4g'%numpy.mean(time_tot), end=' ') print() if __name__ == '__main__': os.system('cls') RunTest() EDIT After @PeterCordes comments, I have changed the optimization flags to this: MSVC_EXTRA_COMPILE_ARGS = ['/O2', '/GS-', '/fp:fast', '/Ob2', '/nologo', '/arch:AVX512', '/Ot', '/GL', '/QIntel-jcc-erratum'] GCC_EXTRA_COMPILE_ARGS = ['-Ofast', '-funroll-loops', '-flto', '-ftree-vectorize', '-march=native', '-fno-asynchronous-unwind-tables', '-Wa,-mbranches-within-32B-boundaries'] MSVC appears to be marginally faster than before (between 5% and 10%), but GCC12 and GCC13 are slower than before (between 3% and 20%). Below a graph with the results on the largest 2D matrix: Note: "Current" means with the latest optimization flags suggested by @PeterCordes, "Previous" is the original set of flags. | This is a partial answer providing the generated C code produced by Cython once it has been simplified a bit to be shorter, more human-readable and easy to compile without any Cython, Python, or NumPy dependencies (the transformations are not expected to drastically impact the timings). It also show the generated assembly code and few comments. 
Here is the C code: #include <stdlib.h> #include <stdint.h> #include <math.h> #include <float.h> /* Remove external dependencies */ typedef int64_t Py_ssize_t; typedef void memoryview_obj; typedef void* VTable; typedef struct { struct memoryview_obj *memview; char *data; Py_ssize_t shape[8]; Py_ssize_t strides[8]; Py_ssize_t suboffsets[8]; } memviewslice; struct Self { /* PyObject_HEAD */ struct VTable* tab; int m; int n; memviewslice col_starts; memviewslice row_indices; memviewslice x; }; static float test_SMALL = 1e-10; extern void INC_MEMVIEW(memviewslice* slice, int count); extern void XDEC_MEMVIEW(memviewslice* slice, int count); void run(struct Self* self, memviewslice matrix, memviewslice ub_values, memviewslice priority) { Py_ssize_t i; Py_ssize_t j; Py_ssize_t m; Py_ssize_t n; int nza; int collen; double too_large; double ok; double obj; float ub; float element; memviewslice col_starts = { 0, 0, { 0 }, { 0 }, { 0 } }; memviewslice row_indices = { 0, 0, { 0 }, { 0 }, { 0 } }; memviewslice x = { 0, 0, { 0 }, { 0 }, { 0 } }; memviewslice t_1 = { 0, 0, { 0 }, { 0 }, { 0 } }; memviewslice t_2 = { 0, 0, { 0 }, { 0 }, { 0 } }; Py_ssize_t t_3; Py_ssize_t t_4; Py_ssize_t t_6; Py_ssize_t t_7; int t_9; Py_ssize_t t_10; Py_ssize_t t_11; t_1 = self->col_starts; INC_MEMVIEW(&t_1, 1); col_starts = t_1; t_1.memview = NULL; t_1.data = NULL; t_1 = self->row_indices; INC_MEMVIEW(&t_1, 1); row_indices = t_1; t_1.memview = NULL; t_1.data = NULL; t_2 = self->x; INC_MEMVIEW(&t_2, 1); x = t_2; t_2.memview = NULL; t_2.data = NULL; m = matrix.shape[0]; n = matrix.shape[1]; self->m = m; self->n = n; nza = 0; collen = 0; t_3 = n; t_4 = t_3; for (i = 0; i < t_4; i++) { t_6 = m + 1; t_7 = t_6; for (j = 0; j < t_7; j++) { t_9 = (j == 0) != 0; if (t_9) { t_10 = i; element = (*((float*) ( /* dim=0 */ (priority.data + t_10 * priority.strides[0]) ))); goto L7; } { t_10 = j - 1; t_11 = i; element = (*((float*) ( /* dim=1 */ (( /* dim=0 */ ((char *) (((float*) matrix.data) + t_10)) ) + t_11 * matrix.strides[1]) ))); } L7:; t_9 = (fabsf(element) < test_SMALL) != 0; if (t_9) { goto L5_continue; } t_9 = ((j == 0) != 0); if (t_9) { obj = (double)element; goto L9; } { collen = nza + 1; t_11 = collen; *((int *) ( /* dim=0 */ (col_starts.data + t_11 * col_starts.strides[0]) )) = i + 1; t_11 = collen; *((int *) ( /* dim=0 */ (row_indices.data + t_11 * row_indices.strides[0]) )) = j; t_11 = collen; *((double *) ( /* dim=0 */ (x.data + t_11 * x.strides[0]) )) = ((double)element); nza = nza + 1; } L9:; L5_continue:; } t_11 = i; ub = (*((float*) ( /* dim=0 */ (ub_values.data + t_11 * ub_values.strides[0]) ))); t_9 = (ub > FLT_MAX) != 0; if (t_9) { too_large = 0.0; goto L10; } t_9 = (ub > test_SMALL) != 0; if (t_9) { ok = (double)ub; } L10:; } XDEC_MEMVIEW(&col_starts, 1); XDEC_MEMVIEW(&row_indices, 1); XDEC_MEMVIEW(&x, 1); } Here is the resulting main loop compiled in assembly using GCC 11.4 (with the compiler option -Ofast -funroll-loops -ftree-vectorize -march=skylake -fno-asynchronous-unwind-tables): .L4: lea rdi, [r12-1] xor eax, eax and edi, 3 je .L5 vmovss xmm4, DWORD PTR [rcx] mov eax, 1 vandps xmm5, xmm4, xmm1 vcomiss xmm9, xmm5 ja .L25 inc edx movsx r14, edx mov r15, r9 imul r15, r14 vcvtss2sd xmm6, xmm4, xmm4 mov DWORD PTR [r8+r15], esi mov r15, r11 imul r15, r14 imul r14, rbp mov DWORD PTR [r10+r15], 1 vmovsd QWORD PTR [rbx+r14], xmm6 .L25: cmp rdi, 1 je .L5 cmp rdi, 2 je .L26 inc rax vmovss xmm7, DWORD PTR [rcx-4+rax*4] vandps xmm2, xmm7, xmm1 vcomiss xmm9, xmm2 ja .L26 inc edx movsx rdi, edx mov r14, r9 mov r15, 
r11 imul r14, rdi imul r15, rdi imul rdi, rbp mov DWORD PTR [r8+r14], esi vcvtss2sd xmm3, xmm7, xmm7 mov DWORD PTR [r10+r15], eax vmovsd QWORD PTR [rbx+rdi], xmm3 .L26: inc rax vmovss xmm6, DWORD PTR [rcx-4+rax*4] vandps xmm8, xmm6, xmm1 vcomiss xmm9, xmm8 jbe .L36 jmp .L5 .L7: vmovss xmm10, DWORD PTR [rcx-4+rax*4] vandps xmm11, xmm10, xmm1 vcomiss xmm9, xmm11 ja .L24 inc edx movsx rdi, edx mov r14, r9 mov r15, r11 imul r14, rdi imul r15, rdi imul rdi, rbp mov DWORD PTR [r8+r14], esi vcvtss2sd xmm12, xmm10, xmm10 mov DWORD PTR [r10+r15], eax vmovsd QWORD PTR [rbx+rdi], xmm12 .L24: lea r14, [rax+1] vmovss xmm13, DWORD PTR [rcx-4+r14*4] vandps xmm14, xmm13, xmm1 vcomiss xmm9, xmm14 ja .L27 inc edx movsx rdi, edx mov r15, r9 imul r15, rdi vcvtss2sd xmm15, xmm13, xmm13 mov DWORD PTR [r8+r15], esi mov r15, r11 imul r15, rdi imul rdi, rbp mov DWORD PTR [r10+r15], r14d vmovsd QWORD PTR [rbx+rdi], xmm15 .L27: lea r14, [rax+2] vmovss xmm0, DWORD PTR [rcx-4+r14*4] vandps xmm4, xmm0, xmm1 vcomiss xmm9, xmm4 ja .L28 inc edx movsx rdi, edx mov r15, r9 imul r15, rdi vcvtss2sd xmm5, xmm0, xmm0 mov DWORD PTR [r8+r15], esi mov r15, r11 imul r15, rdi imul rdi, rbp mov DWORD PTR [r10+r15], r14d vmovsd QWORD PTR [rbx+rdi], xmm5 .L28: add rax, 3 vmovss xmm6, DWORD PTR [rcx-4+rax*4] vandps xmm7, xmm6, xmm1 vcomiss xmm9, xmm7 ja .L5 .L36: inc edx movsx rdi, edx mov r14, r9 mov r15, r11 imul r14, rdi imul r15, rdi imul rdi, rbp mov DWORD PTR [r8+r14], esi vcvtss2sd xmm2, xmm6, xmm6 mov DWORD PTR [r10+r15], eax vmovsd QWORD PTR [rbx+rdi], xmm2 .L5: inc rax cmp r12, rax jne .L7 lea rax, [rsi+1] add rcx, QWORD PTR [rsp+8] cmp r13, rsi je .L2 mov rsi, rax jmp .L4 .L2: Same with GCC 12.2: .L4: lea rdi, [r12-1] xor eax, eax and edi, 3 je .L5 vmovss xmm15, DWORD PTR [rcx] mov eax, 1 vandps xmm4, xmm15, xmm1 vcomiss xmm0, xmm4 ja .L27 inc edx movsx r14, edx mov r15, r9 imul r15, r14 vcvtss2sd xmm5, xmm15, xmm15 mov DWORD PTR [r8+r15], esi mov r15, r11 imul r15, r14 imul r14, rbp mov DWORD PTR [r10+r15], 1 vmovsd QWORD PTR [rbx+r14], xmm5 .L27: cmp rdi, 1 je .L5 cmp rdi, 2 je .L28 inc rax vmovss xmm6, DWORD PTR [rcx-4+rax*4] vandps xmm7, xmm6, xmm1 vcomiss xmm0, xmm7 ja .L28 inc edx movsx rdi, edx mov r14, r9 mov r15, r11 imul r14, rdi imul r15, rdi imul rdi, rbp mov DWORD PTR [r8+r14], esi vcvtss2sd xmm2, xmm6, xmm6 mov DWORD PTR [r10+r15], eax vmovsd QWORD PTR [rbx+rdi], xmm2 .L28: inc rax vmovss xmm5, DWORD PTR [rcx-4+rax*4] vandps xmm3, xmm5, xmm1 vcomiss xmm0, xmm3 jbe .L39 jmp .L5 .L41: vmovss xmm8, DWORD PTR [rcx-4+rax*4] vandps xmm9, xmm8, xmm1 vcomiss xmm0, xmm9 ja .L26 inc edx movsx r15, edx mov r14, r9 mov rdi, r11 imul r14, r15 imul rdi, r15 imul r15, rbp mov DWORD PTR [r8+r14], esi vcvtss2sd xmm10, xmm8, xmm8 mov DWORD PTR [r10+rdi], eax vmovsd QWORD PTR [rbx+r15], xmm10 .L26: lea r14, [rax+1] vmovss xmm11, DWORD PTR [rcx-4+r14*4] vandps xmm12, xmm11, xmm1 vcomiss xmm0, xmm12 ja .L29 inc edx movsx rdi, edx mov r15, r9 imul r15, rdi vcvtss2sd xmm13, xmm11, xmm11 mov DWORD PTR [r8+r15], esi mov r15, r11 imul r15, rdi imul rdi, rbp mov DWORD PTR [r10+r15], r14d vmovsd QWORD PTR [rbx+rdi], xmm13 .L29: lea r14, [rax+2] vmovss xmm14, DWORD PTR [rcx-4+r14*4] vandps xmm15, xmm14, xmm1 vcomiss xmm0, xmm15 ja .L30 inc edx movsx rdi, edx mov r15, r9 imul r15, rdi vcvtss2sd xmm4, xmm14, xmm14 mov DWORD PTR [r8+r15], esi mov r15, r11 imul r15, rdi imul rdi, rbp mov DWORD PTR [r10+r15], r14d vmovsd QWORD PTR [rbx+rdi], xmm4 .L30: add rax, 3 vmovss xmm5, DWORD PTR [rcx-4+rax*4] vandps xmm6, xmm5, xmm1 vcomiss xmm0, xmm6 ja 
.L5 .L39: inc edx movsx rdi, edx mov r14, r9 mov r15, r11 imul r14, rdi imul r15, rdi imul rdi, rbp mov DWORD PTR [r8+r14], esi vcvtss2sd xmm7, xmm5, xmm5 mov DWORD PTR [r10+r15], eax vmovsd QWORD PTR [rbx+rdi], xmm7 .L5: inc rax cmp r12, rax jne .L41 mov rdi, QWORD PTR [rsp+8] lea rax, [rsi+1] add rcx, rdi cmp rsi, r13 je .L2 mov rsi, rax jmp .L4 .L2: Here is the same with GCC 13.1 (GCC 13.2 is not available on Godbolt): .L4: mov QWORD PTR [rsp+16], r14 xor eax, eax .L20: mov r8, rax sub r8, r9 not r8 and r8d, 3 je .L6 inc rax vmovss xmm15, DWORD PTR [rcx-4+rax*4] vandps xmm0, xmm15, xmm2 vcomiss xmm1, xmm0 ja .L20 inc edx mov r15, r10 mov rdi, QWORD PTR [rsp+24] vcvtss2sd xmm4, xmm15, xmm15 movsx r14, edx imul r15, r14 mov DWORD PTR [rdi+r15], esi mov r15, rbx imul r15, r14 imul r14, r13 mov DWORD PTR [r11+r15], eax vmovsd QWORD PTR [r12+r14], xmm4 cmp r8, 1 je .L6 cmp r8, 2 je .L21 inc rax vmovss xmm5, DWORD PTR [rcx-4+rax*4] vandps xmm6, xmm5, xmm2 vcomiss xmm1, xmm6 ja .L20 inc edx mov r14, r10 vcvtss2sd xmm7, xmm5, xmm5 movsx r8, edx imul r14, r8 mov DWORD PTR [rdi+r14], esi mov rdi, rbx imul rdi, r8 imul r8, r13 mov DWORD PTR [r11+rdi], eax vmovsd QWORD PTR [r12+r8], xmm7 .L21: inc rax vmovss xmm5, DWORD PTR [rcx-4+rax*4] vandps xmm3, xmm5, xmm2 vcomiss xmm1, xmm3 ja .L20 inc edx mov r14, r10 mov rdi, QWORD PTR [rsp+24] movsx r8, edx imul r14, r8 .L32: mov r15, rbx mov DWORD PTR [rdi+r14], esi vcvtss2sd xmm8, xmm5, xmm5 imul r15, r8 imul r8, r13 mov DWORD PTR [r11+r15], eax vmovsd QWORD PTR [r12+r8], xmm8 .L6: lea r8, [rax+1] mov rax, r8 cmp r9, r8 je .L7 vmovss xmm9, DWORD PTR [rcx-4+r8*4] vandps xmm10, xmm9, xmm2 vcomiss xmm1, xmm10 ja .L20 inc edx mov r15, r10 mov rdi, QWORD PTR [rsp+24] vcvtss2sd xmm11, xmm9, xmm9 movsx r14, edx mov DWORD PTR [rsp+4], edx imul r15, r14 mov DWORD PTR [rdi+r15], esi mov r15, rbx imul r15, r14 imul r14, r13 mov DWORD PTR [r11+r15], eax inc rax vmovss xmm12, DWORD PTR [rcx-4+rax*4] vmovsd QWORD PTR [r12+r14], xmm11 vandps xmm13, xmm12, xmm2 vcomiss xmm1, xmm13 ja .L20 inc edx mov r15, r10 vcvtss2sd xmm14, xmm12, xmm12 movsx r14, edx imul r15, r14 mov DWORD PTR [rdi+r15], esi mov r15, rbx imul r15, r14 imul r14, r13 mov DWORD PTR [r11+r15], eax lea rax, [r8+2] vmovss xmm15, DWORD PTR [rcx-4+rax*4] vmovsd QWORD PTR [r12+r14], xmm14 vandps xmm0, xmm15, xmm2 vcomiss xmm1, xmm0 ja .L20 mov edx, DWORD PTR [rsp+4] mov r15, r10 vcvtss2sd xmm4, xmm15, xmm15 add edx, 2 movsx r14, edx imul r15, r14 mov DWORD PTR [rdi+r15], esi mov r15, rbx imul r15, r14 imul r14, r13 mov DWORD PTR [r11+r15], eax lea rax, [r8+3] vmovss xmm5, DWORD PTR [rcx-4+rax*4] vmovsd QWORD PTR [r12+r14], xmm4 vandps xmm6, xmm5, xmm2 vcomiss xmm1, xmm6 ja .L20 mov edx, DWORD PTR [rsp+4] mov r14, r10 add edx, 3 movsx r8, edx imul r14, r8 jmp .L32 .L7: mov rdi, QWORD PTR [rsp+8] mov r14, QWORD PTR [rsp+16] lea rax, [rsi+1] add rcx, rdi cmp r14, rsi je .L2 mov rsi, rax jmp .L4 .L2: Discussion First of all, we can see that the main loop is full of conditional branches mixed with goto. Such kind of code are generally not written by humans. Compiler can expect them to be rare and so optimize them less efficiently than ones without goto (since assertions are easier to compute). The thing is the j == 0 condition is not really useful here: it can be done only once before the j-based loop. GCC does not seem to see that (especially due to the complex control-flow, amplified by Cython goto instructions). 
On top of that, the compiler does not know whether fabsf(element) < SMALL is often true, often false, or simply hard to predict at runtime. If the condition is easy to predict (the first two cases), then a good compiler can write more efficient code than the one produced by GCC (possibly using SIMD instructions). Actually, I am not even sure -funroll-loops is useful here because of the complex control-flow. It makes the hot loop significantly bigger (i.e., more space in caches) and the processor might take more time to predict the probability of the taken/not-taken branches. This is dependent on the target processor. Having smaller code is also good for profiling and optimizing your code: simpler code is easier to optimize. To optimize the code, the compiler makes assumptions about the cost of the loops and the probability of the branches being taken. It can assume, for example, that the inner loop is far more expensive than the outer loop. Heuristics are used to optimize the code and their parameters are carefully tuned to maximize the performance of benchmark suites (and avoid regressions in code like the Linux kernel for GCC). Thus, a side effect is that your code can get slower while most code gets faster with a newer version of GCC; the same holds for most compilers. This is also one reason why the performance obtained with a given compiler is not very stable from one version to another. For example, if a matrix has a lot of zeros, then fabsf(element) < SMALL will often be true, causing the outer loop to be more expensive than expected. The thing is, the outer loop might be less well optimized in GCC 13.2 than in GCC 12.2, resulting in a slower execution time with GCC 13.2. For example, GCC 13.2 starts the outer loop with: mov r8, rax sub r8, r9 not r8 and r8d, 3 je .L6 Note that modern processors, like Skylake, can execute multiple instructions in parallel per cycle. This sequence of instructions can only be executed sequentially because of the dependency chain on the r8 register. Meanwhile, GCC 12.2 generates this at the beginning of the outer loop: lea rdi, [r12-1] xor eax, eax and edi, 3 je .L5 The dependency chain (on rdi/edi) is apparently significantly smaller in this case. In practice, the assembly code is pretty big, so I do not expect this to be the only issue. One needs to track the actual path taken for the input matrices in all versions of GCC (very tedious), or just run the code and profile it with tools like VTune (the recommended solution in this case). We can see that there are a lot of imul instructions. Such instructions are typically due to the strides of the array not being known at compile time: GCC does not generate a specialized code path for strides that are 1. You should really provide this information in the memory views using ::1 if possible (as mentioned by ead in the comments), since imul instructions tend to be a bit expensive in hot loops like this (not to mention they also increase the size of the code, which is already bigger due to the partial unrolling). | 4 | 2 |
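The closing advice about declaring contiguity can be made concrete. Below is a minimal sketch, not taken from the answer, showing how stride information looks from the NumPy side; the Cython signature in the comment is an assumption modelled on the question's `run` method, only meant to illustrate where the `::1` declaration would go.

```python
import numpy as np

m, n = 4, 6
a = np.asfortranarray(np.zeros((m, n), dtype=np.float32))

# Fortran order: the first axis is the contiguous one (stride == itemsize).
print(a.flags['F_CONTIGUOUS'])   # True
print(a.strides)                 # (4, 16) for float32 with m=4

# In the .pyx file, a declaration like the following (hypothetical signature)
# tells Cython the first axis is contiguous, so the generated C can index with
# simple adds instead of imul on runtime strides:
#     def run(self, float[::1, :] matrix, float[::1] ub_values, float[::1] priority): ...
```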
76,837,612 | 2023-8-4 | https://stackoverflow.com/questions/76837612/good-way-to-view-matrices-and-higher-dimensional-arrays-in-vscode | When working with PyTorch/numpy and similar packages, is there a good way to view matrices (or, in general, arrays with two or more dimensions) in debug mode, similar to the way Matlab (or even pyCharm if I remember correctly) present it? This is, for example, a PyTorch tensor, which is very confusing -- opening H here gives me the same thing again and again. As opposed to Matlab, where I can watch it like that: Would appreciate any help with that! | Yes, make sure you have the Jupyter extension installed and then simply right click the variable in the Debug menu and select the View Value in Data Viewer option. | 6 | 3 |
76,816,186 | 2023-8-2 | https://stackoverflow.com/questions/76816186/interpreting-an-array-of-values-using-skfuzzy | I am using Skfuzzy to interpret two arrays: 1. distances to a stream [dist]; 2. Strahler order[order]. I then want to calculate the consequent (a vulnerability value) for each stream distance and Strahler order pairs using a set of custom membership values I have created for the antecedents and consequent. I have the program working for a single input of values, but I don't understand how to get skfuzzy to loop through an array of input values and output the consequent for each value pair. The control system: # Creation of control system using the defined rules vulnerability_ctrl = ctrl.ControlSystem([rule1, rule2, rule3]) vulnerability = ctrl.ControlSystemSimulation(vulnerability_ctrl) The inputs and computation: # input of distance and Strahler order arrays vulnerability.input[dist] vulnerability.input[order] vulnerability.compute() I get this error because I don't know how skfuzzy deals with an array of inputs: TypeError: '_InputAcceptor' object is not subscriptable | One can iterate down an array through the use of a for loop as shown below. First ensure that your array is converted to float type using 'numpy.asfarray'. # Convert to float array dist_x = np.asfarray(strm_dist) order_x = np.asfarray(strm_order) # iterate down an array of input values strm_vul = [] for i in range(len(dist)): vulnerability.input['dist'] = dist_x[i] vulnerability.input['order'] = order_x[i] vulnerability.compute() strm_vul.append(vulnerability.output['vul']) | 3 | 1 |
76,836,793 | 2023-8-4 | https://stackoverflow.com/questions/76836793/jupyter-notebook-cannot-import-pyldavis-sklearn | I am using Jupyter Notebook to run python code. I already did the following: !pip install pyldavis I can successfully import pyLDAvis via the following codes: import pyLDAvis pyLDAvis.enable_notebook() However, I cannot import pyLDAvis.sklearn via the following codes: import pyLDAvis.sklearn It returns: ModuleNotFoundError Traceback (most recent call last) Cell In[52], line 1 ----> 1 import pyLDAvis.sklearn_models ModuleNotFoundError: No module named 'pyLDAvis.sklearn_models' Why is that and what should I do to deal with it? | It looks like there has been a change in how the software handles this pattern. This issue posted here in May of this year (2023) that looks to be the same as yours. It links over to a solution that details how the use of the software has recently developed: "pyLDAvis v 3.4.0 no longer has the file sklearn.py in the pip package." Replace any logic involving: import pyLDAvis.sklearn ... pyLDAvis.sklearn.prepare "with" import pyLDAvis.lda_model ... pyLDAvis.lda_model.prepare How this was found via troubleshooting: I went to the page for the package in the Python Package Index (PyPI) and clicked on 'GitHub statistics:' on the left side under 'GitHub statistics:'. Then in the 'filters' slot I entered 'pyLDAvis.sklearn'. The four that came up as open didn't look too similar to the OP, and so I clicked on the '7 closed' tag above the listing. The most recent one listed 'ModuleNotFoundError: No module named 'pyLDAvis.sklearn' looked to be a good match to this post, and so I examined it. | 5 | 7 |
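For code that has to run against both old and new pyLDAvis releases, a small compatibility shim can avoid hard-coding either import path. This is only a sketch; the module names come from the question and answer above, and `prepare` is the function the answer refers to.

```python
try:
    # pyLDAvis >= 3.4: the sklearn helpers moved to pyLDAvis.lda_model
    import pyLDAvis.lda_model as pyldavis_sklearn_compat
except ModuleNotFoundError:
    # older releases still ship pyLDAvis.sklearn
    import pyLDAvis.sklearn as pyldavis_sklearn_compat

# Afterwards call pyldavis_sklearn_compat.prepare(...) regardless of the installed version.
```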
76,836,454 | 2023-8-4 | https://stackoverflow.com/questions/76836454/avoid-for-loops-over-colum-values-in-a-pandas-dataframe-with-a-function | I have the following structur of a dataframe: df = pd.DataFrame({'Level': ["a","b", "c"], 'Kontogruppe': ["a", "a", "b"], 'model': ["alpha", "beta", "alpha"], 'MSE': [0, 1 ,1], 'actual_value': [1,2,3], 'forecast_value': [2,2,2]}) For this dataframe I run severel functions, for example: def metrics(df): df_map= pd.DataFrame({'Level': ["a"], 'Kontogruppe': ["a"], 'model': ["alpha"], 'MSE': [0]}) for i in df['Level'].unique(): for j in df['Kontogruppe'].unique(): for k in df['model'].unique(): df_lkm = df.loc[(df['Level'] == i) & (df['Kontogruppe'] == j) & (df['model'] == k)] if df_lkm.empty: out_MSE = 10000000000 else: out_MSE = sum(df_lkm['actual_value'])/sum(df_lkm['forecast_value']) df_map_map = pd.DataFrame({'Level': [i], 'Kontogruppe': [j], 'model': [k], 'out_MSE': [out_MSE]}) df_map = pd.concat([df_map, df_map_map]) df = pd.merge(df, df_map, how='left', on=['Level', 'Kontogruppe', 'model']) return df df = metrics(df) so basically I loop over the unique column values and filter the dataframe based on this. In this case I get for every Level, Kontogruppe and model the value 'out_MSE' gets calculated over all entries of actual_values and forecast_values. And is appended as a value for every row in a new column. For this problem is there are more efficient way to this? Is there any pythonic way in general to avoid this for loops, my dataframe is big and this costs a lot of performance. | If I understand correctly, you might just want a simple groupby.sum with a bit of post-processing. Because you only care about the existing combinations, there is no need to loop over all of them and assign a large value. (df.groupby(['Level', 'Kontogruppe', 'model'], as_index=False) [['actual_value', 'forecast_value']].sum() .eval('out_MSE = actual_value/forecast_value') ) Output: Level Kontogruppe model actual_value forecast_value out_MSE 0 a a alpha 1 2 0.5 1 b a beta 2 2 1.0 2 c b alpha 3 2 1.5 Output of your code for comparison: Level Kontogruppe model MSE_x actual_value forecast_value MSE_y out_MSE 0 a a alpha 0 1 2 0.0 NaN 1 a a alpha 0 1 2 NaN 0.5 2 b a beta 1 2 2 NaN 1.0 3 c b alpha 1 3 2 NaN 1.5 | 3 | 1 |
76,836,403 | 2023-8-4 | https://stackoverflow.com/questions/76836403/typeerror-when-using-super-in-a-dataclass-with-slots-true | I have a dataclass with (kind of) a getter method. This code works as expected: from dataclasses import dataclass @dataclass() class A: def get_data(self): # get some values from object's fields # do some calculations return "a calculated value" @dataclass() class B(A): def get_data(self): data = super().get_data() return data + " (modified)" b = B() print(b.get_data()) # a calculated value (modified) However, if I add slots=True, I get a TypeError: from dataclasses import dataclass @dataclass(slots=True) class A: def get_data(self): return "a calculated value" @dataclass(slots=True) class B(A): def get_data(self): data = super().get_data() return data + " (modified)" b = B() print(b.get_data()) # TypeError: super(type, obj): obj must be an instance or subtype of type The error vanishes if I use the old-style super(), contrary to pep-3135: from dataclasses import dataclass @dataclass(slots=True) class A: def get_data(self): return "a calculated value" @dataclass(slots=True) class B(A): def get_data(self): data = super(B, self).get_data() return data + " (modified)" b = B() print(b.get_data()) # a calculated value (modified) Why does this happen and how to fix it the right way? | slots: If true (the default is False), __slots__ attribute will be generated and new class will be returned instead of the original one. If __slots__ is already defined in the class, then TypeError is raised. https://docs.python.org/3/library/dataclasses.html#dataclasses.dataclass Taking this as an example: @dataclass(slots=True) class Foo: pass This means this works something like this: class Foo: pass Foo = dataclass(slots=True)(Foo) You define a class Foo, and then it gets replaced with a different, altered class. Now, your method: def get_data(self): data = super().get_data() ... This super() was written in the original Foo class and will assume it's supposed to look up the parent of this original Foo class; but the instance it currently has is not actually an instance of that class, it's an instance of the other, altered class. When you do this instead: data = super(Foo, self).get_data() This looks up what currently refers to the name Foo, which again matches what self also refers to at this moment. | 9 | 2 |
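The key point of the answer, that `slots=True` makes the decorator return a brand-new class object, can be checked directly. A minimal sketch (independent of the question's `A`/`B` hierarchy, requires Python 3.10+ for the `slots` parameter):

```python
from dataclasses import dataclass

class A:
    def get_data(self):
        return "a calculated value"

original_A = A                    # keep a reference to the class as first defined
A = dataclass(slots=True)(A)      # what the @dataclass(slots=True) decorator does

print(A is original_A)            # False: a new class replaced the original
print(A.__slots__)                # () -- __slots__ was added to the new class
```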
76,832,124 | 2023-8-3 | https://stackoverflow.com/questions/76832124/where-was-python-installed | How can I find out where Python was installed in a Windows 11 machine, so that I can use the address to add Python to the PATH variable? The documentation I have found on this assumes that the user can already use the python command in the cli. But in this case, the cli cannot find python yet because python has not been added to the PATH yet. Also, I looked closely in Windows file explorer and was not able to find Python in Program Files, Program Files (x86), the root of the C drive, or any of the many other places that I looked. Here are the commands that first install python and then try to check the resulting Python version. PS C:\Users\Administrator> C:\temp\python-3.11.4-amd64.exe /passive InstallAllUsers=0 InstallLauncherAllUsers=0 PrependPath=1 Include_test=0 PS C:\Users\Administrator> python --version python : The term 'python' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1 + python --version + ~~~~~~ + CategoryInfo : ObjectNotFound: (python:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException PS C:\Users\Administrator> The /passive flag above results in the python install GUI launching while the install happens, and then closing without giving any error message, so it seems clear that the install command is indeed running. ... If you can suggest any alterations to the command to elucidate any possible error message, I would be happy to try your suggested fully automated commands that might leave better logs. This is on an Amazon EC2 instance being accessed remotely using an RDP file, if that makes any difference. This provisioning process must be completely automated. Powershell is running as an administrator while invoking the above commands. | The default install location for user installations on Windows uses the LOCALAPPDATA variable. This typically points towards C:\users\<username>\Appdata\Local\Programs\Python\Python<XY>\ where <XY> are the major and minor versions. In your case this would be C:\users\Administrator\Appdata\Local\Programs\Python\Python311\. Source, specifically, here. You might note that there are PythonXY-32 and PythonXY-64 alternatives for 32 and 64 bit installations respectively, however, I believe these are no longer used in Windows 11 since the installer is now 64 bit by default. If you are using the same installer for all installs you should be able to rely on %LOCALAPPDATA%\Programs\Python\Python311\ as the path. (Or "$($env:LOCALAPPDATA)\Programs\Python\Python311\" in Powershell. Python typically adds both "$($env:LOCALAPPDATA)\Programs\Python\Python311\" and "$($env:LOCALAPPDATA)\Programs\Python\Python311\Scripts\" to PATH since the Scripts directory will include helpful tools like pip, mypy, and pylint (if installed). These could be run as subcommands (python -m mypy my_file.py) if Scripts isn't on the PATH, but most documentation will demonstrate them as being run directly (mypy my_file.py). | 5 | 4 |
76,833,042 | 2023-8-4 | https://stackoverflow.com/questions/76833042/how-to-get-a-list-of-all-the-methods-of-a-class-and-its-parameters | I have a requirement where i need to generate a list of all the methods and its parameters belonging to the Python class passed as argument. I want something like this class MyClass: def __init__(self,attr1): pass def method1(self,param1,param2): pass def method2(self,param3): pass def method3(self): pass def get_all_methods_details(className): # gets all the methods associated to the class along with its method. When I do: get_all_methods_details(myClass) it should return something like this: [(method1, param1,param2),(method2,param3),(method3)] This is what I have tried: def get_all_method_details(className): return [ m for m in dir(className) if not m.startswith('__')] print(get_all_method_details(MyClass)) this prints: [method1, method2, method3] I saw solutions for getting parameters from normal function but those does not work with methods. | You can use the .co_varnames attribute of the code object: def get_all_methods_details(class_name): return [ (m, getattr(class_name, m).__code__.co_varnames) for m in dir(class_name) if not m.startswith("__") ] This should return: [('method1', ('self', 'param1', 'param2')), ('method2', ('self', 'param3')), ('method3', ('self',))] As a side note, those are not class methods. They are instance methods. | 3 | 1 |
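A more robust variant uses `inspect.signature`, which only reports real parameters (the `co_varnames` approach would also list local variables once a method body defines any). A self-contained sketch reusing the question's `MyClass`:

```python
import inspect

class MyClass:
    def __init__(self, attr1): pass
    def method1(self, param1, param2): pass
    def method2(self, param3): pass
    def method3(self): pass

def get_all_method_details(cls):
    # getmembers finds the functions defined on the class; signature lists parameters only
    return [
        (name, list(inspect.signature(func).parameters))
        for name, func in inspect.getmembers(cls, inspect.isfunction)
        if not name.startswith("__")
    ]

print(get_all_method_details(MyClass))
# [('method1', ['self', 'param1', 'param2']), ('method2', ['self', 'param3']), ('method3', ['self'])]
```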
76,829,328 | 2023-8-3 | https://stackoverflow.com/questions/76829328/align-text-in-the-center-of-the-bounding-box | I'm trying to create some labels manually which should align exactly with the tick locations. However, when plotting text ha='center' aligns the bounding box of the text in the center, but the text itself within the bounding box is shifted to the left. How can I align the text itself in the center? I found this question but it doesn't help as it shifts the bounding box, while I need to shift the text. import matplotlib matplotlib.use('TkAgg') import matplotlib.pyplot as plt print(matplotlib.__version__) # 3.5.3 fig, ax = plt.subplots() ax.plot([.5, .5], [0, 1], transform=ax.transAxes) ax.text(.5, .5, 'This text needs to be center-aligned'.upper(), ha='center', va='center', rotation='vertical', transform=ax.transAxes, bbox=dict(fc='blue', alpha=.5)) ax.set_title('The box is center-aligned but the text is too much to the left') plt.show() | You can set the transform parameter of the text object with an actual matplotlib transform, e.g.: import matplotlib as mpl import matplotlib.pyplot as plt import matplotlib.transforms as transforms mpl.rcParams['figure.dpi'] = 300 # mpl.use("TkAgg") print(mpl.__version__) # 3.5.3 fig, ax = plt.subplots() dx, dy = 5.5 / 300.0, 0 / 300.0 offset = transforms.ScaledTranslation(dx, dy, fig.dpi_scale_trans) text_transform = ax.transData + offset ax.plot( [0.5, 0.5], [0, 1], ) ax.text( 0.5, 0.5, "This text needs to be center-aligned".upper(), ha="center", va="center", rotation="vertical", transform=text_transform, bbox=dict(fc="blue", alpha=0.5), ) ax.set_title("The box is center-aligned but the text is too much to the left") plt.show() produced: Note: The exact setting of dx (the amount to shift the plotted text in the x-dimension of the Axes coordinate system of plot) will depend on the value set for the figure DPI (here I've set it to 300 and was also running this inside a jupyter notebook). | 4 | 1 |
76,823,121 | 2023-8-2 | https://stackoverflow.com/questions/76823121/reduce-amount-of-silence-needed-for-pythons-speechrecognition-to-stop-capturing | I'm using Python's SpeechRecognition to geenerate captions for a livestream. I noticed that when I listen to mic input, recognizer would need a couple of seconds of silence in order to stop capturing audio. Is there a way to reduce that amount of silence needed to say .5 seconds? I'm open to using other methods/libraries, as long as it's not something too low level. Here's my code so far: import speech_recognition as sr from googletrans import Translator import threading # Config OUTPUT_FILE_NAME = "transcription.txt" def listen(recognizer, microphone): with microphone as source: audio = recognizer.listen(source) return audio def transcribe(audio, recognizer, translator): try: uk_text = recognizer.recognize_google(audio, language="uk-UA") translated_text = translator.translate(uk_text, src="uk", dest="en") write_to_file(OUTPUT_FILE_NAME, translated_text.text) except sr.UnknownValueError: print("Could not understand audio.") write_to_file(OUTPUT_FILE_NAME, "") except sr.RequestError as e: print(f"Error occurred during recognition: {e}") def write_to_file(file_path, text): with open(file_path, "w", encoding="utf-8") as file: file.write(text) def get_mic(): for index, source in enumerate(sr.Microphone.list_microphone_names()): print(f"{index}: {source}") while True: index = input("Select an index from the list above: ") try: return int(index) except ValueError: print("Invalid index") if __name__ == "__main__": mic_index = get_mic() translator = Translator() recognizer = sr.Recognizer() microphone = sr.Microphone(device_index=mic_index) print("Adjusting for ambient noise, please don't say anything...") with microphone as source: recognizer.adjust_for_ambient_noise(source) print("Listening...") try: while True: audio = listen(recognizer, microphone) transcription_thread = threading.Thread( target=transcribe, kwargs={"audio":audio, "recognizer":recognizer, "translator":translator} ) transcription_thread.setDaemon(True) transcription_thread.start() except KeyboardInterrupt: print("\nShutting down recognition service...") write_to_file("transcription.txt", "Recognition service inactive. This is sample text.") I've tried using phrase_time_limit on a .listen() function, but that's not what I'm looking for, as it would sometimes cut me off in the middle of the word. | From the source code of the SpeechRecognition library, the parameter you need is pause_threshold, which is a parameter taken by the Recognizer object. self.pause_threshold = 0.8 # seconds of non-speaking audio before a phrase is considered complete In your code above, it would be passed like: recognizer = sr.Recognizer(pause_threshold=0.5) # or other value Try experimenting with the pause_threshold value. | 3 | 2 |
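If passing `pause_threshold` to the constructor does not work in your version of SpeechRecognition, it is also exposed as a plain attribute on the `Recognizer` instance (as the source line quoted in the answer shows), so a hedged alternative is to set it after construction; `non_speaking_duration` is a second, optional knob and the constraint noted in the comment is an assumption based on the library's documented behaviour.

```python
import speech_recognition as sr

recognizer = sr.Recognizer()
recognizer.pause_threshold = 0.5       # seconds of silence that end a phrase (default 0.8)
# non_speaking_duration should stay <= pause_threshold; lowering it trims the silence
# kept around each captured phrase.
recognizer.non_speaking_duration = 0.3
```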
76,831,468 | 2023-8-3 | https://stackoverflow.com/questions/76831468/keep-items-with-same-keys-in-two-dictionary-and-discard-other-items | I am trying to remove all non-matching items (values with different keys) in two dicts dict_a and dict_b. What is a better way of achieving this? Example: dict_a = {key1: x1, key2: y1, key4: z1} dict_b = {key1: x2, key3: y2, key4: w1} # becomes: # dict_a = {key1: x1, key4: z1} # dict_b = {key1: x2, key4: w1} My attempt: keep_keys = [keyA for keyA in dict_a.keys() if keyA in dict_b.keys()] for i in dict_b.copy().keys(): if i not in keep_keys: del dict_b[i] for i in dict_a.copy().keys(): if i not in keep_keys: del dict_a[i] I am looking for shorter (pythonic) way of doing this. | You can do: d1 = {"key1": "x1", "key2": "y1", "key4": "z1"} d2 = {"key1": "x2", "key3": "y2", "key4": "w1"} common_keys = d1.keys() & d2.keys() d1 = {k: d1[k] for k in common_keys} d2 = {k: d2[k] for k in common_keys} print(d1) print(d2) Prints: {'key1': 'x1', 'key4': 'z1'} {'key1': 'x2', 'key4': 'w1'} | 3 | 2 |
76,830,702 | 2023-8-3 | https://stackoverflow.com/questions/76830702/is-there-a-way-of-getting-the-inset-axes-by-asking-the-axes-it-is-embedded-in | I have several subplots, axs, some of them with embedded inset axes. I would like to get the data plotted in the insets by iterating over the main axes. Let's consider this minimal reproducible example: fig, axs = plt.subplots(1, 3) x = np.array([0,1,2]) for i, ax in enumerate(axs): if i != 1: ins = ax.inset_axes([.5,.5,.4,.4]) ins.plot(x, i*x) plt.show() Is there a way of doing something like data = [] for ax in axs: if ax.has_inset(): # "asking" if ax has embedded inset ins = ax.get_inset() # getting the inset from ax line = ins.get_lines()[0] dat = line.get_xydata() data.append(dat) print(data) # [array([[0., 0.], # [1., 0.], # [2., 0.]]), # array([[0., 0.], # [1., 2.], # [2., 4.]])] | You could use get_children and a filter to retrieve the insets: from matplotlib.axes import Axes def get_insets(ax): return [c for c in ax.get_children() if isinstance(c, Axes)] for ax in fig.axes: print(get_insets(ax)) Output: [<Axes:label='inset_axes'>] [] [<Axes:label='inset_axes'>] For your particular example: data = [] for ax in fig.axes: for ins in get_insets(ax): line = ins.get_lines()[0] dat = line.get_xydata() data.append(dat) Output: [array([[0., 0.], [1., 0.], [2., 0.]]), array([[0., 0.], [1., 2.], [2., 4.]])] | 4 | 2 |
76,828,644 | 2023-8-3 | https://stackoverflow.com/questions/76828644/python-pandas-select-all-nan-row-and-fill-with-previous-row | I have a dataframe look like this, pd.DataFrame([list(range(8))+[np.nan]*2, [np.nan]*len(range(10)), range(10), [np.nan]*2+list(range(8)), range(10)]) Out[31]: 0 1 2 3 4 5 6 7 8 9 0 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 NaN NaN 1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 3 NaN NaN 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 4 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 I want to select the row with all NaN, that is the second in this case and fill it with the previous row. 0 1 2 3 4 5 6 7 8 9 0 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 NaN NaN 1 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 NaN NaN 2 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 3 NaN NaN 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 4 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 How to do that? Thanks a lot. | IIUC, you can do: mask = df.isna().all(axis=1) df.loc[mask, :] = df.loc[mask.shift(-1).fillna(False), :].values print(df) Prints: 0 1 2 3 4 5 6 7 8 9 0 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 NaN NaN 1 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 NaN NaN 2 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 3 NaN NaN 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 4 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 | 2 | 3 |
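An equivalent, slightly more compact sketch uses `ffill` to build the "previous row" values and writes them back only on the all-NaN rows (same data and same assumptions as the accepted answer):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([list(range(8)) + [np.nan]*2,
                   [np.nan]*10,
                   range(10),
                   [np.nan]*2 + list(range(8)),
                   range(10)])

mask = df.isna().all(axis=1)        # rows where every value is NaN
df.loc[mask] = df.ffill()[mask]     # take those rows from a forward-filled copy
print(df)
```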
76,826,939 | 2023-8-3 | https://stackoverflow.com/questions/76826939/python-count-number-of-occurrences-based-on-other-columns-dictionary-words | Here is an example of my dataframe: text name1 name2 name3 count_name barbie and ken live in a house barbie ken sophie 2 bond is preparing a bond movie bond james NaN 2 homer likes donuts homer bart NaN 1 where does mary live peter NaN NaN 0 i am travelling with john john NaN NaN 1 barbie ken barbieken barbie ken NaN 2 The last column "count_name" shows what I want to obtain. I want to count the number of times each "name1", "name2" and "name3" appear in column "text". I tried this code following a previous question (which probably would not give me exactly what I wanted) (Count occurrences of a string ocurring in multiple columns at the same time): import pandas as pd df['count_name'] = pd.MultiIndex.from_arrays(df[['text', 'name1', 'name2', 'name3']].to_numpy().T).value_counts() but I obtain a column full of NaNs. | Using a regex: import re cols = ['name1', 'name2', 'name3'] # or # cols = list(df.filter(regex=r'name\d+')) pat = df[cols].stack().groupby(level=0).agg('|'.join) df['count_name'] = [len(re.findall(p, t)) for t, p in zip(df['text'], pat)] To account for full words only: df['count_name'] = [len(re.findall(fr'\b(?:{p})\b', t)) for t, p in zip(df['text'], pat)] Output: text name1 name2 name3 count_name 0 barbie and ken live in a house barbie ken sophie 2 1 bond is preparing a bond movie bond james NaN 2 2 homer likes donuts homer bart NaN 1 3 where does mary live peter NaN NaN 0 4 i am travelling with john john NaN NaN 1 If you just want to count the full words, then use a set: import re cols = ['name1', 'name2', 'name3'] S = df[cols].agg(set, axis=1)-{np.nan} df['count_name'] = [sum(1 for w in t.split() if w in s) for t, s in zip(df['text'], S)] Output: text name1 name2 name3 count_name 0 barbie and ken live in a house barbie ken sophie 2 1 bond is preparing a bond movie bond james NaN 2 2 homer likes donuts homer bart NaN 1 3 where does mary live peter NaN NaN 0 4 i am travelling with john john NaN NaN 1 5 barbie ken barbieken barbie ken NaN 2 | 2 | 2 |
76,818,020 | 2023-8-2 | https://stackoverflow.com/questions/76818020/select-specific-range-of-elements-from-a-python-dictionary-based-on-condition | I have the following dictionary: ip_dict = { "doc_1" : { "img_1" : ("FP","some long text"), "img_2" : ("LP", "another long text"), "img_3" : ("Others", "long text"), "img_4" : ("Others", "some loong text"), "img_5" : ("FP", "one more text"), "img_6" : ("FP", "another one"), "img_7" : ("LP", "ANOTHER ONE"), "img_8" : ("Others", "some text"), "img_9" : ("Others", "some moretext"), "img_10" : ("FP", "more text"), "img_11" : ("Others", "whatever"), "img_12" : ("Others", "more whatever"), "img_13" : ("LP", "SoMe TeXt"), "img_14" : ("Others", "some moretext"), "img_15" : ("FP", "whatever"), "img_16" : ("Others", "whatever"), "img_17" : ("LP", "whateverrr") }, "doc_2" : { "img_1" : ("FP", "text"), "img_2" : ("FP", "more text"), "img_3" : ("LP", "more more text"), "img_4" : ("Others", "some more"), "img_5" : ("Others", "text text"), "img_6" : ("FP", "more more text"), "img_7" : ("Others", "lot of text"), "img_8" : ("LP", "still more text") } } Here FP represents the first page and LP the last page. For all the docs I only want to extract the FP and LP. For the Others, if they lie between FP and LP only then extract them, as they represent the pages between FP and LP. If they lie outside FP and LP then ignore them. Also for FP which are not followed by a LP, treat them as a single page and extract them. So my output dictionary would look like: op_dict = { "doc_1" : [ { "img_1" : ("FP","some long text"), "img_2" : ("LP", "another long text") }, { "img_5" : ("FP", "one more text") }, { "img_6" : ("FP", "another one"), "img_7" : ("LP", "ANOTHER ONE") }, { "img_10" : ("FP", "more text"), "img_11" : ("Others", "whatever"), "img_12" : ("Others", "more whatever"), "img_13" : ("LP", "SoMe TeXt"), }, { "img_15" : ("FP", "whatever"), "img_16" : ("Others", "whatever"), "img_17" : ("LP", "whateverrr"), } ], "doc_2" : [ { "img_1" : ("FP", "text") }, { "img_2" : ("FP", "more text"), "img_3" : ("LP", "more more text") }, { "img_6" : ("FP", "more more text"), "img_7" : ("Others", "lot of text"), "img_8" : ("LP", "still more text") }, ] } As you can see, all the FP and LP have been extracted, but also those Others which are in between FP and LP have also been extracted and stored in a dictionary. Also those FP which are not followed by a LP have also been extracted. PS: ip_dict = { "doc_1" : { "img_1" : ("LP","some long text"), "img_2" : ("Others", "another long text"), "img_3" : ("Others", "long text"), "img_4" : ("FP", "long text"), "img_5" : ("Others", "long text"), "img_6" : ("LP", "long text") } } op_dict = { "doc_1" : [{ "img_1" : ("LP","some long text") }, { "img_4" : ("FP", "long text"), "img_5" : ("Others", "long text"), "img_6" : ("LP", "long text") } ] } Any help is appreciated! 
| With extended sequential logic: def select_page_ranges(d: dict): def _del_excess_items(): # if previous block was not closed and has excess entries if start and last_mark != 'FP': res[pk][-1] = {start_key: res[pk][-1][start_key]} res = {} for pk, v in ip_dict.items(): res[pk] = [] start, start_key, last_mark = None, None, '' for k, v in v.items(): if v[0] == 'FP': _del_excess_items() res[pk].append({k: v}) start = True start_key = k elif v[0] == 'LP': res[pk][-1].update({k: v}) start = False elif start: res[pk][-1].update({k: v}) last_mark = v[0] _del_excess_items() return res print(select_page_ranges(ip_dict)) {'doc_1': [{'img_1': ('FP', 'some long text'), 'img_2': ('LP', 'another long text')}, {'img_5': ('FP', 'one more text')}, {'img_6': ('FP', 'another one'), 'img_7': ('LP', 'ANOTHER ONE')}, {'img_61': ('FP', 'another one'), 'img_71': ('LP', 'ANOTHER ONE')}, {'img_62': ('FP', 'another one'), 'img_72': ('LP', 'ANOTHER ONE')}, {'img_54': ('FP', 'one more text')}, {'img_540': ('FP', 'one more text')}, {'img_541': ('FP', 'one more text')}, {'img_13': ('FP', 'more text'), 'img_14': ('Others', 'whatever'), 'img_140': ('Others', 'whatever'), 'img_141': ('Others', 'whatever'), 'img_142': ('Others', 'whatever'), 'img_15': ('Others', 'more whatever'), 'img_16': ('LP', 'SoMe TeXt')}, {'img_18': ('FP', 'whatever'), 'img_19': ('Others', 'whatever'), 'img_20': ('LP', 'whateverrr')}], 'doc_2': [{'img_1': ('FP', 'text')}, {'img_2': ('FP', 'more text'), 'img_3': ('LP', 'more more text')}, {'img_6': ('FP', 'more more text'), 'img_7': ('Others', 'lot of text'), 'img_8': ('LP', 'still more text')}, {'img_69': ('FP', 'more more text')}]} | 2 | 3 |
76,820,218 | 2023-8-2 | https://stackoverflow.com/questions/76820218/finding-matching-elements-in-arrays-of-different-lengths | I have 2 arrays of different lengths. For each element in array_1, I want to find the position of the matching element in array_2. This includes duplicates in array_1. For example: array_1 = np.array([555, 641, 1000, 641, 4, 641]) array_2 = np.array([4, 555, 641, 1000]) The desired output would be: out = [1,2,3,2,0,2] For every element in array_1 there is a matching element in array_2. array_2 is unique and sorted. In reality, each array has around a million terms. The solution below works, but it is too slow for what I need: out = [np.where(array_2 == x) for x in array_1] | Assuming that all unique items are present in both arrays. if array_2 is sorted, then it's not even needed to use it, numpy.unique is sufficient. out = np.unique(array_1, return_inverse=True)[1] Output: array([1, 2, 3, 2, 0, 2]) If array_2 is not sorted, then one needs a bit of post-processing with numpy.argsort: # let's take a non sorted example array_2 = np.array([555, 641, 4, 1000]) uniq, idx = np.unique(array_1, return_inverse=True) out = np.argsort(array_2)[idx] Output: array([0, 1, 3, 1, 2, 1]) | 4 | 2 |
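Since `array_2` is already sorted and unique and every element of `array_1` is guaranteed to occur in it, `np.searchsorted` is another natural fit and avoids recomputing anything from `array_1` itself; a quick sketch with the question's data:

```python
import numpy as np

array_1 = np.array([555, 641, 1000, 641, 4, 641])
array_2 = np.array([4, 555, 641, 1000])

# binary search of each element of array_1 in the sorted array_2
out = np.searchsorted(array_2, array_1)
print(out)   # [1 2 3 2 0 2]
```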
76,821,158 | 2023-8-2 | https://stackoverflow.com/questions/76821158/specify-that-a-typevar-supports-the-operator-among-its-values | Basically, I want to say that I have a type T which has a T.__sub__(self, other: T) -> T defined. I can currently make it typecheck as if it has a T.__sub__(self, other: Any) -> Any defined, but only works if T is a class I defined myself. I'm trying to type the following class from typing import TypeVar, Generic from dataclasses import dataclass T = TypeVar('T') @dataclass class Consumption(Generic[T]): first: T second: T def difference(self) -> T: return self.second - self.first However, my LSP (pyright) complains that "Operator "-" not supported for types "T@Consumption" and "T@Consumption"". I've inspired myself in the typing library and SupportsAbs implementation to try and circumvent this in the following way from typing import TypeVar, Generic, Protocol from dataclasses import dataclass from abc import abstractmethod E = TypeVar('E') class SupportsSub(Generic[E], Protocol): @abstractmethod def __sub__(self, other: E) -> E: pass With this class, I can express that the difference method is well-typed in the following way T = TypeVar('T') @dataclass class Consumption(Generic[T]): first: T second: SupportsSub[T] def difference(self) -> T: return self.second - self.first but I lose the ability to say that second is actually also of type T. I've tried to define T such that it supports being subtracted with itself, but "TypeVar bound type cannot be generic". T = TypeVar('T', bound=SupportsSub["T"]) I can just exclude the generic in SupportsSub to make it type-check, but then it also allows other methods which are not well-typed T = TypeVar('T', bound=SupportsSub) def shouldnt_type(a: T, b: int) -> int: return a - b Another problem with this approach is that I can't have a Consumption[float], since float doesn't inherit from SupportsSub. Therefore, pyright will complain about the following. class Other: water: Consumption[float] I've had similar problems with pyright for example warning me that PySide.QtCore.QDate doesn't support the < operator, when in fact it does. Is it currently possible to express this with Python's typing? | Check out typing.Self. That way you can specify that the value being subtracted must be the same type as self. from typing import Protocol, TypeVar, Self, Generic from dataclasses import dataclass E = TypeVar('E') class SupportsSub(Protocol): @abstractmethod def __sub__(self, other: Self) -> Self: pass T = TypeVar('T', bound=SupportsSub) @dataclass class Consumption(Generic[T]): first: T second: T def difference(self) -> T: return self.second - self.first def shouldnt_type(a: T, b: int) -> T: return a - b # doesn't type | 3 | 2 |
76,817,600 | 2023-8-2 | https://stackoverflow.com/questions/76817600/effective-way-to-add-time-dimension-to-two-dimensional-x-y-netcdf-file-whi | I had CSV file with x and y coordinates, as well as variable values for three different time steps as follows: x, y, var_t1, var_t2, var_t3, 1, 1, 8, 8, 6 1, 2, 6, 1, 2 2, 1, 5, 3, 7 2, 2, 7, 2, 6 I have learned to create a NetCDF file with the following method: import xarray as xr xr.Dataset.from_dataframe(df.set_index(['x', 'y'])).to_netcdf('filename.nc') This results in a NetCDF with x and y as dimensions, and I get 3 different variables. My goal was to create a NetCDF with x, y and t as dimensions with a single variable. I managed to achieve this but I feel like I did it in a very complicated fashion. My solution was to play with the CSV file and make it 3 times longer, while adding a "t" column to represent time steps: x, y, t, var_t1, var_t2, var_t3, 1, 1, 0, 8, 0, 0 1, 2, 0, 6, 0, 0 2, 1, 0, 5, 0, 0 2, 2, 0, 7, 0, 0 1, 1, 1, 0, 8, 0 1, 2, 1, 0, 1, 0 2, 1, 1, 0, 3, 0 2, 2, 1, 0, 2, 0 1, 1, 2, 0, 0, 6 1, 2, 2, 0, 0, 2 2, 1, 2, 0, 0, 7 2, 2, 2, 0, 0, 6 Now when I apply import xarray as xr xr.Dataset.from_dataframe(df.set_index(['x', 'y', 't'])).to_netcdf('filename.nc') I get a NetCDF with x, y, t dimensions and a single variable for each different time (i.e. when t = 1, only var_t2 != 0). Would there be a way to achieve this in a much simpler way, in case I encounter a similar problem in the future? This was easy to do with only 3 time steps, but I would be in trouble with tens or thousands of time steps. Thank you! | Say you have the dataframe df: >> df x y var_t1 var_t2 var_t3 0 1 1 8 8 6 1 1 2 6 1 2 2 2 1 5 3 7 3 2 2 7 2 6 You can set x,y as an index, convert it to xarray, merge the variables var_t1... to a new dimension and set new_times as the coordinates of the time dimension: >> ds = df.set_index(["x", "y"]).to_xarray() >> ds <xarray.Dataset> Dimensions: (x: 2, y: 2) Coordinates: * x (x) int64 1 2 * y (y) int64 1 2 Data variables: var_t1 (x, y) int64 8 6 5 7 var_t2 (x, y) int64 8 1 3 2 var_t3 (x, y) int64 6 2 7 6 >> new_times = range(3) >> ds_result = ds.to_array(dim="time").assign_coords(time=new_times) >> ds_result <xarray.DataArray (time: 3, x: 2, y: 2)> array([[[8, 6], [5, 7]], [[8, 1], [3, 2]], [[6, 2], [7, 6]]]) Coordinates: * x (x) int64 1 2 * y (y) int64 1 2 * time (time) int64 0 1 2 | 3 | 2 |
76,817,633 | 2023-8-2 | https://stackoverflow.com/questions/76817633/python-pandas-np-where-value-from-another-column | I am trying to apply different value to a df from another column using: df['url']= np.where(df['client'] == 'xyz', "/s?k={query}&s=relevanceblender&page=%s".format(query=df['keyword']), "other") however query is replaced by all values of df['keyword'], not only the row in question. thanks for your help. | Assuming this input: df = pd.DataFrame({'client': ['abc', 'abc', 'xyz', 'xyz'], 'keyword': ['kw1', 'kw2', 'kw3', 'kw4'] }) You could use: df['url'] = np.where(df['client'] == 'xyz', df['keyword'].apply("/s?k={}&s=relevanceblender&page=%s".format), 'other') Notice how {query} was changed to {}. Or, if you cannot change the formatting string: df['url'] = np.where(df['client'] == 'xyz', df['keyword'].apply(lambda x: "/s?k={query}&s=relevanceblender&page=%s".format(query=x)), 'other') Output: client keyword url 0 abc kw1 other 1 abc kw2 other 2 xyz kw3 /s?k=kw3&s=relevanceblender&page=%s 3 xyz kw4 /s?k=kw4&s=relevanceblender&page=%s | 2 | 2 |
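Because pandas string columns support element-wise concatenation, the same thing can be written without `apply` at all. A minimal sketch using the small frame from the answer, assuming `keyword` holds strings:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'client': ['abc', 'abc', 'xyz', 'xyz'],
                   'keyword': ['kw1', 'kw2', 'kw3', 'kw4']})

# element-wise string concatenation builds the per-row URL, no apply() needed
df['url'] = np.where(df['client'] == 'xyz',
                     '/s?k=' + df['keyword'].astype(str) + '&s=relevanceblender&page=%s',
                     'other')
print(df)
```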
76,817,132 | 2023-8-2 | https://stackoverflow.com/questions/76817132/what-does-mean-parameter-offset-in-function-get-inline-bot-results | I cannot understand what the offset parameter does in this function and what kind of values it accepts. I tried using integers but there was no effect. from pyrogram import Client, filters import time app = Client( "my_account", api_id=api_id, api_hash=api_hash, ) async def main(): async with app: bot_results = await app.get_inline_bot_results( "some_bot_name", query="some_query", offset="???" ) app.run(main()) | The documentation states: offset (str, optional) – Offset of the results to be returned. That is not super helpful, but the Telegram Docs show that it is used for pagination: offset - If the user scrolls past the first len(results) results, and the next_offset field is set, the inline query should be repeated with this offset. Checking the string documentation and this GitHub issue, it seems that the offset should be the string form of an int, like "0" or "100". Make sure the returned results' length is more than 1 and set the offset to "1". | 3 | 1 |
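To actually page through results, the `next_offset` from one call is fed back as `offset` for the next one. The following is only a rough sketch: it assumes, following the Bot API description quoted above, that the object returned by `get_inline_bot_results` exposes `results` and `next_offset`; those attribute names are an assumption and are not checked against a specific Pyrogram version.

```python
async def fetch_all_results(app, bot, query):
    offset = ""                   # empty string = first page
    collected = []
    while True:
        page = await app.get_inline_bot_results(bot, query=query, offset=offset)
        collected.extend(page.results)
        offset = getattr(page, "next_offset", None)
        if not offset:            # None or "" -> no further pages
            break
    return collected
```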
76,796,808 | 2023-7-30 | https://stackoverflow.com/questions/76796808/what-is-support-in-classification-report-within-sklearn | I have written some code and the result was the report you can see below. The code is about the number of people who survived or died on the Titanic. My question is: what is "Support" in this report? precision recall f1-score support 0 0.78 0.87 0.82 154 1 0.79 0.67 0.72 114 accuracy 0.78 268 macro avg 0.79 0.77 0.77 268 weighted avg 0.78 0.78 0.78 268 I found an explanation of "Support" on the internet which says: "Support is the number of actual occurrences of the class in the dataset. It doesn't vary between models, it just diagnoses the performance evaluation process." I also didn't understand what "actual occurrences" means. I would be grateful if someone could explain these definitions to me with an example. | support is how many samples are in each class. In your case, 154 samples are in class 0, and 114 samples are in class 1. The total number of samples is 268. It uses the ground truth labels, which represent the actual class of each sample. You might be interested in seeing how these values can be manually calculated - see https://stackoverflow.com/a/76789551/21896093 | 4 | 6 |
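Since support is just the per-class count of ground-truth labels, it can be reproduced with a plain counter. A small sketch matching the numbers in the report above (the label list is constructed to have those counts, it is not the actual Titanic data):

```python
from collections import Counter

y_true = [0] * 154 + [1] * 114     # ground-truth labels, one per test sample
support = Counter(y_true)

print(support[0], support[1])      # 154 114  -> the "support" column
print(sum(support.values()))       # 268      -> the total next to accuracy and the averages
```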
76,788,727 | 2023-7-28 | https://stackoverflow.com/questions/76788727/how-can-i-change-the-debug-level-and-format-for-the-quart-i-e-hypercorn-logge | I'm trying to set the level and format for the loggers used by the Quart module the way I did it successfully for other 'foreign' loggers: by running basicConfig and implicitly setting up the root-logger or later by running logging.getLogger("urllib3.connectionpool").setLevel(logging.INFO) to get and modify an existing logger However those approaches don't work for the loggers spawned by Quart. Neither are those affected by basicConfig nor can I set the level. Output will always look like this: [2023-07-28 16:17:12 +0200] [1254610] [INFO] Running on http://0.0.0.0:5432 (CTRL + C to quit) Setting breakpoints in logging/__init__.py let the program break on log messages by hypercorn.error (so it seems to use the same module), but setting the level like this logging.getLogger("hypercorn.error").setLevel(logging.WARNING) doesn't have any effect. The doc says I should use dictConfig, so I've added dictConfig({ 'version': 1, 'loggers': { 'quart.app': {'level': 'ERROR'}, 'hypercorn.error': {'level': 'ERROR'}, }, }) .. no effect I found https://github.com/pgjones/hypercorn/issues/120, and tried logger = logging.getLogger("hypercorn.error") logger.addHandler(my_own_handler) logger.setLevel(logging.WARNING) logger.propagate = False but also without effect. What else can I try? | I had the same issue recently, and it was a real headache to find how to solve it, but here is my solution: First, for clarity, I defined a function that takes a logger as an input and that patches the logger how I want it. import logging def patch_logger(logger_: logging.Logger): logger_.handlers = [] # Clear any existing handlers logger_.addHandler(file_handler) logger_.addHandler(console_handler) logger_.propagate = False logger_.setLevel(level) return logger_ Then later when running my app, I subclassed the HypercornLogger to have it patch its own loggers: from hypercorn.config import Config from hypercorn.logging import Logger as HypercornLogger from hypercorn.asyncio import serve class CustomLogger(HypercornLogger): def __init__(self, *args, **kwargs) -> None: super().__init__(*args, **kwargs) if self.error_logger: patch_logger(self.error_logger) if self.access_logger: patch_logger(self.access_logger) app_config = Config() app_config.logger_class = CustomLogger The patch_logger function can really be anything you want to change to your logger, what I did is just an example. | 5 | 2 |
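For completeness, one way to actually run the app with that configuration is shown below. This is a minimal sketch: `app` is whatever Quart application object you already have, `patch_logger`/`CustomLogger` are as defined above, and binding to port 5432 is an assumption taken from the log line in the question.

```python
import asyncio

app_config.bind = ["0.0.0.0:5432"]    # assumption: same host/port as in the question
asyncio.run(serve(app, app_config))   # hypercorn.asyncio.serve from the imports above
```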
76,807,787 | 2023-8-1 | https://stackoverflow.com/questions/76807787/python-polars-how-to-build-a-supersession-conversion-table | The scenario is as follow: At my store I sell items that can be replaced by other items (i.e. have supersession). For example, until a certain date I may had for sale the item 'A' which was eventually replaced by a new item 'B'. These supersessions can happen successively. This means that 'A' can be replaced by 'B', which can be replaced by 'C', which can be replaced by 'D'. More than one product can be replaced by a common new product. This means that 'B' can be replaced by 'C', but item 'K' can also be replaced by 'C'. Starting from a DataFrame with the old and the new items code, my goal is to add another column to this DataFrame with the latest item code of each item. With the examples above, that means that starting from: I need to end up with: Note: this question is not duplicated with this one. That question I asked when I was trying to accomplish this in PowerBI. The question is now for Polars application. Althougnt I came up with a solution I'm really not satisfied with it, because: I loop though all items on the DataFrame (number_of_items)Β² times. As far as I'm aware it is not a good practice to loop through items in a DataFrame/Series using the for/while loop (Polars has tons of built-in methods to do this, so why can't I use one of them)? Polars documentation says that Series.set_at_idx() (which I used in my solution) is frequently an anti-pattern, as it can block optimisation (predicate pushdown, etc). So any idea of how can I perform such task with a cleaner and more performatic approach? Follows my solution: # Import Polars. import polars as pl # Create the sample DataFrame. df = pl.DataFrame( { 'Old_code': ['A', 'B', 'C', 'K'], 'New_code': ['B', 'C', 'D', 'C'] } ) # Retrieve the New_code column. new_PN_col = df.get_column('New_code') # Checks if the old code appears on the new code column. df = df.with_columns( pl.col('Old_code').is_in(new_PN_col).alias('Has_chained') ) # Breaks the DataFrame into Series. old_code_Series = df.get_column('Old_code') new_code_Series = df.get_column('New_code') has_chain = df.get_column('Has_chained') # Retrieve the DataFrame height. size = df.height # Loop through all items. for i in range(size): # Checks if the item has chained supersession. if has_chain[i]: # Retrieves the old and new item's code. old_1 = old_code_Series[i] new_1 = new_code_Series[i] # Loop through all items again. for k in range(size): # Finds where we need to replace the item code. if new_code_Series[k] == old_1: # Updates the item code. new_code_Series.scatter(k, new_1) # Concat the original DataFrame with the updated Series horizontally. df = pl.concat( [ df, new_code_Series.rename('Last_new_code').to_frame() ], how='horizontal' ).select(pl.col('*').exclude('Has_chained')) | This is more of a directed acyclic graph problem than Polars natively. Your old_code and new_code lists effectively define the edges between the vertices of such a graph: old_codes = ['A', 'B', 'C', 'K'] new_codes = ['B', 'C', 'D', 'C'] edges = {e1 : e2 for e1,e2 in zip(old_codes, new_codes)} From here, since every vertex will only have at most one edge, we can traverse this graph by iterating over every vertex without an in-edge. For each, we can mark all the vertices it's connected to until we find a vertex without an out-edge, this is the final mapping we want. 
The mapping of edges above lends itself to this quick algorithm: the vertices with no in-edge come from a set difference (keys minus values), and the vertices with no out-edge simply don't have a key in this dictionary. There can be some short-circuiting to this algorithm by checking the dictionary being built as we go for already calculated pathing. Overall:
final_mapping = {}

for v in (set(edges.keys()) - set(edges.values())):
    vertices = {v}
    while v in edges.keys() and v not in final_mapping:
        vertices.add(edges[v])
        v = edges[v]
    for v2 in vertices:
        final_mapping[v2] = final_mapping.get(v, v)
Now we can use replace as a simple Polars expression:
df = pl.DataFrame({'Old_code' : old_codes, 'New_codes' : new_codes})
df.with_columns(Last_new_code = pl.col('Old_code').replace(final_mapping))
shape: (4, 3)
┌──────────┬───────────┬───────────────┐
│ Old_code ┆ New_codes ┆ Last_new_code │
│ ---      ┆ ---       ┆ ---           │
│ str      ┆ str       ┆ str           │
╞══════════╪═══════════╪═══════════════╡
│ A        ┆ B         ┆ D             │
│ B        ┆ C         ┆ D             │
│ C        ┆ D         ┆ D             │
│ K        ┆ C         ┆ D             │
└──────────┴───────────┴───────────────┘ | 2 | 3 |
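A quick sanity check of the resolution above on the same toy data (note that the terminal code 'D' also ends up mapped to itself, which is harmless for replace):

old_codes = ['A', 'B', 'C', 'K']
new_codes = ['B', 'C', 'D', 'C']
edges = {e1: e2 for e1, e2 in zip(old_codes, new_codes)}

final_mapping = {}
for v in (set(edges.keys()) - set(edges.values())):
    vertices = {v}
    while v in edges.keys() and v not in final_mapping:
        vertices.add(edges[v])
        v = edges[v]
    for v2 in vertices:
        final_mapping[v2] = final_mapping.get(v, v)

assert all(final_mapping[code] == 'D' for code in old_codes)
print(final_mapping)  # every code in the A -> B -> C -> D chain (and K -> C -> D) resolves to 'D'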
76,781,053 | 2023-7-27 | https://stackoverflow.com/questions/76781053/fastapi-generates-incorrect-openapi-3-0-1-specification | I am currently designing a REST API with FastAPI and using the generated openapi.json specification to generate a client. The client generator I am currently trying to use is limited to OpenAPI 3.0.x. The generator is complaining about "null" being generated as a possible type for a parameter, which makes sense as that was only introduced in OpenAPI 3.1.0 The offending part of the specification: "name": "someParameter", "in": "query", "required": false, "schema": { "anyOf": [ { "type": "array", "items": { "type": "string" }, "minItems": 3, "maxItems": 50 }, { "type": "null" } ], This is being generated from the following endpoint: @router.get("/{itemId}") async def readItem(someParameter: Annotated[Optional[list[str]], Query(title="...", description="...", min_length=3, max_length=50)] = None) I am customizing FastAPI to use the OpenAPI 3.0.1 spec like this: def customOpenAPI(): openapiSchema = get_openapi( title = app.title, openapi_version = "3.0.1", version = app.version, summary = app.summary, description = app.description, routes = app.routes ) app.openapi_schema = openapiSchema return app.openapi_schema app.openapi = customOpenAPI A possible solution would be to restructure the endpoint like this: @router.get("/{itemId}") async def readItem(someParameter: Annotated(list[str], Query(title="...", description="...", min_length=3, max_length=50, nullable=True)] = None) But then I would be losing the Optional typehint which I would like to keep for code readability purposes. The same problem also applies to Fields in my models: class SomeModel(BaseModel) someField: Optional[str] = None Which I would have to reformat to class SomeModel(BaseModel) someField: str = Field(nullable=True, default=None) Is there any way to get FastAPI to generate the correct OpenAPI 3.0.1 specification while keeping the Optional typehint in my code? | Looks like they don't support it in the code base as described here The version string of OpenAPI. FastAPI will generate OpenAPI version 3.1.0, and will output that as the OpenAPI version. But some tools, even though they might be compatible with OpenAPI 3.1.0, might not recognize it as a valid. So you could override this value to trick those tools into using the generated OpenAPI. Have in mind that this is a hack. But if you avoid using features added in OpenAPI 3.1.0, it might work for your use case. This is not passed as a parameter to the FastAPI class to avoid giving the false idea that FastAPI would generate a different OpenAPI schema. It is only available as an attribute. Example from fastapi import FastAPI app = FastAPI() app.openapi_version = "3.0.2" The only way to configure it for version 3.0.1 is to use the get_openapi(...) function post-process all the fields and then override it in the app. I am suffering from the same problem. | 9 | 4 |
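If downgrading the version string alone is not enough (the schema body still contains 3.1-style anyOf/null constructs), one possible post-processing workaround is to rewrite those constructs into the 3.0 nullable form before returning the schema. This is a sketch, not an official FastAPI feature, and it only handles the simple Optional[X] case; it could be called on openapiSchema inside the customOpenAPI() function from the question:

def downgrade_nullable(schema):
    # Recursively turn `anyOf: [X, {"type": "null"}]` into `X` plus `nullable: true` (OpenAPI 3.0 style).
    if isinstance(schema, dict):
        any_of = schema.get("anyOf")
        if isinstance(any_of, list) and {"type": "null"} in any_of:
            remaining = [s for s in any_of if s != {"type": "null"}]
            if len(remaining) == 1:
                schema.pop("anyOf")
                schema.update(remaining[0])
                schema["nullable"] = True
        for value in list(schema.values()):
            downgrade_nullable(value)
    elif isinstance(schema, list):
        for item in schema:
            downgrade_nullable(item)
    return schema

# Inside customOpenAPI(), before caching the result:
#   downgrade_nullable(openapiSchema)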
76,783,239 | 2023-7-27 | https://stackoverflow.com/questions/76783239/is-it-safe-to-use-python-str-format-method-with-user-submitted-templates-in-serv | I am working on project where users must be able to submit templates containing placeholders to be later rendered to generate dynamic content. For example, a user might submit a template like: "${item.price} - {item.description} / {item.release_date}" that would be after formatted with the real values. Using a template engine (as Django or Jinja) for this purpose would require a lot of validation and sanitizing to prevent SSTI and XSS, so I was wondering if I could use the python str.format method to create a more limited and safer alternative, since this method cannot execute python code directly (I believe). My question is: Is using string.format safe enough when dealing with user-submitted templates, or it would still be vulnerable to injection attacks? If it is not, is there any alternative to implement a "safe template rendering" in python? | No, it is not safe in general to use str.format with user-provided format strings. Format strings are capable to executing a limited form of Python code. This limited code is just powerful enough to pose significant denial of service (DOS) and data breach risks. The main factors that make using untrusted format strings risky are: There's no telling how large the formatted string will be. Even short format strings can lead to massive expansions, posing DOS risks. Python's string formatting syntax is actually quite rich, and notably supports arbitrary indexing and attribute access on the format arguments. This means that you have to worry not only about the objects that you are passing to str.format, but also about every object that is indirectly accessible from those objects via indexing and attribute lookups. It turns out that the space of indirectly accessible objects is very large. Performing indexing or attribute access operations can trigger a larger number of different special methods. These triggered methods can have unintended side effects or raise unexpected exceptions. Data Breach Risks Off the top of my head, I came up with this toy example of how str.format with a carefully constructed format string can easily leak sensitive information from your server. This attack only relies on the attacker knowing (or guessing) the of types of the arguments being supplied to str.format. Running this code will print all environment variables defined on the server: import os class Foo: def foo(self): pass user_format = "{f.foo.__globals__[os].environ}" print(user_format.format(f=Foo())) Specific to Django, this format string will print your entire settings module, complete with your DB passwords, signing keys, and all sorts of other bits of exploitable information: user_format = "{f.foo.__globals__[sys].modules[myapp.settings].__dict__}" Denial of Service Risks Simple format strings like "{:10000000000}" can lead to massive memory consumption. This can grind things to a halt due to constant page swapping, or even crash the server process with a MemoryError exception or OOM kill. Additionally, due to the sheer number of different objects that a format string can interact with, and the variety of special methods that can be called on those objects, there's no telling what sort of other weird and unexpected exceptions could be raised or what resource intensive operations that may be performed. 
You can imagine a scenario where an ORM manager class has a property that triggers a database transaction when accessed. Such a property could be repeatedly accessed in the format string to stall the server with database interactions. | 2 | 4 |
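When user-supplied templates only ever need flat placeholder substitution, string.Template sidesteps both risk factors above, because its syntax allows no attribute access, no indexing, and no format-spec expansion — a minimal sketch:

from string import Template

user_template = Template("$price - $description / $release_date")

# safe_substitute() leaves unknown placeholders untouched instead of raising,
# and there is no path from the template to __globals__, os.environ, etc.
print(user_template.safe_substitute(
    price="9.99",
    description="A widget",
    release_date="2023-08-01",
))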
76,771,858 | 2023-7-26 | https://stackoverflow.com/questions/76771858/ruff-does-not-autofix-line-too-long-violation | I have a python project and I am configuring latest version of ruff for that project for linting and formating purpose. I have the below settings in my pyproject.toml file: [tool.ruff] select = ["E", "F", "W", "Q", "I"] ignore = ["E203"] # Allow autofix for all enabled rules (when `--fix`) is provided. fixable = ["ALL"] unfixable = [] # restrict Line length to 99 line-length = 99 The ruff check command with autofix feature (--fix) of ruff identifies that the lines are long with E501 errors, but it does not format that code to wrap to next line to maintain the line-length restriction. Is there something I need to enable or do to ensure that ruff fixes this? Or is this not possible in ruff currently? Please help. I tried going through the documentation to find anything, but I am clueless what to do here. | It seems like Ruff has released Ruff Python Formatter as part of v0.0.289 and is currently in alpha state - https://github.com/astral-sh/ruff/blob/main/crates/ruff_python_formatter/README.md We are currently using v0.0.280 which does not have this feature so we used a combination of Black and Ruff as per our project requirements. | 16 | 1 |
76,789,641 | 2023-7-28 | https://stackoverflow.com/questions/76789641/dash-multi-page-app-using-dbc-navigation-bar | I'm trying to replicate "multi_page_example1" from https://github.com/AnnMarieW/dash-multi-page-app-demos/tree/main. This uses a drop-down menu to navigate to different pages. However, I want to adjust the navbar options to be the standard links as in the first example here: https://dash-bootstrap-components.opensource.faculty.ai/docs/components/navbar/ A simple example is below: Folder structure: - app.py - app_pages |-- home.py |-- data_upload.py |-- __init__.py home.py: import dash from dash import html dash.register_page(__name__, path = '/') layout = html.Div(children=[ html.H1(children='This is our Home page') ]) data_upload.py: import dash from dash import html dash.register_page(__name__) layout = html.Div(children=[ html.H1(children='This is our upload page') ]) app.py: import dash_bootstrap_components as dbc import dash app = dash.Dash(__name__, pages_folder = "app_pages", use_pages = True, external_stylesheets=[dbc.themes.BOOTSTRAP]) navbar = dbc.NavbarSimple( children=[ dbc.NavItem(dbc.NavLink("Home", href="/home")), dbc.NavItem(dbc.NavLink("Data upload", href="/data_upload")), ], brand="Multipage Dash App", color="dark", dark=True, className="mb2", ) app.layout = dbc.Container( [navbar, dash.page_container], fluid = True) if __name__ == "__main__": app.run_server(debug=False) Problems: I have to run the app with "debug=False" because the server won't launch Dash otherwise. When it launches I can see the basic web app with navbar links. However, clicking between pages generates a "404 - Page not found" message. Oddly, the home page displays the normal message initially, but the 404 after clicking between links. Where am I going wrong? This is my first time working with bootstrap components and Dash multi-page approaches. I'm hoping to re-configure my current tabs-only Dash web app to a multi-page app with tab layouts in individual pages. | Here's a simple single app.py file example demonstrating a multi-page Dash web app For example, using the code you provide and combining it into a single file: Note: You can of course extend this approach with >1 files, as you wish, so long as ensuring correct modularization & importing (e.g., of the dash.Dash app object, separate page layouts, callbacks, etc.). Below, the layouts for the "Home" and "Data upload" pages are defined within the same file along with a callback that updates the 'page-content' container based on the current URL path via dcc.Location. 
from dash import Dash, Input, Output from dash import html, dcc import dash_bootstrap_components as dbc home_layout = html.Div(children=[html.H1(children="This is our Home page")]) data_upload_layout = html.Div( children=[html.H1(children="This is our upload page")] ) app = Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP]) navbar = dbc.NavbarSimple( children=[ dbc.NavItem(dbc.NavLink("Home", href="/")), dbc.NavItem(dbc.NavLink("Data upload", href="/data_upload")), ], brand="Multipage Dash App", color="dark", dark=True, className="mb-2", ) app.layout = html.Div( [ dcc.Location(id="url", refresh=False), navbar, dbc.Container(id="page-content", className="mb-4", fluid=True), ] ) @app.callback(Output("page-content", "children"), Input("url", "pathname")) def display_page(pathname): if pathname == "/": return home_layout elif pathname == "/data_upload": return data_upload_layout else: return dbc.Jumbotron( [ html.H1("404: Not found", className="text-danger"), html.Hr(), html.P(f"The pathname {pathname} was not recognized..."), ] ) if __name__ == "__main__": app.run_server(debug=True) β produces this app behavior: | 2 | 3 |
76,777,287 | 2023-7-27 | https://stackoverflow.com/questions/76777287/how-to-provide-two-different-ways-to-instantiate | Let's say I have a class AmbiguousClass which has two attributes, a and b (let's say they are both int, but it could be more general). They are related by some invertible equation, so that I can calculate a from b and reciprocally. I want to give the user the possibility to instantiate an AmbiguousClass by providing either a or b, depending on what is easier for them. Note that both ways to instantiate the variable can have the same signature, so this is not the typical polymorphism example. Providing none or both should result in an error/warning. My best guess was something like this: class AmbiguousClass(): def __init__(self, a=None, b=None): #if no parameter is provided, we raise an error (can not instantiate) if a is None and b is None: raise SomeCustomError() #if both a and b are provided, this seems redundant, a warning is raised elif a is not None and b is not None: warnings.warn(f"are you sure that you need to specify both a and b?") self.a = a self.b = b # if a is provided, we calculate b from it elif a is not None: self.a = a self.b = calculateB(self.a) # if b is provided: elif b is not None: self.b =b self.a = calculateA(self.b) And then, the user would have to instantiate the class by specifying which keyword he is providing: var1, var2 = AmbiguousClass(a=3), AmbiguousClass(b=6) However, this feels a bit clunky, especially if the user decides to provide an argument without providing a keyword (it will by default be a, but that is not clear from the user's point of view and might lead to unexpected behaviors). How can I do this within the __init__ function in a way that is clear and will avoid unexpected behavior? | I would enforce using custom constructors that only take a single parameter and clearly indicate which parameter is provided, thus avoiding ambiguity. class AmbiguousClass: def __init__(self, a, b, _is_from_cls=False): if not _is_from_cls: raise TypeError( "Cannot instantiate AmbiguousClass directly." " Use classmethods 'from_a' or 'from_b' instead." ) self.a = a self.b = b @classmethod def from_a(cls, a): b = calculateB(a) return cls(a=a, b=b, _is_from_cls=True) @classmethod def from_b(cls, b): a = calculateA(b) return cls(a=a, b=b, _is_from_cls=True) var1 = AmbiguousClass.from_a(a=3) var2 = AmbiguousClass.from_b(b=6) var3 = AmbiguousClass(3, 6) # will raise a TypeError var4 = AmbiguousClass(3, 6, True) # will not raise a TypeError but the user explicitly overwrites the default parameter. | 2 | 4 |
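The snippets above assume calculateA and calculateB already exist; together with the AmbiguousClass from the answer, these toy stand-ins (the b = 2*a relation is purely illustrative) make the example runnable end to end:

def calculateB(a):
    return 2 * a   # placeholder relation, not from the original question

def calculateA(b):
    return b / 2

var1 = AmbiguousClass.from_a(a=3)
var2 = AmbiguousClass.from_b(b=6)
print(var1.a, var1.b)   # 3 6
print(var2.a, var2.b)   # 3.0 6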
76,798,643 | 2023-7-30 | https://stackoverflow.com/questions/76798643/quantizing-normally-distributed-floats-in-python-and-numpy | Let the values in the array A be sampled from a Gaussian distribution. I want to replace every value in A with one of n_R "representatives" in R so that the total quantization error is minimized. Here is NumPy code that does linear quantization: n_A, n_R = 1_000_000, 256 mu, sig = 500, 250 A = np.random.normal(mu, sig, size = n_A) lo, hi = np.min(A), np.max(A) R = np.linspace(lo, hi, n_R) I = np.round((A - lo) * (n_R - 1) / (hi - lo)).astype(np.uint32) L = np.mean(np.abs(A - R[I])) print('Linear loss:', L) -> Linspace loss: 2.3303939600700603 While this works, the quantization error is large. Is there a smarter way to do it? I'm thinking that one could take advantage of A being normally distributed or perhaps use an iterative process that minimizes the "loss" function. Update While researching this question, I found a related question about "weighting" the quantization. Adapting their method sometimes gives better quantization results: from scipy.stats import norm dist = norm(loc = mu, scale = sig) bounds = dist.cdf([mu - 3*sig, mu + 3*sig]) pp = np.linspace(*bounds, n_R) R = dist.ppf(pp) # Find closest matches lhits = np.clip(np.searchsorted(R, A, 'left'), 0, n_R - 1) rhits = np.clip(np.searchsorted(R, A, 'right') - 1, 0, n_R - 1) ldiff = R[lhits] - A rdiff = A - R[rhits] I = lhits idx = np.where(rdiff < ldiff)[0] I[idx] = rhits[idx] L = np.mean(np.abs(A - R[I])) print('Gaussian loss:', L) -> Gaussian loss: 1.6521974945326285 K-means clustering might be better but seem to be too slow to be practical on large arrays. | K-means K-means clustering might be better but seem to be too slow to be practical on large arrays. For the 1D clustering case, there are algorithms faster than K-means. See https://stats.stackexchange.com/questions/40454/determine-different-clusters-of-1d-data-from-database I picked one of those algorithms, Jenks Natural Breaks, and ran it on a random sub-sample of your dataset: A_samp = np.random.choice(A, size=10000) breaks = np.array(jenkspy.jenks_breaks(A_samp, n_classes=n_R)) R = (breaks[:-1] + breaks[1:]) / 2 This is pretty fast, and gets a quantization loss for the full dataset of about 1.28. To visualize what each of these methods are doing, I plotted the cdf of the breaks that each of them come up with against the index within R of the break. Gaussian is a straight line, by definition. This means that it has an equal number of breaks at every percentile of the distribution. The linear method spends very little of its breaks in the middle of the distribution, and uses most of them at the tails. Jenks finds a compromise between the two of them. Automatically searching for lower loss Looking at the chart above, I had an idea: all of these methods of choosing breaks are sigmoid-shaped curves of various sorts when plotted in the quantile domain. (Gaussian sort of fits if you think of it as a really stretched out sigmoid.) I wrote a function which parameterized each of those curves using a single variable, strength, which is how fast the sigmoid should curve. Once I had that, I used scipy.optimize.minimize to automatically search for a curve which minimized the loss. It turns out that if you let Scipy optimize this, it picks a curve strength really close to Jenks, and the curve it finds is slightly worse than the Jenks one, with a loss of about 1.33. You can see the notebook with this failed approach here. 
Quantizing with 2^16 floats In the case where you need to create 2^16 different representatives, it's computationally infeasible to use Jenks. However, you can do something that's pretty close: Jenks with a small number of classes plus linear interpolation. Here's the code for this: import itertools def pairwise(iterable): "s -> (s0, s1), (s1, s2), (s2, s3), ..." a, b = itertools.tee(iterable) next(b, None) return zip(a, b) def linspace_jenks(A, n_R, jenks_classes, dist_lo, dist_hi): assert n_R % jenks_classes == 0, "jenks_classes must be divisor of n_R" simplify_factor = n_R // jenks_classes assert jenks_classes ** 2 <= len(A), "Need more data to estimate" breaks = jenkspy.jenks_breaks(A, n_classes=jenks_classes) # Adjust lowest and highest break to match highest/lowest observed value breaks[0] = dist_lo breaks[-1] = dist_hi linspace_classes = [] for lo, hi in pairwise(breaks): linspace_classes.append(np.linspace(lo, hi, simplify_factor, endpoint=False)) linspace_classes = np.hstack(linspace_classes) assert len(linspace_classes) == n_R return linspace_classes Example call: A_samp = np.random.choice(A, size = 2**16) jenks_R = linspace_jenks(A_samp, n_R, 128, np.min(A), np.max(A)) How does the performance compare to the linear method? On my system, I get a loss of 0.009421 for linear with n_R=2^16. The following graph shows the losses that the linspace_jenks method gets for each value of jenks_classes. With just 32 Jenks classes, and filling the rest in with linear interpolation, the loss goes down to 0.005031. | 2 | 3 |
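A small helper for scoring any grid of representatives against A on equal footing (it assumes the A from the question and a candidate R such as jenks_R or the linear grid):

import numpy as np

def quantization_loss(A, R):
    # Mean absolute error when each value in A is mapped to its nearest representative in R.
    R = np.sort(np.asarray(R))
    idx = np.clip(np.searchsorted(R, A), 1, len(R) - 1)
    left, right = R[idx - 1], R[idx]
    nearest = np.where(A - left < right - A, left, right)
    return np.mean(np.abs(A - nearest))

# e.g. quantization_loss(A, jenks_R)
#      quantization_loss(A, np.linspace(A.min(), A.max(), n_R))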
76,780,411 | 2023-7-27 | https://stackoverflow.com/questions/76780411/use-data-matrix-as-a-fiducial-to-obtain-angle-of-rotation | I have a bunch of images such as the one above. They each contain a data matrix, but do not guarantee that it is oriented to an axis. Nevertheless, I can read these matrices with libdmtx pretty reliably regardless of their rotation. However, I also need to rotate the image so that the label is oriented right-side-up. My thought process is that I need to get the angle of rotation of the data matrix so that I can rotate the image with PIL to orient it correctly. pylibdmtx.decode returns the data that the matrix contains, as well as a rectangle which I originally thought was the bounding box of the data matrix. To test this, I ran the following code with the image above: from PIL import Image from pylibdmtx.pylibdmtx import decode def segment_qr_code(image: Image.Image): data = decode(image)[0] print(data.rect) if __name__ == "__main__": segment_qr_code(Image.open('<path to image>')) Unfortunately, this code returned Rect(left=208, top=112, width=94, height=-9). Because the height is negative, I don't think it is the bounding box to the data matrix, and if it is, I don't know how to use it to get the angle of rotation. My question is, what is the best way to obtain the angle of rotation of the data matrix? I originally thought that I could crop the image with the bounding box to get a segmented image of just the data matrix. Then I could use image thresholding or contouring to get an angle of rotation. However, I'm not sure how to get the correct bounding box, and even if I did I don't know how to use thresholding. I would also prefer to not use thresholding because it isn't always accurate. The data matrix always has a solid border on the bottom and left sides, so I think it may be possible to use it as a fiducial to align the image, however I was unable to find any libraries that were able to return the angle of rotation of the data matrix. I am open to any suggestions. Thanks in advance. | Thank you to @flakes for the suggestion. 
Combining code from the PR and issue, I created the following solution: from pylibdmtx.pylibdmtx import _region, _decoder, _image, _pixel_data, _decoded_matrix_region from pylibdmtx.wrapper import c_ubyte_p, DmtxPackOrder, DmtxVector2, dmtxMatrix3VMultiplyBy, DmtxUndefined from ctypes import cast, string_at from collections import namedtuple import numpy _pack_order = { 8: DmtxPackOrder.DmtxPack8bppK, 16: DmtxPackOrder.DmtxPack16bppRGB, 24: DmtxPackOrder.DmtxPack24bppRGB, 32: DmtxPackOrder.DmtxPack32bppRGBX, } Decoded = namedtuple('Decoded', 'data rect') def decode_with_region(image): results = [] pixels, width, height, bpp = _pixel_data(image) with _image(cast(pixels, c_ubyte_p), width, height, _pack_order[bpp]) as img: with _decoder(img, 1) as decoder: while True: with _region(decoder, None) as region: if not region: break else: res = _decode_region(decoder, region) if res: open_cv_image = numpy.array(image) # Convert RGB to BGR open_cv_image = open_cv_image[:, :, ::-1].copy() height, width, _ = open_cv_image.shape topLeft = (res.rect['01']['x'], height - res.rect['01']['y']) topRight = (res.rect['11']['x'], height - res.rect['11']['y']) bottomRight = (res.rect['10']['x'], height - res.rect['10']['y']) bottomLeft = (res.rect['00']['x'], height - res.rect['00']['y']) results.append(Decoded(res.data, (topLeft, topRight, bottomRight, bottomLeft))) return results def _decode_region(decoder, region): with _decoded_matrix_region(decoder, region, DmtxUndefined) as msg: if msg: vector00 = DmtxVector2() vector11 = DmtxVector2(1.0, 1.0) vector10 = DmtxVector2(1.0, 0.0) vector01 = DmtxVector2(0.0, 1.0) dmtxMatrix3VMultiplyBy(vector00, region.contents.fit2raw) dmtxMatrix3VMultiplyBy(vector11, region.contents.fit2raw) dmtxMatrix3VMultiplyBy(vector01, region.contents.fit2raw) dmtxMatrix3VMultiplyBy(vector10, region.contents.fit2raw) return Decoded( string_at(msg.contents.output), { '00': { 'x': int((vector00.X) + 0.5), 'y': int((vector00.Y) + 0.5) }, '01': { 'x': int((vector01.X) + 0.5), 'y': int((vector01.Y) + 0.5) }, '10': { 'x': int((vector10.X) + 0.5), 'y': int((vector10.Y) + 0.5) }, '11': { 'x': int((vector11.X) + 0.5), 'y': int((vector11.Y) + 0.5) } } ) else: return None To decode an image, use decode_with_region() instead of pylibdmtx's decode(). It outputs a dictionary of coordinates, which I can plot on an image and get the following output: I can then use these coordinates to obtain an angle of rotation: def get_data_from_matrix(image): decoded = decode_with_region(image)[0] topLeft, topRight = decoded.rect[2], decoded.rect[3] rotation = -math.atan2(topLeft[1] - topRight[1], topLeft[0] - topRight[0]) * (180 / math.pi) image = image.rotate(rotation, expand=True) | 4 | 3 |
76,796,990 | 2023-7-30 | https://stackoverflow.com/questions/76796990/module-not-found-error-in-virtual-environment | I'm a newbie with Python and been trying to install modules using pip unsuccessfully in my small project. Following advice online, I've created my own virtual environment and imported my first module cowsay fine. I can definitely see the module being installed in my project: BUT, when attempting to run the file in my terminal, I keep getting a ModuleNotFoundError. (env) sr@python-virtual-env >> pip install cowsay Collecting cowsay Using cached cowsay-5.0-py2.py3-none-any.whl Installing collected packages: cowsay Successfully installed cowsay-5.0 (env) sr@python-virtual-env >> python say.py John Traceback (most recent call last): File "/Users/sr/Sites/python-virtual-env/say.py", line 1, in <module> import cowsay ModuleNotFoundError: No module named 'cowsay' What am I missing here? Thanks in advance! | As far as the output of the command which python pip is python: aliased to /usr/bin/python3 /Users/sr/Sites/python-virtual-env/env/bin/pip we can say that you are running the python not from your environment. So it cannot see by default any package installed in this env. Remove the alias python (in bash, the command unalias python should be enough for the current session) or print the command with the leading \ to ignore aliases and look for the python in $PATH, like this $ \python say.py John Note \ at the beginning of this command. But if you have declared python as a shell function, this function may be executed (hardly so, but what if some shell behaves this way?). So may be the best way to resolve the problem is to run env python, like this: $ env python say.py John Important update Having thought somewhat longer about this problem, I can allow myself a categorical answer: env python ... is the only way to run python directly from PATH. Here's the model to see why I think so. Let's say we run bash on a linux machine with PATH='/usr/bin:/bin' and the only python /usr/bin/python: $ bash --version | head -1 GNU bash, version 4.4.20(1)-release (x86_64-pc-linux-gnu) $ export PATH='/usr/bin:/bin' $ which -a python /usr/bin/python Making sure we don't have any aliases or functions named python: $ type -a python python is /usr/bin/python Now we define an alias and a function with name python: $ alias python='echo Running python as an alias; /usr/bin/python' $ function python(){ echo Running python as a function; python "$@"; } Let's see what is the answer of which and type commands now: $ which -a python /usr/bin/python $ type -a python python is aliased to `echo Running python as an alias; /usr/bin/python' python is a function python () { echo Running python as a function; echo Running python as an alias; /usr/bin/python "$@" } python is /usr/bin/python And here we go: $ python -c 'print("Hello world")' Running python as an alias Hello world $ \python -c 'print("Hello world")' Running python as a function Running python as an alias Hello world $ env python -c 'print("Hello world")' Hello world As we can see, an alias took precedence over a function in this case, and only env python helped to reach the PATH directly, which is most desirable when running python from a local environment. | 5 | 3 |
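A quick way to confirm which interpreter is actually executing the script (useful before hunting for aliases) is to print sys.executable at the top of say.py; with the venv active and actually used, the path should point inside .../python-virtual-env/env/:

import sys

print(sys.executable)   # path of the interpreter that is really running this script
print(sys.prefix)       # points at the venv directory when the venv's python is used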
76,788,010 | 2023-7-28 | https://stackoverflow.com/questions/76788010/twitter-api-v2-follows-lookup | I am trying to retrieve a list of who a Twitter account follows using Python. After realizing that the free tier API access did not provide this endpoint I upgraded my developer account to the basic plan (for $100 a month) as it clearly states that once signed up, you can retrieve an accounts followers or following. I have created a script that should retrieve the followers of an account based on a user id - import requests def get_followers(user_id, bearer_token): url = f'https://api.twitter.com/2/users/{user_id}/followers' headers = {'Authorization': f'Bearer {bearer_token}'} all_followers = [] while True: response = requests.get(url, headers=headers) if response.status_code == 200: result_data = response.json() all_followers.extend(result_data['data']) if 'next_token' in result_data['meta']: next_token = result_data['meta']['next_token'] url = f'https://api.twitter.com/2/users/{user_id}/followers?pagination_token={next_token}' else: break else: print(f"Failed to get followers for user ID '{user_id}' with status code: {response.status_code}") print(response.json()) return None return all_followers However I am getting the following (quite common it would seem) error response - { "client_id":"my-id", "detail":"When authenticating requests to the Twitter API v2 endpoints, you must use keys and tokens from a Twitter developer App that is attached to a Project. You can create a project via the developer portal.", "registration_url":"https://developer.twitter.com/en/docs/projects/overview", "title":"Client Forbidden", "required_enrollment":"Appropriate Level of API Access", "reason":"client-not-enrolled", "type":"https://api.twitter.com/2/problems/client-forbidden" } I have made sure that my application is located within a project that has the V2 ACCESS tag associated to it. I also tried using Tweepy however was met with the same error response. Also on reading the specific page in the Twitter docs, the quick start guide AND the API explorer buttons both leads to broken links! | My original post here provided the code (now removed) that I wrote for the question Tweepy get followers list on April 18, 2023. After your comment, I did some more research into the error message: tweepy.errors.Forbidden: 403 Forbidden When authenticating requests to the Twitter API v2 endpoints, you must use keys and tokens from a Twitter developer App that is attached to a Project. You can create a project via the developer portal. I found that the Twitter API capabilities changed soon after I wrote that code back in April 2023. This change removed querying access to obtain the followers for an individual user or for a list. The image below is from Twitter's Developer Changelog, which shows this change. Here is a GitHub issue on the error in question. It's interesting that the Twitter Account used by the Twitter's API Developers to post changelog updates is now non-existent on Twitter. I also looked at the Twitter Access levels. Earlier this year I was using the Essential Account to query for followers. The X Access Levels are, which are more restrictive as shown in the image below. Based on these new access levels and the changelog post on June 26, 2023, it seems that you need to have an Enterprise account to obtain a list of followers. | 2 | 4 |
76,799,021 | 2023-7-30 | https://stackoverflow.com/questions/76799021/unable-to-use-py-binary-target-as-executable-in-a-custom-rule | I have a py_binary executable target, and want to use this from a custom rule. I am able to get this to work, but only by duplicating the dependencies of my py_binary target with my custom rule. Is there any way to avoid this duplication and automatically include the dependencies of the py_binary? I have simplified my problem down to the following (reproduce by running the //foo:main target) greet/BUILD.bazel: py_binary( name = "greet", srcs = ["greet.py"], deps = ["@rules_python//python/runfiles"], visibility = ["//visibility:public"], ) greet/greet.py: from rules_python.python.runfiles import runfiles print("hello world") foo/BUILD.bazel: load("//bazel:defs.bzl", "foo_binary") foo_binary( name = "main", ) bazel/BUILD.bazel: (empty) bazel/defs.bzl: def _foo_binary(ctx): shell_script = ctx.actions.declare_file("run.sh") ctx.actions.write( output = shell_script, content = """#!/bin/bash $0.runfiles/svkj/{runtime} """.format(runtime=ctx.executable._greet.short_path), is_executable = True, ) return [ DefaultInfo( executable = shell_script, runfiles = ctx.runfiles( files = ctx.files._greet + ctx.files.deps + ctx.files._python, collect_data = True ), ), ] foo_binary = rule( implementation = _foo_binary, attrs = { "deps": attr.label_list( allow_files=True, ), "_greet": attr.label( default = "//greet:greet", cfg = "exec", executable = True, ), # TODO - Why is it necessary to add this explicitly? "_python": attr.label_list( allow_files=True, default = ["@python3_9//:python3", "@rules_python//python/runfiles"], ), }, executable = True, ) bazel/BUILD.bazel: WORKSPACE: workspace(name = "svkj") load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive") http_archive( name = "rules_python", sha256 = "5fa3c738d33acca3b97622a13a741129f67ef43f5fdfcec63b29374cc0574c29", strip_prefix = "rules_python-0.9.0", url = "https://github.com/bazelbuild/rules_python/archive/refs/tags/0.9.0.tar.gz", ) load("@rules_python//python:repositories.bzl", "python_register_toolchains") python_register_toolchains( name = "python3_9", python_version = "3.9", ) load("@python3_9//:defs.bzl", "interpreter") | You need to correctly collect the runfiles of the _greet binary. Instead of just using the _greet binary itself, try the following to include its dependencies (and data dependencies) aswell: DefaultInfo( executable = shell_script, runfiles = ctx._greet[DefaultInfo].default_runfiles ) See also: https://bazel.build/extending/rules#runfiles | 2 | 4 |
76,774,415 | 2023-7-26 | https://stackoverflow.com/questions/76774415/vectorized-sum-and-product-with-list-of-pandas-data-frames | I have a list of Data Frames, each corresponding to a different time period t from 0 to N. Each data frame has multiple types, I need to preform the calculation below for each type in the data frame. An example data set would be as follows, I made each df in the list the same values for simplicity but the calculation would remain the same. d = {'type': ['a', 'b', 'c'], 'x': [1, 2, 3], 'y':[3,4,5]} df = pd.DataFrame(data=d) l = [] window = 20 [l.append(df.copy(deep=True)) for i in range(window)] I need to compute a vectored sum $$\sum$$ and product $$\prod$$ using the above list of dataframes for each type (a, b, c) in an efficient manner. e.g. for each calculation below I need to filter by a single type df[df['type'] == 'a'] for every df in the list. If I use a df.groupby('type') on every df in the list it could be very slow using a larger data set. The same goes if I use nested for loops for the sum and the product and filtering by type in each iteration of the for loop. How can I compute this sum product in an efficient manner ? Update One possible way as suggested in the comments is below, however this will start to be very inefficient if I use a larger dataset or window: import pandas as pd import math d = {'type': ['a', 'b', 'c'], 'x': [1, 2, 3], 'y':[3, 4, 5]} df = pd.DataFrame(data=d) l = [] window = 20 for i in range(window): df['window'] = i l.append(df.copy(deep=True)) df = pd.concat(l) sums = {} types = df['type'].unique() for s in types: sums[s] = 0 tdf = df[df['type'] == s] for i in range(window): sums[s] += tdf[tdf['window'] == i]['x'].values[0] * math.prod(2 - tdf[tdf['window'] == i+1]['x'].values[0] for i in range(0, window-1)) * tdf[tdf['window'] == i]['y'].values[0] | Assuming I've understood your intent correctly, you could try something along these lines. sums = {} x_col_idx = df.columns.get_loc('x') for s, tdf in df.groupby('type'): sums[s] = np.dot(tdf['x'], tdf['y']) * np.prod(2 - tdf['x'].iloc[1:]) This gets the same results on your example and a few other examples I tried as the reference code, and is much faster. It also mostly stays within Pandas. EDIT: If you're willing to do the whole computation in Numpy, you can get a 3x speedup. (More, if you have a large number of groups.) However, this comes at the cost of the code being much harder to read. def run(): # Convert to numpy type_ = df['type'].values x = df['x'].values y = df['y'].values uniques, codes = np.unique(type_, return_inverse=True) # Set up intermediate arrays dot_prod = np.zeros(len(uniques)) per_type_prod = np.ones(len(uniques) + 1) # Compute dot product of x and y within each group prod = x * y np.add.at(dot_prod, codes, prod) # Compute (2 - x).prod() within each group, dropping first sample per_type_prod_before_multiply = 2 - x # Drop first instance of each type for i in range(len(uniques)): codes[(codes == i).argmax()] = -1 np.multiply.at(per_type_prod, codes, per_type_prod_before_multiply) # Last element is garbage, result of multiplying thrown away first element. per_type_prod = per_type_prod[:-1] # Convert to dictionary sums = {k: v for k, v in zip(uniques, per_type_prod * dot_prod)} return sums This avoids df.groupby() by having an array for each of the N groups, and choosing which element to add into by the codes array. | 3 | 2 |
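Running the accepted answer's groupby version on the toy data from the question gives a quick correctness check (the expected numbers follow directly from x and y being constant within each type):

import numpy as np
import pandas as pd

window = 20
df = pd.concat(
    [pd.DataFrame({'type': ['a', 'b', 'c'], 'x': [1, 2, 3], 'y': [3, 4, 5], 'window': i}) for i in range(window)]
)

sums = {}
for s, tdf in df.groupby('type'):
    sums[s] = np.dot(tdf['x'], tdf['y']) * np.prod(2 - tdf['x'].iloc[1:])

print(sums)   # expected: {'a': 60, 'b': 0, 'c': -300}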
76,815,232 | 2023-8-1 | https://stackoverflow.com/questions/76815232/difference-between-pandas-na-and-nan-for-numeric-columns | I have a data frame column as float64 full of NaN values, If I cast it again to float64 they got substituted for <NA> values which are not the same. I know that the <NA> values are pd.NA, while NaN values are np.nan , so they are different things. So why casting an already float64 column to float64 changed NaN to <Na> ? Here's an example: df=pd.DataFrame({'a':[1.0,2.0]}) print(df.dtypes) #output is: float64 df['a'] = np.nan print(df.dtypes) # output is float64 print(df) a 0 NaN 1 NaN #Now, lets cast that float64 to float 64 df3['a']=df3['a'].astype(pd.Float64DType()) print(df3.dtypes) #output is Float64, notice it's uppercase F this time, previously it was lowercase print(df3) a 0 <NA> 1 <NA> it seems float64 and Float64 are two different things. And NaN (np.nan) is the null value for float64 while <NA> (pd.NA) is the null for Float64 Is this correct? And if so, what's under the hoods? | Yes, you are correct. float64 and Float64 are two different data types in pandas. The difference is that Float64 is an extension type that can hold missing values using a special sentinel, while float64 is a native numpy type that uses NaN to represent missing values. Under the hood, Float64 uses a numpy array with dtype object to store the values, while float64 uses a numpy array with dtype float64. This means that Float64 may have some performance overhead compared to float64, but it also allows more consistent handling of missing values across different data types. Check this out: Numpy float64 vs Python float | 2 | 4 |
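A minimal side-by-side illustration of the two dtypes; the visible difference is the missing-value sentinel each one uses:

import numpy as np
import pandas as pd

s_numpy = pd.Series([1.0, np.nan], dtype="float64")   # NumPy-backed, missing value prints as NaN
s_masked = pd.Series([1.0, None], dtype="Float64")    # nullable extension dtype, missing value prints as <NA>

print(s_numpy.dtype, s_masked.dtype)                        # float64 Float64
print(s_numpy.isna().tolist(), s_masked.isna().tolist())    # [False, True] [False, True]
print(repr(s_numpy.iloc[1]), repr(s_masked.iloc[1]))        # nan <NA>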
76,814,661 | 2023-8-1 | https://stackoverflow.com/questions/76814661/how-do-i-import-a-module-within-the-same-directory-or-subdirectory | I have the following directory for a project: photo_analyzer/ β βββ main.py # Main script to run the photo analyzer βββ gui/ # Directory for GUI-related files β βββ __init__.py # Package initialization β βββ app.py # Tkinter application class and main GUI logic β βββ widgets.py # Custom Tkinter widgets (if needed) β βββ styles.py # Styling configurations for the GUI β βββ resources/ # Directory for GUI resources (e.g., icons, images) β βββ icon.png # Application icon β βββ ... β βββ photo_analysis/ # Directory for photo analysis-related files β βββ __init__.py # Package initialization β βββ photo_utils.py # Utility functions for photo manipulation β βββ light_profiles.py # Functions to analyze light profiles in photos β βββ ... I'm trying to import photo_utils and light_profiles from the photo_analysis module into gui/app.py. Whenever I try to run app.py, I get the following error message: Traceback (most recent call last): File "c:\Users\Josh\Documents\PhotoAnalyzer\gui\app.py", line 6, in <module> from photo_analysis import photo_utils ModuleNotFoundError: No module named 'photo_analysis' I have empty __init__.py files in both directories, so I don't think that's the issue. I've tried adding the photo_analysis module to the Python path using SET PYTHONPATH="C:\Users\Josh\Documents\PhotoAnalyzer\photo_analysis" and I've tried to use the sys.path technique as well. Both end with the same error message as before. photo_utils.py imports widgets.py from the "gui" module. Could I be experiencing circular dependency? I did try the work-around explained in text by importing widgets into each method in photo_utils that used it, and while this allowed a test main code in photo_utils to run properly, app.py still gives the same error message that there is no module named 'photo_analysis'. What am I missing? | It is a tricky one. Normally python imports from their sibling files or childs of sibling folders. Here if you want to import from a file that are from same parent then you have to use sys.path.append('.') Secondly it also depends on the terminal current working directory. If your current working directory is "photo_analyzer" then adding these line will work import sys sys.path.append('.') import photo_analysis.photo_utils as pa Otherwise if your current working directory is "photo_analyzer/gui", then you have to add these lines import sys sys.path.append('./../photo_analysis') import photo_utils as pa | 4 | 3 |
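An alternative to sys.path patching is to keep a single entry point at the project root and always launch from there, so photo_analyzer/ itself is on sys.path and both packages resolve with absolute imports (run_app here is a hypothetical function assumed to exist in gui/app.py):

# main.py, at the photo_analyzer/ root — run with:  python main.py
from photo_analysis import photo_utils       # resolves because the project root is on sys.path
from gui.app import run_app                  # run_app is a hypothetical entry function in gui/app.py

if __name__ == "__main__":
    run_app()

# Running app.py as a module from the project root also works:  python -m gui.app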
76,772,509 | 2023-7-26 | https://stackoverflow.com/questions/76772509/llama-2-7b-hf-repeats-context-of-question-directly-from-input-prompt-cuts-off-w | Context: I am trying to query Llama-2 7B, taken from HuggingFace (meta-llama/Llama-2-7b-hf). I give it a question and context (I would guess anywhere from 200-1000 tokens), and ask it to answer the question based on the context (context is retrieved from a vectorstore using similarity search). Here are my two problems: The answer ends, and the rest of the tokens until it reaches max_new_tokens are all newlines. Or it just doesn't generate any text and the entire response is newlines. Adding a repetition_penalty of 1.1 or greater has solved infinite newline generation, but does not get me full answers. For answers that do generate, they are copied word for word from the given context. This remains the same with repetition_penalty=1.1, and making the repetition penalty too high makes the answer nonsense. I have only tried using temperature=0.4 and temperature=0.8, but from what I have done, tuning temperature and repetition_penalty both result in either the context being copied or a nonsensical answer. Note about the "context": I am using a document stored in a Chroma vector store, and similarity search retrieves the relevant information before I pass it to Llama. Example Problem: My query is to summarize a certain Topic X. query = "Summarize Topic X" The retrieved context from the vectorstore has 3 sources that looks something like this (I format the sources in my query to the LLM separated by newlines): context = """When talking about Topic X, Scenario Y is always referred to. This is due to the relation of Topic X is a broad topic which covers many aspects of life. No one knows when Topic X became a thing, its origin is unknown even to this day.""" Then the response from Llama-2 directly mirrors one piece of context, and includes no information from the others. Furthermore, it produces many newlines after the answer. If the answer is 100 tokens, and max_new_tokens is 150, I have 50 newlines. response = "When talking about Topic X, Scenario Y is always referred to. This is due to the relation of \n\n\n\n" One of my biggest issues is that in addition to copying one piece of context, if the context ends mid-sentence, so does the LLM response. Is anyone else experiencing anything like this (newline issue or copying part of your input prompt)? Has anyone found a solution? | This is a common issue with pre-trained base models like Llama. My first thought would be to select a model that has some sort of instruction tuning done to it i.e https://huggingface.co/meta-llama/Llama-2-7b-chat. Instruction tuning impacts the model's ability to solve tasks reliably, as opposed to the base model, which is often just trained to predict the next token (which is often why the cutoff happens). The second thing, in my experience, I have seen that has helped is using the same prompt format that was used during training. You can see in the source code the prompt format used in training and generation by Meta. Here is a thread about it. Finally, for repetition, using a Logits Processor at generation-time has been helpful to reduce repetition. | 7 | 13 |
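A rough generation sketch that pulls the answer's suggestions together — the chat-tuned checkpoint, an [INST]-style prompt, and repetition controls at generation time. The prompt wording and sampling values are illustrative, not tuned, and loading the 7B model this way assumes enough GPU/CPU memory is available:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"   # instruction-tuned variant, not the base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

context = "...retrieved chunks joined with newlines..."
question = "Summarize Topic X"
prompt = (
    "[INST] <<SYS>>\nAnswer the question using only the context provided.\n<</SYS>>\n\n"
    f"Context:\n{context}\n\nQuestion: {question} [/INST]"
)   # the tokenizer adds the leading <s> BOS token itself

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.1,
    no_repeat_ngram_size=3,
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))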