Dataset schema: question_id (int64, 59.5M to 79.4M), creation_date (string, 8 to 10 chars), link (string, 60 to 163 chars), question (string, 53 to 28.9k chars), accepted_answer (string, 26 to 29.3k chars), question_vote (int64, 1 to 410), answer_vote (int64, -9 to 482).
77,269,751
2023-10-11
https://stackoverflow.com/questions/77269751/make-efficient-python-generator-with-condition
I tried to make a Python string generator with a condition, but it is not easy for me. The condition is simple: I have 3 letters, "A", "B", and "C", and each letter must be used at least 4 times. The total sentence length is 19. I need to test all combinations, so I tried the code below: for i in combinations_with_replacement('ABC', 7): for j in permutations(i+("A","B","C","A","B","C","A","B","C","A","B","C",), 19): test j I think this code covers all combinations, but it contains duplicates. How can I make it better?
To implement the rule that "each letter must be used at least 4 times" you can create a base pool of 12 characters with 'ABC' repeated 4 times, and that leaves 19 - 3 x 4 = 7 characters that need to be filled in with any of the letters in 'ABC', which can be done with itertools.combinations_with_replacement. Chain the base pool of 12 characters and the additional pool of 7 characters together to generate distinct permutations with more_itertools.distinct_permutations: from itertools import chain, combinations_with_replacement from more_itertools import distinct_permutations def string_generator(letters='ABC', min_count=4, length=19): base = letters * min_count for extra in combinations_with_replacement(letters, length - len(base)): yield from map(''.join, distinct_permutations(chain(base, extra))) Demo: https://replit.com/@blhsing1/SandyNimblePatches
2
2
77,270,062
2023-10-11
https://stackoverflow.com/questions/77270062/python-pandas-filter-rows-by-days-of-difference-in-two-columns-with-weekend-and
I have a dataframe with two dates, among other things. I need to filter out rows that have more than two working days difference between these two dates. I must take into consideration weekends and holidays. *Assuming 10/17/2023 is a holiday... Example df: NAME DATE1 DATE2 CASE1 10/12/2023 10/13/2023 <--- one day difference CASE2 10/12/2023 10/16/2023 <--- two days difference (weekend) CASE3 10/12/2023 10/18/2023 <--- three days difference (weekends and holidays) ... CASEX 10/12/2023 10/19/2023 <--- four days difference (weekends and holidays) I need to save CASE3 and CASEX (which have more than two working days of difference) in another dataframe and delete them from this one. My approach: date1 = "10/12/2023" date2 = "10/19/2023" date1 = pd.to_datetime(date1, format="%m/%d/%Y").date() date2 = pd.to_datetime(date2, format="%m/%d/%Y").date() holidays = [pd.to_datetime("10/17/2023",format="%m/%d/%Y").date()] days = np.busday_count(date1, date2, holidays=holidays) In "days" I have the correct number, but I can't figure out how to apply this to the dataframe to filter it and extract the rows.
Define a list of holidays: holidays = np.array([pd.to_datetime("10/17/2023", format="%m/%d/%Y")], dtype='datetime64[D]') Parse the strings in the date columns to datetime type: df['DATE1'] = pd.to_datetime(df['DATE1'], format="%m/%d/%Y") df['DATE2'] = pd.to_datetime(df['DATE2'], format="%m/%d/%Y") # NAME DATE1 DATE2 # 0 CASE1 2023-10-12 2023-10-13 # 1 CASE2 2023-10-12 2023-10-16 # 2 CASE3 2023-10-12 2023-10-18 # 3 CASEX 2023-10-12 2023-10-19 Cast the dates to datetime64[D] type, then use np.busday_count to get the difference: days = np.busday_count(df['DATE1'].values.astype("datetime64[D]"), df['DATE2'].values.astype("datetime64[D]"), holidays=holidays) # array([1, 2, 3, 4]) Use boolean indexing to filter the rows: valid_rows = df[days <= 2] invalid_rows = df[days > 2] # valid_rows # NAME DATE1 DATE2 # 0 CASE1 2023-10-12 2023-10-13 # 1 CASE2 2023-10-12 2023-10-16 # invalid_rows # NAME DATE1 DATE2 # 2 CASE3 2023-10-12 2023-10-18 # 3 CASEX 2023-10-12 2023-10-19
4
2
77,237,818
2023-10-5
https://stackoverflow.com/questions/77237818/how-to-load-a-huggingface-pretrained-transformer-model-directly-to-gpu
I want to load a huggingface pretrained transformer model directly to GPU (not enough CPU space), e.g. loading BERT: from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("bert-base-uncased") would be loaded to CPU until executing model.to('cuda'); only then is the model loaded into the GPU. I want to load the model directly into the GPU when executing from_pretrained. Is this possible?
I'm answering my own question. Hugging Face accelerate (add via pip install accelerate) could be helpful in moving the model to GPU before it's fully loaded in CPU. It's useful when: GPU memory > model size > CPU memory Also specify device_map="cuda": from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("bert-base-uncased", device_map="cuda")
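If you want to confirm where the weights actually ended up after loading, a quick sanity check on the model object above (this assumes a CUDA device is available):

# model is the AutoModelForCausalLM instance loaded with device_map="cuda" above
print(next(model.parameters()).device)  # expected: cuda:0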
17
28
77,260,102
2023-10-9
https://stackoverflow.com/questions/77260102/python-typing-cast-method-vs-colon-type-hints
What is the difference between using the Python typing.cast method x = cast(str, x) compared with using type hints/colon notation/left-hand side type annotation? x: str I've seen the use of the cast method in codebases but don't have an apparent reason to use it instead of type hint notation, which is more concise.
I will answer you through a well-written example: def get_name(can_return_name: bool) -> str | None: in_name = input('Enter your name: ') return in_name if can_return_name else None def do_something_with_name() -> None: name: str = get_name(True) print(name.capitalize()) Here, you'd expect Pyright or mypy to say that name's type is str, even if the return type of get_name is str | None, but this is not the case: if the function you are calling (in this case get_name) has a return type, the colon-notation type will be ignored completely, and Pyright will tell you that you can't call .capitalize on name, as it could be None. In this case though, you are completely sure that name will be str and not None, so you can use cast, like so: def do_something_with_name() -> None: name: str = cast(str, get_name(True)) print(name.capitalize()) get_name's value will be unchanged, but static type checkers will treat it as the first argument's type. In fact, as Python's documentation states, the cast function does nothing at all at runtime. It could be defined with the new Python 3.12 generics syntax as: def cast[T](var: object) -> T: return var cast[str](get_name(True))
4
7
77,241,390
2023-10-6
https://stackoverflow.com/questions/77241390/querying-html-content-in-common-crawl-dataset-using-amazon-athena
I am currently exploring the massive Common Crawl dataset hosted on Amazon S3 and am attempting to use Amazon Athena to query this dataset. My objective is to search within the HTML content of the web pages to identify those that contain specific strings within their tags. Essentially, I am looking to filter out websites whose HTML content matches particular criteria. I am aware that Athena is capable of querying large datasets on S3 using standard SQL. However, I am not entirely sure about the feasibility and the approach to directly query inside the HTML content of the web pages in the Common Crawl dataset. Here's a simplified version of what I am looking to achieve: sql SELECT * FROM "common_crawl_dataset" WHERE html_content LIKE '%specific-string%'; Is it possible to directly query the HTML content of the web pages in the Common Crawl dataset using Athena? If yes, what would be the best approach to accomplish this, considering efficiency and cost-effectiveness? Are there any limitations or challenges that I should be aware of?
This is not easily possible, because the HTML content is not in the schema of the index that you are querying. Please see the Common Crawl Columnar Index blog post for further details. The most common use of this index is to select a small subset of the crawl (for example, "all webpages with a Swiss domain name (*.ch) classified as being in the Romansh (roh) language"). Accessing the HTML for these selected web captures is a second step. There is a large list of example queries against the columnar index (in many programming languages) in the cc-index-table GitHub repo.
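For concreteness, here is a rough sketch of that two-step approach in Python with boto3, querying the columnar index (not the raw HTML) through Athena. The database/table name ccindex, the results bucket, and the crawl label are assumptions; the column names follow the cc-index-table schema:

import boto3

athena = boto3.client("athena", region_name="us-east-1")

query = """
SELECT url, warc_filename, warc_record_offset, warc_record_length
FROM ccindex
WHERE crawl = 'CC-MAIN-2023-40'
  AND subset = 'warc'
  AND url_host_tld = 'ch'
  AND content_languages LIKE '%roh%'
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "ccindex"},                      # assumed database name
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # your own results bucket
)
print(response["QueryExecutionId"])

# The second step (fetching the HTML) means reading the WARC byte ranges given by
# warc_filename / warc_record_offset / warc_record_length from the commoncrawl
# S3 bucket; Athena itself never sees the page content.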
3
1
77,254,777
2023-10-8
https://stackoverflow.com/questions/77254777/alternative-to-concat-of-empty-dataframe-now-that-it-is-being-deprecated
I have two dataframes that can both be empty, and I want to concat them. Before I could just do : output_df= pd.concat([df1, df2]) But now I run into FutureWarning: The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation. An easy fix would be: if not df1.empty and not df2.empty: result_df = pd.concat([df1, df2], axis=0) elif not df1.empty: result_df = df1.copy() elif not df2.empty: result_df = df2.copy() else: result_df = pd.DataFrame() But that seems pretty ugly. Does anyone have a better solution ? FYI: this appeared after pandas released v2.1.0
To be precise, concat is not deprecated (and won't be, IMHO), but I can trigger this FutureWarning in 2.1.1 with the following example, with df2 being an empty DataFrame with different dtypes than df1: df1 = pd.DataFrame({"A": [.1, .2, .3]}) df2 = pd.DataFrame(columns=["A"], dtype="object") out = pd.concat([df1, df2]) print(out) A 0 0.1 1 0.2 2 0.3 As a solution in your case, you can try something like you did: out = (df1.copy() if df2.empty else df2.copy() if df1.empty else pd.concat([df1, df2]) # if both DataFrames are non-empty ) Or maybe even this one: out = pd.concat([df1.astype(df2.dtypes), df2.astype(df1.dtypes)])
43
26
77,227,895
2023-10-4
https://stackoverflow.com/questions/77227895/polars-sql-context-filter-by-date-or-datetime
I'm trying to write a query against parquet using Polars SQL Context. It's working great if I pre-filter my arrow table by date. I cannot figure out how to use date in the SQL query. works: filters = define_filters(event.get("filters", None)) table = pq.read_table( f"s3://my_s3_path{partition_path}", partitioning="hive", filters=filters, ) df = pl.from_arrow(table) ctx = pl.SQLContext(stuff=df) sql = "SELECT things FROM stuff" new_df = ctx.execute(sql,eager=True) doesn't work (filters==None in this case): filters = define_filters(event.get("filters", None)) table = pq.read_table( f"s3://my_s3_path{partition_path}", partitioning="hive", filters=filters, ) df = pl.from_arrow(table) ctx = pl.SQLContext(stuff=df) sql = """ SELECT things FROM stuff where START_DATE_KEY >= '2023-06-01' and START_DATE_KEY < '2023-06-17' """ new_df = ctx.execute(sql,eager=True) I get the following error: Traceback (most recent call last): File "/Users/xaras/projects/arrow-lambda/loose.py", line 320, in <module> test_runner(target=args.t, limit=args.limit, is_debug=args.debug) File "/Users/xaras/projects/arrow-lambda/loose.py", line 294, in test_runner rows, metadata = test_handler(target, limit, display=True, is_debug=is_debug) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/xaras/projects/arrow-lambda/loose.py", line 110, in test_handler response = json.loads(handler(event, None)) ^^^^^^^^^^^^^^^^^^^^ File "/Users/xaras/projects/arrow-lambda/serverless/app.py", line 42, in handler new_df = ctx.execute( ^^^^^^^^^^^^ File "/Users/xaras/.pyenv/versions/pyarrow/lib/python3.11/site-packages/polars/sql/context.py", line 275, in execute return res.collect() if (eager or self._eager_execution) else res ^^^^^^^^^^^^^ File "/Users/xaras/.pyenv/versions/pyarrow/lib/python3.11/site-packages/polars/utils/deprecation.py", line 95, in wrapper return function(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/xaras/.pyenv/versions/pyarrow/lib/python3.11/site-packages/polars/lazyframe/frame.py", line 1713, in collect return wrap_df(ldf.collect()) ^^^^^^^^^^^^^ exceptions.ComputeError: cannot compare 'date/datetime/time' to a string value (create native python { 'date', 'datetime', 'time' } or compare to a temporal column) UPDATE for addl info: I started using cast('2023-06-01' as date). This runs, but does not return any records. Here is a replicatable example: import polars as pl df = pl.DataFrame( { "a": ["2023-06-01", "2023-06-01", "2023-06-02"], "b": [None, None, None], "c": [4, 5, 6], "d": [None, None, None], } ) df = df.with_columns(pl.col("a").str.strptime(pl.Date, "%Y-%m-%d", strict=False)) print(df) ctx = pl.SQLContext(stuff=df) new_df = ctx.execute( "select * from stuff where a = cast('2023-06-01' as date)", eager=True, ) print(new_df) ## Returns... 
shape: (3, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ c ┆ d β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ date ┆ f32 ┆ i64 ┆ f32 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════β•ͺ═════β•ͺ══════║ β”‚ 2023-06-01 ┆ null ┆ 4 ┆ null β”‚ β”‚ 2023-06-01 ┆ null ┆ 5 ┆ null β”‚ β”‚ 2023-06-02 ┆ null ┆ 6 ┆ null β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ shape: (0, 4) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ c ┆ d β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ date ┆ f32 ┆ i64 ┆ f32 β”‚ β•žβ•β•β•β•β•β•β•ͺ═════β•ͺ═════β•ͺ═════║ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
Update: Implicit string β†’ temporal conversion in SQL comparisons was added in Polars 0.20.26 https://github.com/pola-rs/polars/pull/15958 The original query now runs as expected. import polars as pl df = pl.DataFrame({ "a": ["2023-06-01", "2023-06-01", "2023-06-02"], "b": [None, None, None], "c": [4, 5, 6], "d": [None, None, None], }) df = df.with_columns(pl.col("a").str.to_date()) with pl.SQLContext(stuff=df) as ctx: ctx.execute( "select * from stuff where a = '2023-06-01'", eager=True ) shape: (2, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ c ┆ d β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ date ┆ f32 ┆ i64 ┆ f32 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════β•ͺ═════β•ͺ══════║ β”‚ 2023-06-01 ┆ null ┆ 4 ┆ null β”‚ β”‚ 2023-06-01 ┆ null ┆ 5 ┆ null β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜
4
3
77,259,434
2023-10-9
https://stackoverflow.com/questions/77259434/python-testing-assertions-unittest-playwright
So I'm new to testing and have been learning Python/Playwright testing for a couple of weeks now. I've got to assertions and have found multiple ways to write them, so this might sound like a stupid question, but which of these 3 types of assertions would be best to use, or is there one that I don't know about that is better? Playwright Assertions: https://playwright.dev/python/docs/test-assertions Unittest Assertions: https://docs.python.org/3/library/unittest.html Python Assertions: https://realpython.com/python-assert-statement/ Here is a little example: class ExperimentTest(StaticLiveServerTestCase): def test_experimental(self): page.goto(f"{self.live_server_url}") expect(page).to_have_url(f"{self.live_server_url}") assert page.url == f"{self.live_server_url}" self.assertEqual(page.url, f"{self.live_server_url}")
When using Playwright, almost always use the Playwright assertion, which waits for the predicate to be true. Consider the following simple example: from playwright.sync_api import expect, sync_playwright html = r"""<!DOCTYPE html><html><body> <h1>NO!</h1> <script> setTimeout(() => { document.querySelector("h1").textContent = "YES!" }, 3000); </script> </body></html> """ def main(): with sync_playwright() as p: browser = p.chromium.launch() page = browser.new_page() page.set_content(html) expect(page.locator("h1")).to_have_text("YES!") browser.close() if __name__ == "__main__": main() This passes, even though the site takes 3 seconds after load to change its header text from NO! to YES!. If you use any other assertion library, the auto-wait will not occur. Now change the above code to fail: expect(page.locator("h1")).to_have_text("NEVER!"). The output will be clear: AssertionError: Locator expected to have text 'NEVER!' Actual value: YES! Call log: LocatorAssertions.to_have_text with timeout 5000ms waiting for locator("h1") locator resolved to <h1>NO!</h1> unexpected value "NO!" locator resolved to <h1>NO!</h1> unexpected value "NO!" locator resolved to <h1>NO!</h1> unexpected value "NO!" locator resolved to <h1>NO!</h1> unexpected value "NO!" locator resolved to <h1>NO!</h1> unexpected value "NO!" locator resolved to <h1>NO!</h1> unexpected value "NO!" locator resolved to <h1>NO!</h1> unexpected value "NO!" locator resolved to <h1>YES!</h1> unexpected value "YES!" locator resolved to <h1>YES!</h1> unexpected value "YES!" Notice it's retried and tracked changes over time, eventually emitting a clear error which makes debugging easier. As an aside, there's no need to use f-strings if you're not concatenating anything: f"{self.live_server_url}" should be self.live_server_url.
2
6
77,238,856
2023-10-5
https://stackoverflow.com/questions/77238856/problems-installing-libraries-via-pip-after-installing-python-3-12
Today I installed the new Python 3.12 on my Ubuntu 22.04 from the ppa repository ppa:deadsnakes/ppa. Everything works, but when I try to install some library with the command python3.12 -m pip install somelibrary, I get the following error ERROR: Exception: Traceback (most recent call last): File "/usr/lib/python3/dist-packages/pip/_internal/cli/base_command.py", line 165, in exc_logging_wrapper status = run_func(*args) ^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/pip/_internal/cli/req_command.py", line 205, in wrapper return func(self, options, args) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/pip/_internal/commands/install.py", line 285, in run session = self.get_default_session(options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/pip/_internal/cli/req_command.py", line 75, in get_default_session self._session = self.enter_context(self._build_session(options)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/pip/_internal/cli/req_command.py", line 89, in _build_session session = PipSession( ^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/pip/_internal/network/session.py", line 282, in __init__ self.headers["User-Agent"] = user_agent() ^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/pip/_internal/network/session.py", line 157, in user_agent setuptools_dist = get_default_environment().get_distribution("setuptools") ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/pip/_internal/metadata/__init__.py", line 24, in get_default_environment from .pkg_resources import Environment File "/usr/lib/python3/dist-packages/pip/_internal/metadata/pkg_resources.py", line 9, in <module> from pip._vendor import pkg_resources File "/usr/lib/python3/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 2164, in <module> register_finder(pkgutil.ImpImporter, find_on_path) ^^^^^^^^^^^^^^^^^^^ AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'? Any suggestions why this is happening? EDIT: This problem doesn't exist when I use venv, it seems to me that the problem is that pip uses /usr/lib/python3 instead of /usr/lib/python3.12
python3.12 -m ensurepip --upgrade fixed my problem!
15
17
77,227,048
2023-10-4
https://stackoverflow.com/questions/77227048/django-testing-views-getting-error-discoverrunner-run-tests-takes-2-positi
I use Django framework to create basic web application. I started to write tests for my views. I followed the django documentation and fixed some issues along the way. But now I am stuck - I don't know why I get this error even after 30 minutes of searching for answer. C:\Osobni\realityAuctionClient\venv\Scripts\python.exe "C:/Program Files/JetBrains/PyCharm 2022.3.3/plugins/python/helpers/pycharm/django_test_manage.py" test realityAuctionClientWeb.test_views.TestViews.test_start_view C:\Osobni\realityAuctionClient Testing started at 7:41 ... Traceback (most recent call last): File "C:\Program Files\JetBrains\PyCharm 2022.3.3\plugins\python\helpers\pycharm\django_test_manage.py", line 168, in <module> utility.execute() File "C:\Program Files\JetBrains\PyCharm 2022.3.3\plugins\python\helpers\pycharm\django_test_manage.py", line 142, in execute _create_command().run_from_argv(self.argv) File "C:\Osobni\realityAuctionClient\venv\Lib\site-packages\django\core\management\commands\test.py", line 24, in run_from_argv super().run_from_argv(argv) File "C:\Osobni\realityAuctionClient\venv\Lib\site-packages\django\core\management\base.py", line 412, in run_from_argv self.execute(*args, **cmd_options) File "C:\Osobni\realityAuctionClient\venv\Lib\site-packages\django\core\management\base.py", line 458, in execute output = self.handle(*args, **options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Program Files\JetBrains\PyCharm 2022.3.3\plugins\python\helpers\pycharm\django_test_manage.py", line 104, in handle failures = TestRunner(test_labels, **options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Program Files\JetBrains\PyCharm 2022.3.3\plugins\python\helpers\pycharm\django_test_runner.py", line 254, in run_tests return DjangoTeamcityTestRunner(**options).run_tests(test_labels, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Program Files\JetBrains\PyCharm 2022.3.3\plugins\python\helpers\pycharm\django_test_runner.py", line 156, in run_tests return super(DjangoTeamcityTestRunner, self).run_tests(test_labels, extra_tests, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: DiscoverRunner.run_tests() takes 2 positional arguments but 3 were given Process finished with exit code 1 Empty suite My tests.py looks like this: import django from django.test import TestCase from django.urls import reverse class TestViews(TestCase): def test_start_view(self): """ Test that the start view returns a 200 response and uses the correct template """ django.setup() url = reverse("start") resp = self.client.get(url) self.assertEqual(resp.status_code, 200) I use python 3.11 and django 5.0a1. Anybody who has a clue? I think it is somehow connected with settings but I do not know how.
Django removed the parameter extra_tests in Django 5.0 and PyCharm's test runner is still providing it. See django's release notes: The extra_tests argument for DiscoverRunner.build_suite() and DiscoverRunner.run_tests() is removed. You could downgrade to django 4.2 until this is fixed by Jetbrains. A Pycharm issue regarding this deprecation exists, see PY-53355.
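If you want to confirm which signature your installed Django actually has, a small check (the commented signatures are approximate renderings of the two versions):

import inspect
from django.test.runner import DiscoverRunner

print(inspect.signature(DiscoverRunner.run_tests))
# Django <= 4.2: (self, test_labels, extra_tests=None, **kwargs)
# Django >= 5.0: (self, test_labels, **kwargs)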
5
11
77,267,346
2023-10-10
https://stackoverflow.com/questions/77267346/error-while-installing-python-package-llama-cpp-python
I am using Llama to create an application. Previously I used openai but am looking for a free alternative. Based on my limited research, this library provides openai-like api access making it quite easy to add into my prexisting code. However this library has errors while downloading. I tried installing cmake which did not help. Building wheels for collected packages: llama-cpp-python Building wheel for llama-cpp-python (pyproject.toml) ... error error: subprocess-exited-with-error Γ— Building wheel for llama-cpp-python (pyproject.toml) did not run successfully. β”‚ exit code: 1 ╰─> [20 lines of output] *** scikit-build-core 0.5.1 using CMake 3.27.7 (wheel) *** Configuring CMake... 2023-10-10 21:23:02,749 - scikit_build_core - WARNING - Can't find a Python library, got libdir=None, ldlibrary=None, multiarch=None, masd=None loading initial cache file C:\Users\ARUSHM~1\AppData\Local\Temp\tmpf1bzj6ul\build\CMakeInit.txt -- Building for: NMake Makefiles CMake Error at CMakeLists.txt:3 (project): Running 'nmake' '-?' failed with: The system cannot find the file specified CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage -- Configuring incomplete, errors occurred! *** CMake configuration failed [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for llama-cpp-python Failed to build llama-cpp-python ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects Although not directly related to this question, these are other questions I am unable to get answers for: Does this library use Llama or Llama 2? Will this be secure on a Python Flask Application?
You need to install the Desktop development with C++ workload in Visual Studio so that CMake has a compiler to use. Open the Visual Studio Installer and click Modify, then check Desktop development with C++ and click Modify to start the install. I also recommend the Windows 10 SDK. https://learn.microsoft.com/en-us/cpp/build/cmake-projects-in-visual-studio?view=msvc-170 https://developer.microsoft.com/en-us/windows/downloads/windows-sdk/ After that, !pip install llama-cpp-python should build just fine.
15
15
77,266,318
2023-10-10
https://stackoverflow.com/questions/77266318/defining-an-annotatedstr-with-constraints-in-pydantic-v2
Suppose I want a validator for checksums that I can reuse throughout an application. The Python type-hint system has changed a lot in 3.9+ and that is adding to my confusion. In pydantic v1, I subclassed str and implement __get_pydantic_core_schema__ and __get_validators__ class methods. v2 has changed some of this and the preferred way has changed to using annotated types. In pydantic-extra-types package I have found examples that require insight into the inner workings of pydantic. I can copy and get something working, but I would prefer to find the "correct" user's way to do it rather than copying without understanding. In pydantic v2 it looks like I can make a constrained string as part of a pydantic class, e.g. from typing import Annotated from pydantic import BaseModel, StringConstraints class GeneralThing(BaseModel): special_string = Annotated[str, StringConstraints(pattern="^[a-fA-F0-9]{64}$")] but this is not valid (pydantic.errors.PydanticUserError: A non-annotated attribute was detected). Additionally I would have to annotate every field I want to constrain, as opposed to special_string = ChecksumStr that I was able to do in the past.
For pydantic you need to annotate your fields, but you're assigning them. The following code should do the trick for you: from typing import Annotated from pydantic import BaseModel, StringConstraints ChecksumString = Annotated[str, StringConstraints(pattern="^[a-fA-F0-9]{64}$")] class GeneralThing(BaseModel): special_string: ChecksumString Note that it is special_string: ChecksumString and not special_string = ChecksumString. special_string: Annotated[str, StringConstraints(...)] would be valid as well, for what it's worth.
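A quick way to exercise the constraint, building on the GeneralThing model above (the example values are arbitrary):

import hashlib

# A sha256 hex digest is exactly 64 hex characters, so it matches the pattern
GeneralThing(special_string=hashlib.sha256(b"example").hexdigest())  # ok

# Anything that doesn't match raises pydantic.ValidationError
GeneralThing(special_string="not-a-checksum")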
8
8
77,233,855
2023-10-5
https://stackoverflow.com/questions/77233855/why-did-i-get-an-error-modulenotfounderror-no-module-named-distutils
I've installed scikit-fuzzy but when I import skfuzzy as fuzz I get an error ModuleNotFoundError: No module named 'distutils'" I already tried to pip uninstall distutils and got this output Note: you may need to restart the kernel to use updated packages. WARNING: Skipping distutils as it is not installed. Then I tried to install it again pip install distutils Note: you may need to restart the kernel to use updated packages. ERROR: Could not find a version that satisfies the requirement distutils (from versions: none) ERROR: No matching distribution found for distutils Where did I go wrong? This question addresses the problem from the perspective of installing a library. For developing a library, see How can one fully replace distutils, which is deprecated in 3.10?.
Python 3.12 does not come with a stdlib distutils module (changelog), because distutils was deprecated in 3.10 and removed in 3.12. See PEP 632 – Deprecate distutils module. You can still use distutils on Python 3.12+ by installing setuptools. When that doesn't work, you may need to stay on Python < 3.12 until the 3rd-party package (skfuzzy in this case) publishes an updated release with Python 3.12 support.
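A quick sanity check of the setuptools route, assuming you have run pip install setuptools in the Python 3.12 environment (recent setuptools versions inject their vendored copy in place of the removed stdlib module):

import distutils

# With the setuptools shim active, this typically points inside setuptools/_distutils
print(distutils.__file__)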
52
83
77,260,536
2023-10-9
https://stackoverflow.com/questions/77260536/run-adb-shell-and-then-interact-with-console-app-in-python
I have a situation where I need to run the adb shell command which takes me to the root access of my device. Then I run the interactive console app against my device using this command: /oem/console --no-logging. The command returns this output: Console Process: connecting to application... Console Process: connected to application... After this, I need to PRESS ENTER so that the console accepts events (The arrow cursor is ready to accept invoke events). This console app takes certain events that I can simulate my device's behavior. For example: invoke put.device.into.charge 1 invoke call.cell.hangup 1 invoke xyz So my question is, how do I one-by-one enter these commands into this console app and wait a couple of seconds after each command? This is the code I have so far: def run_adb_shell_command(command, return_process=False): """Run an ADB shell command.""" cmd = command.split() process = Popen(cmd, stdin=subprocess.PIPE) if return_process: return process def send_command_to_console_app(command, process): """SEND Commands to Console App.""" new_line = '\n'.encode() # encoded_command = str(command).encode() s0 = command s1 = 'event 1' s2 = 'event 2' s3 = 'event 3' concat_query = "{}\n".format(s0) process.communicate(input=concat_query.encode('utf-8'))[0] process.kill() The problem is that when communicate() function is called for one of the events it just hangs up and never finishes so that I can run the next event. What is the best approach for this?
So, after a couple of days of research, I found a solution from this post https://stackoverflow.com/a/56051270/4911426. I had to use an asynchronous subprocess for chaining different commands together.
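For reference, a minimal sketch of that asyncio-based approach, assuming the console reads one command per line from stdin (the /oem/console path and the invoke events are taken from the question):

import asyncio

async def drive_console(commands, delay=2.0):
    proc = await asyncio.create_subprocess_exec(
        "adb", "shell", "/oem/console", "--no-logging",
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
    )
    proc.stdin.write(b"\n")              # the initial ENTER the console waits for
    await proc.stdin.drain()
    for cmd in commands:
        proc.stdin.write(cmd.encode() + b"\n")
        await proc.stdin.drain()
        await asyncio.sleep(delay)       # wait a couple of seconds after each command
    proc.stdin.close()
    await proc.wait()

asyncio.run(drive_console([
    "invoke put.device.into.charge 1",
    "invoke call.cell.hangup 1",
]))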
3
5
77,267,596
2023-10-10
https://stackoverflow.com/questions/77267596/mypy-type-stubs-for-aws-lambda-function
Are there maintained mypy types published for AWS Lambda functions? I'm talking here about defining a function to handle an HTTP request (from a lambda URL or API Gateway). I'm not talking about other AWS Lambda-related APIs such as are exposed for example by boto3. Here's a example of the kind of function I would like to have types for, taken from AWS's docs: def lambda_handler(event, context): message = 'Hello {} {}!'.format(event['first_name'], event['last_name']) return { 'message' : message } I'd like to type functions like that one as much as possible - for example, the context parameter (please ignore the fact that this trivial example does not happen to use parameter context). I'm aware that AWS schema registry provides "bindings" for Event Bridge events, but note this question is about invoking lambdas in response to HTTP requests, not Event Bridge events. I'm also aware that for example, if I used FastAPI together with a library like Mangum, I'd get types. I'm restricting this question just to use of the "native" AWS Lambda API (see the example code above). I'm also aware that I could define my own types, but here I'm asking about types maintained by AWS or a third party.
Check out Powertools for AWS Lambda. It includes typings for LambdaContext as well as dataclasses for event sources such as ALBEvent. from typing import Any from aws_lambda_powertools.utilities.data_classes import ALBEvent, event_source from aws_lambda_powertools.utilities.typing import LambdaContext @event_source(data_class=ALBEvent) def lambda_handler(lambda_event: ALBEvent, context: LambdaContext) -> dict[str, Any]: ...
3
3
77,262,501
2023-10-10
https://stackoverflow.com/questions/77262501/how-to-alter-cipher-suite-used-with-python-requests
This is part of a larger program I'm working on, but I've got it pinpointed down to the exact problem. When I use Python requests module in some environments it works but not others and it seems to be related to the cipher suite being used by SSL. When I run the following command in Python, it gives an error or success depending on the OS I'm using. here is the command: requests.get("https://api-mte.itespp.org/markets/VirtualService/v2/?WSDL") On Windows 10 and Ubuntu 18.04, it works. But on Windows 11 and Ubuntu 22.04, it does not work and gives the following error: raise SSLError(e, request=request) requests.exceptions.SSLError: HTTPSConnectionPool(host='api-mte.itespp.org', port=443): Max retries exceeded with url: /markets/VirtualService/v2/?WSDL (Caused by SSLError(SSLError(1, '[SSL: DH_KEY_TOO_SMALL] dh key too small (_ssl.c:997)'))) The part [SSL: DH_KEY_TOO_SMALL] dh key too small seems to be the relevant part indicating the requests module thinks the server is using an outdated cipher? When I view the site in Firefox it loads fine. Am I seeing this correctly or am I off base? This seems to be a relevant question (Python - requests.exceptions.SSLError - dh key too small) but I could not get it to work when I changed requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS. I know Ubuntu 22 uses OpenSSL v3 while Ubuntu 18 uses OpenSSL v1.1.1 and that seems relevant. Thanks! EDIT: I believe specifically I'm trying to use the cipher ECDHE-RSA-AES128-GCM-SHA256 or ECDHE-RSA-AES256-GCM-SHA384
So indeed the problem is the server is using an outdated cipher. And the issue is that Windows 11 and Ubuntu 22 will not use that outdated cipher by default. The issue also is that in the version of requests that I was using (2.31.0), you can no longer change the cipher suite with requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS as many answers suggest. The answer was found here, a blog post written by I believe one of the authors of the module. It involves making a new adapter from the requests module so the solution isn't straightforward. This was the code I used to make it work. Obviously you can change custom_ciphers to whatever is needed. import requests from requests.adapters import HTTPAdapter from requests.packages.urllib3.util.ssl_ import create_urllib3_context import urllib3 custom_ciphers = [ "ECDHE-RSA-AES256-GCM-SHA384", # "DHE-RSA-AES256-GCM-SHA384", # "ECDHE-RSA-AES128-GCM-SHA256", # "TLS_AES_256_GCM_SHA384", ] class CustomCipherAdapter(HTTPAdapter): def init_poolmanager(self, *args, **kwargs): context = create_urllib3_context(ciphers=":".join(custom_ciphers)) kwargs['ssl_context'] = context return super(CustomCipherAdapter, self).init_poolmanager(*args, **kwargs) # Create a session and mount the adapter session = requests.Session() session.mount("https://", CustomCipherAdapter()) # Now you can use the session to make requests with the custom cipher suites response = session.get("https://api-mte.itespp.org/markets/VirtualService/v2/?WSDL") print(response) Info about HTTPTransport can be found here.
3
6
77,269,234
2023-10-10
https://stackoverflow.com/questions/77269234/turtle-nameerror-after-defining-a-variable-in-a-separate-function
I want to create a turtle game. When I click the turtle shape in the game, it should give score +1 point so that I can count the points. But when I execute the command and click on the turtle shape it is giving me a name error which is NameError: name 'score_turtle' is not defined even though it is defined in the upper command. Please help me to fix it. import turtle import random screen = turtle.Screen() screen.bgcolor("light blue") FONT = ('Arial', 25, 'normal') skor = 0 turtle_list = [] #score turtle def score_turtle_setup(): score_turtle = turtle.Turtle() score_turtle.hideturtle() score_turtle.penup() score_turtle.color("dark blue") top_screen = screen.window_height() / 2 y = top_screen * 0.85 score_turtle.setpos(-200,y) score_turtle.write(arg=f"SCORE: 0", move=False, align="center", font=FONT) def make_turtle(x, y): point = turtle.Turtle() def handle_click(x, y): global skor skor += 1 score_turtle.write(arg=f"SCORE: {skor}", move=False, align="center", font=FONT) point.onclick(handle_click) point.shape("turtle") point.penup() point.color("green") point.shapesize(1.5) point.goto(x * 12, y*12) turtle_list.append(point) x_coordinats = [-20,-10,0,10,20] y_coordinats = [20,10,0,-10] def setup_turtles(): for x in x_coordinats: for y in y_coordinats: make_turtle(x, y) def hide_turtles(): for point in turtle_list: point.hideturtle() def show_turtles(): random.choice(turtle_list).showturtle() turtle.tracer(0) score_turtle_setup() setup_turtles() hide_turtles() show_turtles() turtle.tracer(1) turtle.mainloop() I tried to call the function score_turtle_setup from the handle_click function but that did not work.
Variables are scoped to the function by default. That means that when the function returns, all variables in the function scope are destroyed. Luckily, functions let you return values, so you can do something like: def score_turtle_setup(): score_turtle = turtle.Turtle() # ... return score_turtle # ... score_turtle = score_turtle_setup() Now the score_turtle variable is assigned in the global scope, where it can be accessed from any of your other functions. This may not be great design, but for a small program, it's fine. Down the line, if the program grows, consider using a class and/or module for encapsulating logical entities like this. In general, the situation isn't specific to turtle, so How do I get a result (output) from a function? How can I use the result later? is a good canonical resource.
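If the program grows, the class-based version mentioned above could look roughly like this (the class and method names are hypothetical, not from the question):

import turtle

FONT = ('Arial', 25, 'normal')

class ScoreBoard:
    def __init__(self, screen):
        self.score = 0
        self.writer = turtle.Turtle()
        self.writer.hideturtle()
        self.writer.penup()
        self.writer.color("dark blue")
        self.writer.setpos(-200, screen.window_height() / 2 * 0.85)
        self.render()

    def render(self):
        # Clear the old text before writing the new score
        self.writer.clear()
        self.writer.write(f"SCORE: {self.score}", align="center", font=FONT)

    def increment(self):
        self.score += 1
        self.render()

# usage: board = ScoreBoard(turtle.Screen()), then call board.increment() inside handle_click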
3
2
77,269,011
2023-10-10
https://stackoverflow.com/questions/77269011/how-to-get-the-callable-function-defined-by-exec-in-python
Say I want to have a function exec_myfunc that can execute any user-defined function with an input value of 10. The user is supposed to define the function by a string as shown below: func1_str = """ def myfunc1(x): return x """ func2_str = """ def myfunc2(x): return x**2 """ Right now I am using a very hacky way, by extracting the function name between def and ( using regular expression, as shown below: def exec_myfunc(func_str: str): import re exec(func_str) myfunc_str = re.search(r'def(.*)\(', func_str).group(1).strip() return eval(myfunc_str)(10) print(exec_myfunc(func1_str)) # 10 print(exec_myfunc(func2_str)) # 100 I would like to know what is the general and correct way of doing this?
You could directly use exec: def exec_myfunc(func_str: str): fun = {} # to hold your function exec(func_str,fun) fun_name = list(fun.keys())[1] fn = fun[fun_name] print(f"The function {fun_name} evaluated at 10 = {fn(10)}") exec_myfunc(func1_str) The function myfunc1 evaluated at 10 = 10
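The [1] index above works because exec() also inserts a __builtins__ entry into the dict, which is why the user's function does not end up at index 0. A slightly more defensive spin on the same idea, assuming the string defines exactly one function:

def exec_myfunc(func_str: str, arg=10):
    namespace = {}
    exec(func_str, namespace)
    # Pick out whatever callable the user-defined code created, ignoring __builtins__
    funcs = [v for k, v in namespace.items()
             if callable(v) and not k.startswith("__")]
    return funcs[0](arg)

print(exec_myfunc(func1_str))  # 10
print(exec_myfunc(func2_str))  # 100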
2
1
77,268,771
2023-10-10
https://stackoverflow.com/questions/77268771/how-to-make-a-cluster-of-items-based-on-membership
I need like a magnet effect to form a cluster within a list of unique items. My input is this : lst1 = ['ah', 'ab', 'c', None, 'xo', 'i', 'b', 'lji', 'z', 'bel', 'oyb'] So based on a given item (a string that could be of any length), we need to move from left and right (towards this item) every string that contains it. For simplicity, lets consider the chosen item as a single letter : letter = 'b'. Now, we need to look for any string that contain b and bring it closer to it (from both sides) and preserve its suborder, like magnet. The other items will also be shifted (-+). In this case, the output should be : lst2 = ['ah', 'c', None, 'xo', 'i', 'ab', 'b', 'bel', 'oyb', 'lji', 'z'] The cluster is represented here by the slice : ['ab', 'b', 'bel', 'oyb']. I tried the code below but I got a wrong list even though I was able to make the cluster : letter = 'b' data = {} for i, e in enumerate(lst1): if e is not None and letter in e and e!=letter: data[e] = i left = lst1.index(letter)-1 right = lst1.index(letter)+1 for key,value in data.items(): if value<lst1.index(letter): lst1[left] = key left -= 1 elif value>lst1.index(letter): lst1[right] = key right += 1 Here is what I got : print(lst1) ['ah', 'ab', 'c', None, 'xo', 'ab', 'b', 'bel', 'oyb', 'bel', 'oyb'] Do you have any suggestions ?
One possible solution is to use list slicing + sorting: letter = "b" lst1 = ["ah", "ab", "c", None, "xo", "i", "b", "lji", "z", "bel", "oyb"] index = lst1.index(letter) out = ( sorted(lst1[:index], key=lambda v: letter in v if isinstance(v, str) else False) + [letter] + sorted(lst1[index + 1 :], key=lambda v: letter not in v if isinstance(v, str) else True) ) print(out) Prints: ['ah', 'c', None, 'xo', 'i', 'ab', 'b', 'bel', 'oyb', 'lji', 'z']
2
1
77,268,673
2023-10-10
https://stackoverflow.com/questions/77268673/cannot-install-matplotlib-in-python-3-12
When i try to install matplotlib 3.8.0 in my conda environment with miniconda package manager i get the following error : Solving environment: / warning libmamba Added empty dependency for problem type SOLVER_RULE_UPDATE failed LibMambaUnsatisfiableError: Encountered problems while solving: - cannot install both pin-1-1 and pin-1-1 - nothing provides numpy 1.10* needed by matplotlib-1.5.3-np110py34_1 Could not solve for environment specs The following packages are incompatible β”œβ”€ matplotlib is installable with the potential options β”‚ β”œβ”€ matplotlib 1.5.3 would require β”‚ β”‚ └─ numpy 1.10* , which does not exist (perhaps a missing channel); β”‚ β”œβ”€ matplotlib 1.5.3 would require β”‚ β”‚ └─ python 3.4* but there are no viable options β”‚ β”‚ β”œβ”€ python 3.4.5 would require β”‚ β”‚ β”‚ └─ vc 10.* , which does not exist (perhaps a missing channel); β”‚ β”‚ └─ python 3.4.5 would require β”‚ β”‚ └─ vs2010_runtime, which does not exist (perhaps a missing channel); β”‚ β”œβ”€ matplotlib 1.5.3 would require β”‚ β”‚ └─ pyqt 4.11.* , which requires β”‚ β”‚ └─ qt 4.8.* , which does not exist (perhaps a missing channel); β”‚ β”œβ”€ matplotlib [2.0.2|2.1.0|...|2.2.4] would require β”‚ β”‚ └─ python >=2.7,<2.8.0a0 , which can be installed; β”‚ β”œβ”€ matplotlib [2.0.2|2.1.0|...|3.3.4] would require β”‚ β”‚ └─ python >=3.6,<3.7.0a0 , which can be installed; β”‚ β”œβ”€ matplotlib [2.1.2|2.2.2|...|3.5.3] would require β”‚ β”‚ └─ python >=3.7,<3.8.0a0 , which can be installed; β”‚ β”œβ”€ matplotlib [2.2.4|3.1.1|...|3.7.3] would require β”‚ β”‚ └─ python >=3.8,<3.9.0a0 , which can be installed; β”‚ β”œβ”€ matplotlib 2.2.5 would require β”‚ β”‚ └─ matplotlib-base >=2.2.5,<2.2.6.0a0 with the potential options β”‚ β”‚ β”œβ”€ matplotlib-base 2.2.5 would require β”‚ β”‚ β”‚ └─ python >=2.7,<2.8.0a0 , which can be installed; β”‚ β”‚ β”œβ”€ matplotlib-base [2.2.5|3.2.0|...|3.3.2] would require β”‚ β”‚ β”‚ └─ python >=3.6,<3.7.0a0 , which can be installed; β”‚ β”‚ β”œβ”€ matplotlib-base [2.2.5|3.2.0|...|3.3.2] would require β”‚ β”‚ β”‚ └─ python >=3.7,<3.8.0a0 , which can be installed; β”‚ β”‚ β”œβ”€ matplotlib-base [2.2.5|3.2.0|...|3.3.2] would require β”‚ β”‚ β”‚ └─ python >=3.8,<3.9.0a0 , which can be installed; β”‚ β”‚ └─ matplotlib-base [2.2.5|3.3.2] would require β”‚ β”‚ └─ python >=3.9,<3.10.0a0 , which can be installed; β”‚ β”œβ”€ matplotlib [3.2.0|3.2.1|3.2.2|3.3.0|3.3.1] would require β”‚ β”‚ └─ matplotlib-base [>=3.2.0,<3.2.1.0a0 |>=3.2.1,<3.2.2.0a0 |>=3.2.2,<3.2.3.0a0 |>=3.3.0,<3.3.1.0a0 |>=3.3.1,<3.3.2.0a0 ], whic; β”‚ β”œβ”€ matplotlib 3.3.2 would require β”‚ β”‚ └─ matplotlib-base >=3.3.2,<3.3.3.0a0 , which can be installed (as previously explained); β”‚ β”œβ”€ matplotlib [3.3.2|3.3.3|...|3.8.0] would require β”‚ β”‚ └─ python >=3.9,<3.10.0a0 , which can be installed; β”‚ β”œβ”€ matplotlib [3.4.3|3.5.0|...|3.8.0] would require β”‚ β”‚ └─ python >=3.10,<3.11.0a0 , which can be installed; β”‚ β”œβ”€ matplotlib [3.6.1|3.6.2|...|3.8.0] would require β”‚ β”‚ └─ python >=3.11,<3.12.0a0 , which can be installed; β”‚ └─ matplotlib [2.0.2|2.1.0|...|3.0.0] would require β”‚ └─ python >=3.5,<3.6.0a0 , which can be installed; └─ pin-1 is not installable because there are no viable options β”œβ”€ pin-1 1 would require β”‚ └─ python 3.12.* , which conflicts with any installable versions previously reported; └─ pin-1 1 would require └─ python 3.12.* , which conflicts with any installable versions previously reported. Pins seem to be involved in the conflict. 
Currently pinned specs: - python 3.12.* (labeled as 'pin-1') Running conda install matplotlib -c conda-forge from the same environment produces the same "Could not solve for environment specs" error tree, again ending with: Pins seem to be involved in the conflict. Currently pinned specs: - python 3.12.* (labeled as 'pin-1') I have set my solver to libmamba, so that's why the message looks a bit different. Isn't there any other way rather than downgrading my python to 3.9? I'd rather use the latest version 3.12.0.
As of the writing of the question, Python 3.12 is just a few days old. Days! You don't have to downgrade your Python to 3.9. It's conda. You can just use another environment with an older Python and install whatever is needed. That's what conda is made for. Matplotlib already works perfectly fine with Python 3.11, so there is no need to go down to 3.9. Really, 3.12 is just a few days old. Give it some time. Keep a Python 3.11 environment and use it for whatever you need matplotlib for. Keep a Python 3.12 environment for your very first steps in Python 3.12, and try to install matplotlib now and then. At some point, matplotlib will be there. Just wait a few days more.
2
5
77,266,671
2023-10-10
https://stackoverflow.com/questions/77266671/polars-drop-duplicate-row-based-on-column-subset-but-keep-first
Given the following table, I'd like to remove the duplicates based on the column subset col1,col2. I'd like to keep the first row of the duplicates though: data = { 'col1': [1, 2, 3, 1, 1], 'col2': [7, 8, 9, 7, 7], 'col3': [3, 4, 5, 6, 8] } tmp = pl.DataFrame(data) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ col1 ┆ col2 ┆ col3 β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•ͺ══════β•ͺ══════║ β”‚ 1 ┆ 7 ┆ 3 β”‚ β”‚ 2 ┆ 8 ┆ 4 β”‚ β”‚ 3 ┆ 9 ┆ 5 β”‚ β”‚ 1 ┆ 7 ┆ 6 β”‚ β”‚ 1 ┆ 7 ┆ 9 β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ The result should be β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ col1 ┆ col2 ┆ col3 β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•ͺ══════β•ͺ══════║ β”‚ 1 ┆ 7 ┆ 3 β”‚ β”‚ 2 ┆ 8 ┆ 4 β”‚ β”‚ 3 ┆ 9 ┆ 5 β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ Usually I'd do this with pandas df["col1","col2"].is_duplicated(keep='first'), but polars dl.is_duplicated() marks all rows as duplicates including the first occurence.
You can use DataFrame.unique; the flexible keep keyword argument is available in Polars as well. tmp.unique(('col1', 'col2'), keep='first', maintain_order=True)
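Applied to the sample data from the question, for reference:

import polars as pl

tmp = pl.DataFrame({
    "col1": [1, 2, 3, 1, 1],
    "col2": [7, 8, 9, 7, 7],
    "col3": [3, 4, 5, 6, 8],
})
out = tmp.unique(subset=["col1", "col2"], keep="first", maintain_order=True)
print(out)
# Keeps only the first of the duplicated (1, 7) rows:
# (1, 7, 3), (2, 8, 4), (3, 9, 5)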
2
6
77,251,236
2023-10-7
https://stackoverflow.com/questions/77251236/cloudbuild-failing-on-flask-app-due-to-incopatibility-of-functions-framework-and
I'm trying to build my code using CloudBuild, but it's failing with the following message: Error: Error while updating cloudfunction configuration: Error waiting for Updating CloudFunctions Function: Error code 3, message: Build failed: found incompatible dependencies: "functions-framework 3.0.0 has requirement flask<3.0,>=1.0, but you have flask 3.0.0."; Error ID: 5503c41a β”‚ I've checked my dependencies and subdependencies, and I've tried to release the same code that was releasable 2 weeks ago, and I'm getting this error everywhere. I'm not using Flask 3.0.0 anywhere, my version is 2.3.3. My guess is that this is some temporary problem, since I haven't found anything online, and google has already had problems with functions-framework before, but I'm interested if someone has found a workaround?
Solved by pinning functions-framework to a fixed (latest) version in my requirements.txt, which is 3.4.0 at the time of writing this answer. Why GCP doesn't use the latest version of their own library is beyond me.
3
5
77,228,915
2023-10-4
https://stackoverflow.com/questions/77228915/window-all-throwing-an-error-with-pyflink-kafka-connector
I am trying to print the datastream by applying the tumbling process window for every 5 seconds. Since I couldn't implement the custom deserializer for now, I created the process function which returns the result as tuple, and as per this documentation link I could link the process function with the windowing operation so I tried this out: def get_data(self): source = self.__get_kafka_source() ds = self.env.from_source(source, WatermarkStrategy.no_watermarks(), "Kafka Source").window_all( TumblingProcessingTimeWindows.of(Time.seconds(5))) ds.process(ExtractingRecordAttributes(), output_type=Types.TUPLE( [Types.STRING(), Types.STRING(), Types.STRING()])).print() self.env.execute("source") def __get_kafka_source(self): source = KafkaSource.builder() \ .set_bootstrap_servers("localhost:9092") \ .set_topics("test-topic1") \ .set_group_id("my-group") \ .set_starting_offsets(KafkaOffsetsInitializer.latest()) \ .set_value_only_deserializer(SimpleStringSchema()) \ .build() return source class ExtractingRecordAttributes(KeyedProcessFunction): def __init__(self): pass def process_element(self, value: str, ctx: 'KeyedProcessFunction.Context'): parts = UserData(*ast.literal_eval(value)) result = (parts.user, parts.rank, str(ctx.timestamp())) yield result def on_timer(self, timestamp, ctx: 'KeyedProcessFunction.OnTimerContext'): yield "On timer timestamp: " + str(timestamp) When I trigger the get_data method, it gives me the below error: return self._wrapped_function.process(self._internal_context, input_data) AttributeError: 'ExtractingRecordAttributes' object has no attribute 'process' at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395) at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999) at org.apache.beam.sdk.util.MoreFutures.get(MoreFutures.java:61) at org.apache.beam.runners.fnexecution.control.SdkHarnessClient$BundleProcessor$ActiveBundle.close(SdkHarnessClient.java:504) at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory$1.close(DefaultJobBundleFactory.java:555) at org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.finishBundle(BeamPythonFunctionRunner.java:421) ... 7 more If I don't use window_all, everything works fine. But the moment I introduce it, it fails. What am I doing wrong here? Any hints would be helpful. I am using Pyflink 1.17.1 TIA.
The problem is for windowing you need to use ProcessWindowFunction and not ProcessFunction. Ideally, your code should look like this: class ExtractingRecordAttributes(ProcessWindowFunction): def __init__(self): pass def process(self, key: str, context: 'ProcessWindowFunction.Context', elements: Iterable[Tuple[str, str]]) -> Iterable[Tuple[str, str, str]]: result = "" for element in elements: parts = UserData(*ast.literal_eval(str(element))) result = (parts.user, parts.rank, str(context.current_processing_time())) yield result def clear(self, context: 'ProcessWindowFunction.Context'): pass I haven't tested this, but I think it should work. Here is the reference from the documentation.
3
1
77,258,692
2023-10-9
https://stackoverflow.com/questions/77258692/how-can-i-efficiently-get-multiple-slices-out-of-a-large-dataset
I want to get multiple small slices out of a large time-series dataset (~ 25GB, ~800M rows). At the moment, this looks something like this: import polars as pl sample = pl.scan_csv(FILENAME, new_columns=["time", "force"]).slice(660_000_000, 3000).collect() This code takes about 0-5 minutes, depending on the position of the slice I want to get. If I want 5 slices, this takes maybe 15 minutes to run everything. However, since polars is reading the whole csv anyhow, I was wondering if there is a way to get all the slices I want in one go, so polars only has to read the csv once. Chaining multiple slices (obviously) doesn't work; maybe there is some other way?
You should be able to run them all in parallel with .collect_all() lf = pl.scan_csv(FILENAME, new_columns=["time", "force"]) samples = [ lf.slice(660_000_000, 3000), lf.slice(890_000_000, 5000), ... ] samples = pl.collect_all(samples)
2
3
77,257,750
2023-10-9
https://stackoverflow.com/questions/77257750/how-to-move-up-values-in-specific-columns-as-long-as-possible
My input is this dataframe : df = pd.DataFrame({'class': ['class_a', 'class_a', 'class_a', 'class_a', 'class_b', 'class_b', 'class_c', 'class_c', 'class_d'], 'id': ['id1', 'id2', 'id3', '', '', 'id4', 'id5', '', 'id6'], 'name': ['abc1', '', '', 'abc2', 'abc3', '', '', 'abc4', 'abc5']}) print(df) class id name 0 class_a id1 abc1 1 class_a id2 2 class_a id3 3 class_a abc2 4 class_b abc3 5 class_b id4 6 class_c id5 7 class_c abc4 8 class_d id6 abc5 And I need to have one single row per pair (class/id). We need to move up the values in id and name as long as it's possible. class id name 0 class_a id1 abc1 1 class_a id2 abc2 2 class_a id3 3 class_b id4 abc3 4 class_c id5 abc4 5 class_d id6 abc5 I tried with the code below but I get a messed up result : df.replace('',np.nan,inplace=True) df.groupby(['class', 'id'], as_index=False, sort=False).first() class id name 0 class_a id1 abc1 1 class_a id2 None 2 class_a id3 None 3 claas_b id4 None 4 class_c id5 None 5 class_d id6 abc5 Can you provide an explanation please ?
Use DataFrame.melt with sorting by key parameter for correct ordering, then reshape by DataFrame.pivot with GroupBy.cumcount and last remove empty rows if exist at least one value: out = (df.melt('class').sort_values('value', key=lambda x: x.eq('')) .assign(g = lambda x: x.groupby(['class','variable']).cumcount()) .pivot(index=['class','g'], columns='variable', values='value') .droplevel(1) .reset_index() .rename_axis(None, axis=1)) cols = out.columns.difference(['class']) out = out[out[cols].ne('').any(axis=1) | ~df['class'].duplicated(keep=False)] print (out) class id name 0 class_a id1 abc1 1 class_a id2 abc2 2 class_a id3 4 class_b id4 abc3 6 class_c id5 abc4 8 class_d id6 abc5 9 class_e Input DataFrame: df = pd.DataFrame({'class': ['class_a', 'class_a', 'class_a', 'class_a', 'class_b', 'class_b', 'class_c', 'class_c', 'class_d', 'class_e'], 'id': ['id1', 'id2', 'id3', '', '', 'id4', 'id5', '', 'id6',''], 'name': ['abc1', '', '', 'abc2', 'abc3', '', '', 'abc4', 'abc5','']}) print(df) class id name 0 class_a id1 abc1 1 class_a id2 2 class_a id3 3 class_a abc2 4 class_b abc3 5 class_b id4 6 class_c id5 7 class_c abc4 8 class_d id6 abc5 9 class_e
2
2
77,257,822
2023-10-9
https://stackoverflow.com/questions/77257822/melting-only-one-level-of-a-multi-index-dataframe
I am working with a MultiIndex DataFrame and would like to melt out one of the column levels, I have found a way to do this but it involves a two step process (detailed below). I am wondering if there is a way in which I can use df.melt() to achieve the intended result in one step Here is a simplified example of the df I am working with import pandas as pd import datetime import numpy as np cols = pd.MultiIndex.from_tuples( [ ('A', 'temp_avg'), ('A', 'temp_predicted'), ('B', 'temp_avg'), ('B', 'temp_predicted'), ] ) df = pd.DataFrame( np.random.randn(4,4), columns=cols, index=pd.date_range( datetime.datetime(2013,1,1), datetime.datetime(2013,1,4), ) ) df.index.names=['date'] My current two-step solution is as follows: df_melt = df.reset_index().melt( id_vars=[('date', "")], var_name=['location', 'name'], ).rename({('date', ''): 'date'}, axis=1) df_desired = df_melt.pivot_table( index=['date', 'location'], columns='name', values='value', ) I wish to arrive at df_desired by a one-step use of df.melt(**kwargs) or another function
You just need to stack: df_desired = df.stack(0) With name: df_desired = df.rename_axis(['location', None], axis=1).stack('location') Output: temp_avg temp_predicted date location 2013-01-01 A 0.018696 -1.135884 B 0.064724 0.790992 2013-01-02 A -1.572779 -0.365371 B -0.572017 0.742684 2013-01-03 A -3.018399 1.081398 B 1.223285 -1.810627 2013-01-04 A -0.889924 -0.652284 B -0.251828 0.098451
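On pandas 2.1 or newer, the reworked stack implementation can be requested explicitly through the future_stack flag (assuming your version exposes it); the result for this frame is the same:
df_desired = df.rename_axis(['location', None], axis=1).stack('location', future_stack=True)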
2
3
77,255,758
2023-10-9
https://stackoverflow.com/questions/77255758/how-can-i-mock-my-environment-variables-for-my-pytest
I've looked at some examples to mock environment variables. Here In my case, I have to mock a variable in my config.py file that is setup as follows: class ServiceConfig(BaseSettings): ... API_AUDIENCE: str = os.environ.get("API_AUDIENCE") ... ... I have that called here: class Config(BaseSettings): auth: ClassVar[ServiceConfig] = ServiceConfig() I've attempted to patch that value as follows: @mock.patch.dict(os.environ, {"API_AUDIENCE": "https://mock.com"}) class Test(TestCase): Another way I've tried is: @patch('config.Config.API_AUDIENCE', "https://mock.com") def ....: The setup is incorrect both ways, getting this error: E pydantic_core._pydantic_core.ValidationError: 1 validation error for ServiceConfig E API_AUDIENCE What is the correct way to setup mocking an environment variable?
To update the environment, and restore it after the test: import os import pytest from unittest import mock @pytest.fixture() def setenvvar(monkeypatch): with mock.patch.dict(os.environ, clear=True): envvars = { "API_AUDIENCE": "https://mock.com", } for k, v in envvars.items(): monkeypatch.setenv(k, v) yield # This is the magical bit which restores the environment afterwards This is just a fixture; you can use it (or not) in any test you want, at the scope you want, with no extra dependencies.
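A minimal usage sketch (the test name and assertion are just illustrations): request the fixture as an argument and the patched variables are visible inside the test body:
import os

def test_api_audience_is_mocked(setenvvar):
    assert os.environ["API_AUDIENCE"] == "https://mock.com"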
4
8
77,245,595
2023-10-6
https://stackoverflow.com/questions/77245595/fastapi-testclient-overriding-lifespan-function
In a more complicated setup using the python dependency injector framework I use the lifespan function for the FastAPI app object to correctly wire everything. When testing I'd like to replace some of the objects with different versions (fakes), and the natural way to accomplish that seems to me like I should override or mock the lifespan function of the app object. However I can't seem to figure out if/how I can do that. MRE follows import pytest from contextlib import asynccontextmanager from fastapi.testclient import TestClient from fastapi import FastAPI, Response, status greeting = None @asynccontextmanager async def _lifespan(app: FastAPI): # Initialize dependency injection global greeting greeting = "Hello" yield @asynccontextmanager async def _lifespan_override(app: FastAPI): # Initialize dependency injection global greeting greeting = "Hi" yield app = FastAPI(title="Test", lifespan=_lifespan) @app.get("/") async def root(): return Response(status_code=status.HTTP_200_OK, content=greeting) @pytest.fixture def fake_client(): with TestClient(app) as client: yield client def test_override(fake_client): response = fake_client.get("/") assert response.text == "Hi" So basically in the fake_client fixture I'd like to change it to use the _lifespan_override instead of the original _lifespan, making the dummy test-case above pass I'd have expected something like with TestClient(app, lifespan=_lifespan_override) as client: to work, but that's not supported. Is there some way I can mock it to get the behavior I want? (The mre above works if you replace "Hi" with "Hello" in the assert statement) pyproject.toml below with needed dependencies [tool.poetry] name = "mre" version = "0.1.0" description = "mre" authors = [] [tool.poetry.dependencies] python = "^3.10" fastapi = "^0.103.2" [tool.poetry.group.dev.dependencies] pytest = "^7.1.2" httpx = "^0.25.0" [build-system] requires = ["poetry-core"] build-backend = "poetry.core.masonry.api" EDIT: Tried extending my code with the suggestion from Hamed Akhavan below as follows @pytest.fixture def fake_client(): app.dependency_overrides[_lifespan] = _lifespan_override with TestClient(app) as client: yield client but it doesn't work, even though it looks like it should be the right approach. Syntax problem?
I found a solution to my problem that didn't include overriding the lifespan function, so it is not a general solution to the question above. As I mentioned, my specific problem in the real application was using the python dependency injector framework, and it provides an override method for its containers. So the solution was to use that override functionality when wiring the dependencies during testing, which means the lifespan function doesn't need to be touched. Here's a complete working MRE in case anyone is interested. import pytest from contextlib import asynccontextmanager from fastapi.testclient import TestClient from fastapi import FastAPI, Response, status, Depends from dependency_injector import containers, providers from dependency_injector.wiring import Provide, inject class HelloGreeter(): def greet(self): return "Hello" class Container(containers.DeclarativeContainer): greeter = providers.Singleton(HelloGreeter) @asynccontextmanager async def _lifespan(app: FastAPI): # Initialize dependency injection container = Container() container.wire(modules=[__name__]) yield app = FastAPI(title="Test", lifespan=_lifespan) @app.get("/") @inject async def root(greeter=Depends(Provide[Container.greeter])): return Response(status_code=status.HTTP_200_OK, content=greeter.greet()) @pytest.fixture def fake_client(): class HiGreeter(): def greet(self): return "Hi" with Container.greeter.override(HiGreeter()): with TestClient(app) as client: yield client def test_override(fake_client): response = fake_client.get("/") assert response.text == "Hi"
6
1
77,234,199
2023-10-5
https://stackoverflow.com/questions/77234199/msys2-and-embedding-python-no-module-named-encodings
I'm trying to use embedded python in my C++ dll library. The library is built and compiled in MSYS2 using GCC compiler, CMake and Ninja. Python 3.10 is also installed on MSYS2 using pacman. Windows 10 env contains C:\msys64\mingw64\bin in Path (python is also located there). Python doesn't installed on Windows, only on MSYS2. This is what CMakeLists.txt contains: cmake_minimum_required(VERSION 3.26) project(python_test) set(CMAKE_CXX_STANDARD 17) find_package(Python REQUIRED Development) add_executable(python_test main.cpp) target_link_libraries(python_test PRIVATE Python::Python) Simple test code: int main() { Py_Initialize(); PyRun_SimpleString("from time import time,ctime\n" "import numpy as np\n" "print('Today is', ctime(time()))\n"); if (Py_FinalizeEx() < 0) exit(120); return 0; } When I run this code I got this error: Could not find platform independent libraries <prefix> Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Python path configuration: PYTHONHOME = (not set) PYTHONPATH = (not set) program name = 'python3' isolated = 0 environment = 1 user site = 1 import site = 1 sys._base_executable = 'C:\\Users\\someUsername\\CLionProjects\\python_test\\cmake-build-debug\\python_test.exe' sys.base_prefix = 'D:\\a\\msys64\\mingw64' sys.base_exec_prefix = 'D:\\a\\msys64\\mingw64' sys.platlibdir = 'lib' sys.executable = 'C:\\Users\\someUsername\\CLionProjects\\python_test\\cmake-build-debug\\python_test.exe' sys.prefix = 'D:\\a\\msys64\\mingw64' sys.exec_prefix = 'D:\\a\\msys64\\mingw64' sys.path = [ 'D:\\a\\msys64\\mingw64\\lib\\python310.zip', 'D:\\a\\msys64\\mingw64\\lib\\python3.10', 'D:\\a\\msys64\\mingw64\\lib\\lib-dynload', '', ] Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding Python runtime state: core initialized ModuleNotFoundError: No module named 'encodings' Current thread 0x00004564 (most recent call first): <no Python frame> But python command work both in MSYS2/MinGW console and Windows cmd. When I specify PYTHONHOME to C:\msys64\mingw64\bin in Windows 10 environment, I have got the same error again in Clion, and now in Windows command line too. How to resolve this problem?
I set PYTHONHOME=C:\msys64\mingw64\bin and, more importantly, PYTHONPATH=C:\msys64\mingw64\lib\python3.10;C:\msys64\mingw64\lib\python3.10\site-packages;C:\msys64\mingw64\lib\lib-dynload and it solved my problem.
3
2
77,253,713
2023-10-8
https://stackoverflow.com/questions/77253713/calling-loc-with-a-boolean-array-containing-na
The pandas doc on loc states that it can be used with boolean arrays; more specifically, it states the following: "Allowed inputs are: ... A boolean array (any NA values will be treated as False)." My question: How can you create a boolean array containing NA values? I mean: a numpy bool array can't contain NaNs, and if we interpret this more liberally as meaning "a list containing boolean values and NA", then loc throws exceptions, e.g.: d_test = pd.DataFrame({"id": [1,2,3,5], "q1": [1,4,4,2], "q2": [4,np.nan,9,0]}, index=["a","b","c","d"]) t1 = [True,False,False,np.nan] d_test.loc[t1] # KeyError #same with None: t1 = [True,False,False,None] So my question: How is this sentence to be interpreted?
I think they mean a BooleanArray (which can be created with pd.array) : Array of boolean (True/False) data with missing values. t1 = [True,False,False,np.nan] out = d_test.loc[pd.array(t1, dtype="boolean")] So, since np.nan is treated as False, only the first row is being selected by the mask. Output : print(out) id q1 q2 a 1 1 4.00
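In practice such a mask usually comes out of an operation on a nullable (extension) dtype rather than being written out by hand. A small sketch with a hypothetical nullable-integer column (not part of the original question):
import pandas as pd
import numpy as np

d_test = pd.DataFrame({"id": [1, 2, 3, 5], "q1": [1, 4, 4, 2],
                       "q2": [4, np.nan, 9, 0]}, index=["a", "b", "c", "d"])

# Comparisons on nullable dtypes produce a "boolean"-dtype mask containing <NA>
s = pd.Series([10, None, 3, 8], dtype="Int64", index=d_test.index)
mask = s > 5                 # [True, <NA>, False, True], dtype "boolean"
print(d_test.loc[mask])      # <NA> is treated as False, so rows "a" and "d" are kept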
3
3
77,249,095
2023-10-7
https://stackoverflow.com/questions/77249095/python-select-linter-option-is-missing
VS Code is missing the "Python: Select Linter" option when trying to run a command (F1 or Ctrl/CMD+Shift+P). Previously (2 days ago), this option was available. Is there a way to restore this option, or is there a new alternative? I have tried creating a new VS Code profile and installing just the Python extension - still no sign of the option.
Rolling back the Python extension to the previous version (v2023.16.0) restores the option.
4
0
77,252,117
2023-10-8
https://stackoverflow.com/questions/77252117/what-is-the-difference-between-built-in-sum-and-math-fsum-in-python-3-12
In the What's New document for 3.12 there is a point: sum() now uses Neumaier summation to improve accuracy and commutativity when summing floats or mixed ints and floats. (Contributed by Raymond Hettinger in gh-100425.) But also there is a math.fsum function that is intended to be used for exact floating-point number summation. I guess it should be using some similar algorithm. I tried different cases on 3.11 and 3.12. On 3.11 sum() gives less precise results as expected. But on 3.12 they always return the same good results and I wasn't able to find a case when it differs. Which one should I use? Is there cases left when one should prefer math.fsum() over built-in sum()?
Here is the GitHub issue from your question: https://github.com/python/cpython/issues/100425 According to the detailed discussion there, sum has been about 10x faster than fsum. This new optimisation is supposed to run in parallel to the summation, making it essentially free. sum is now more accurate than before. fsum should still be even more accurate in certain cases and slower. That seems to be its point. If the accuracy of sum results is acceptable to you or you need its speed, then you can go with it.
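If the speed trade-off matters for your workload, it is easy to measure on your own interpreter (numbers will vary by machine and Python version):
import math
import random
import timeit

xs = [random.random() for _ in range(1_000_000)]
print("sum:      ", timeit.timeit(lambda: sum(xs), number=20))
print("math.fsum:", timeit.timeit(lambda: math.fsum(xs), number=20))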
5
5
77,251,673
2023-10-7
https://stackoverflow.com/questions/77251673/futurewarning-dataframe-groupby-with-axis-1-is-deprecated-do-frame-t-groupby
I have been using these two lines to get stock data: df = yf.download(tickers, group_by="ticker") d = {idx: gp.xs(idx, level=0, axis=1) for idx, gp in df.groupby(level=0, axis=1)} from https://stackoverflow.com/a/66989947/21955590. While it works fine, I can't figure out how to avoid getting the warning in the title. How should this code be rewritten with df.T.groupby without the axis=1 without causing an error?
Edit: In fact, you don't need groupby: d = {ticker: df[ticker] for ticker in df.columns.levels[0]} There is a discussion about groupby(axis=1) on github. Another solution is to stack the ticker level: d = {idx: gp.xs(idx, level=1) for idx, gp in df.stack(level=0).groupby(level=1)} Output: >>> d {'AAPL': Open High Low Close Adj Close Volume Date 1980-12-12 0.128348 0.128906 0.128348 0.128348 0.099449 469033600.0 1980-12-15 0.122210 0.122210 0.121652 0.121652 0.094261 175884800.0 1980-12-16 0.113281 0.113281 0.112723 0.112723 0.087343 105728000.0 1980-12-17 0.115513 0.116071 0.115513 0.115513 0.089504 86441600.0 1980-12-18 0.118862 0.119420 0.118862 0.118862 0.092099 73449600.0 ... ... ... ... ... ... ... 2023-10-02 171.220001 174.300003 170.929993 173.750000 173.750000 52164500.0 2023-10-03 172.259995 173.630005 170.820007 172.399994 172.399994 49594600.0 2023-10-04 171.089996 174.210007 170.970001 173.660004 173.660004 53020300.0 2023-10-05 173.789993 175.449997 172.679993 174.910004 174.910004 48527900.0 2023-10-06 173.800003 177.990005 173.179993 177.490005 177.490005 57224100.0 [10795 rows x 6 columns], 'MSFT': Open High Low Close Adj Close Volume Date 1986-03-13 0.088542 0.101563 0.088542 0.097222 0.060396 1.031789e+09 1986-03-14 0.097222 0.102431 0.097222 0.100694 0.062553 3.081600e+08 1986-03-17 0.100694 0.103299 0.100694 0.102431 0.063632 1.331712e+08 1986-03-18 0.102431 0.103299 0.098958 0.099826 0.062014 6.776640e+07 1986-03-19 0.099826 0.100694 0.097222 0.098090 0.060936 4.789440e+07 ... ... ... ... ... ... ... 2023-10-02 316.279999 321.890015 315.179993 321.799988 321.799988 2.057000e+07 2023-10-03 320.829987 321.390015 311.209991 313.390015 313.390015 2.103350e+07 2023-10-04 314.029999 320.040009 314.000000 318.959991 318.959991 2.072010e+07 2023-10-05 319.089996 319.980011 314.899994 319.359985 319.359985 1.696560e+07 2023-10-06 316.549988 329.190002 316.299988 327.260010 327.260010 2.564550e+07 [9469 rows x 6 columns]}
6
4
77,251,464
2023-10-7
https://stackoverflow.com/questions/77251464/how-to-merge-rows-with-nearest-values-in-dataframe
I have a DataFrame like this: index B 0 1 1 2 2 5 3 6 4 7 5 10 And I need to merge rows where the difference is less than or equal to 2, keep the row with the smaller value, and record how many rows were merged in a count column. The result should be like this: index B count 0 1 2 1 5 3 2 10 1 How can this be solved using pandas?
Very similar to @AndrejKesely, just shorter syntax, using named aggregations: df.groupby(df["B"].diff().gt(2).cumsum(), as_index=False).agg(B=("B", "first"), count=("B", "count")) Output: B count 0 1 2 1 5 3 2 10 1
3
7
77,250,919
2023-10-7
https://stackoverflow.com/questions/77250919/matrix-multiplication-of-numpy-ndarrays
I have two numpy arrays, a and B, like this. >> a [1 2 3] >> type(a) <class 'numpy.ndarray'> >> B [[1 2 3] [2 2 7] [3 4 6]] >> type(B) <class 'numpy.ndarray'> I want to do the matrix multiplication like a * B * a_transpose, which is a (1*3)*(3*3)*(3*1) matrix multiplication and should result in a (1*1) scalar. How do I do this in numpy?
a.T is the transpose of matrix a. temp = np.dot(a, B) # a * B final = np.dot(temp, a.T) # (a * B) * a_transpose The answer for your example is 155.
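Since a is one-dimensional here, the transpose is effectively a no-op, and the whole chain can also be written more compactly with the @ operator:
result = a @ B @ a   # same as np.dot(np.dot(a, B), a); gives 155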
2
1
77,248,861
2023-10-7
https://stackoverflow.com/questions/77248861/failing-to-run-turtle-graphics-program-on-macos
I'm a Python newbie working on my first Turtle Graphics program. This is what I have at the moment: import turtle def draw_square(): window = turtle.Screen() window.bgcolor("red") brad = turtle.Turtle() brad.forward(100) window.exitonclick() draw_square() This code works perfectly on my friend's Windows laptop. However, a problem arises when we try to run it on my MacBook: the Python Turtle Graphics window displays nothing, as shown below. Why can't my program run normally on macOS? And how can I resolve this issue?
This is a known issue that arises due to an incompatibility between MacOS and Tkinter, on which Turtle is based. In the terminal, use the following code: brew install tcl-tk env \ PATH="$(brew --prefix tcl-tk)/bin:$PATH" \ LDFLAGS="-L$(brew --prefix tcl-tk)/lib" \ CPPFLAGS="-I$(brew --prefix tcl-tk)/include" \ PKG_CONFIG_PATH="$(brew --prefix tcl-tk)/lib/pkgconfig" \ CFLAGS="-I$(brew --prefix tcl-tk)/include" \ PYTHON_CONFIGURE_OPTS="--with-tcltk-includes='-I$(brew --prefix tcl-tk)/include' --with-tcltk-libs='-L$(brew --prefix tcl-tk)/lib -ltcl8.6 -ltk8.6'" \ pyenv install 3.8.13
3
2
77,248,988
2023-10-7
https://stackoverflow.com/questions/77248988/how-importing-from-a-script-works-differently-than-importing-from-a-module
I have a structure of files and folders like this: package1/ p1.py package2/ p2.py Contents of package1/p1.py: def p1fun(): print("p1fun") Contents of package1/package2/p2.py: import package1.p1 if __name__ == '__main__': package1.p1.p1fun() Now, when I do python -m package1.package2.p2, I get the correct result = p1fun. When I do python package1/package2/p2.py, I get: Traceback (most recent call last): File "/home/marcin/projects/test/package1/package2/p2.py", line 1, in <module> import package1.p1 ModuleNotFoundError: No module named 'package1' How does the import mechanism differ in these two scenarios? How can I make the "script" scenario import properly?
You just have to add the directory containing package1 to sys.path. A simple way is to set the PYTHONPATH environment variable to .: export PYTHONPATH='.' # on Linux or other Unix-like set PYTHONPATH=. # on Windows Underlying cause: As explained by @KlausD. , python automatically adds the path of the script passed on the command line, or the current path when you start a module with python -m. A simple way to see it is to print sys.path close to the beginning of p2.py: import package1.p1 import sys print(sys.path) if __name__ == '__main__': package1.p1.p1fun() For python -m package1.package2.p2 you can see that the first element of the list is the current folder, for python package1/package2/p2.py you will instead have: /path/to/current/package1/package2
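If you really need the script form to keep working without touching the environment, a common (if less clean) workaround is to extend sys.path at the top of p2.py before the package import; the parents[2] index below assumes exactly the layout shown in the question:
import sys
from pathlib import Path

# .../project/package1/package2/p2.py -> add .../project to sys.path
sys.path.insert(0, str(Path(__file__).resolve().parents[2]))

import package1.p1

if __name__ == '__main__':
    package1.p1.p1fun()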
2
3
77,240,105
2023-10-5
https://stackoverflow.com/questions/77240105/python-typing-generic-type-that-has-the-same-interface-as-the-wrapper-type
I would like to define a sort of "wrapper" Generic Type, say MyType[T], so that it has the same type interface as the wrapped type. from typing import Generic, TypeVar T = TypeVar("T") class MyType(Generic): pass # what to write here? So, as an example, when I have a type MyType[int], the type-checker should treat it as if it was an int type. Is that possible? If so, how?
To confirm, you're looking at wanting the expression MyType[T] to mean to a static type checker "A subclass of MyType and T", such that a declaration class MyType: attr: object will result in the following (e.g. using mypy and int.conjugate as an example): >>> reveal_type(MyType[int].conjugate) # def (self: builtins.int) -> builtins.int >>> obj: MyType[int] = MyType[int]() >>> reveal_type(obj.attr) # builtins.object No, this isn't a supported idiom in Python typing. Your use case should be covered by a future implementation of intersection types with a proposed syntax MyType & T instead. To a static type checker, the syntax MyType[T] means and only means that MyType is a generic type (including generic structural types), and the API that MyType has is derived from any metaclasses, any base classes, and anything declared under class MyType's body. Any type parameterisation (e.g. passing int to T in MyType[T]) only affects generic types in the class bases and statements in the class body; There is nothing you can write in a class body which tells a static type checker that the class inherits the API from another class, as this information is already given by the metaclass and base class(es); <type>[T] is only well-defined as a statically-inferrable type when <type> is a user-defined generic type. The static types inferred from other arbitrary attempts to make <type>[T] a valid expression is unstable and implementation-defined, regardless of the runtime implementation. from typing import * T = TypeVar("T") class M(type): def __getitem__(cls, type_: T, /) -> T: ... class A(metaclass=M): ... class B: def __class_getitem__(cls, type_: T, /) -> T: ... class C(Generic[T]): ... >>> A_int: TypeAlias = A[int] # mypy: "A" expects no type arguments, but 1 given # pyright: Expected no type arguments for class "A" >>> B_int: TypeAlias = B[int] # mypy: "B" expects no type arguments, but 1 given # pyright: <no messages> >>> C_int: TypeAlias = C[int] >>> >>> reveal_type(A[int]) # mypy: <overloads of `int.__new__`> pyright: Type of "A[int]" is "Type[int]" >>> reveal_type(B[int]) # mypy: The type "type[B]" is not generic and not indexable pyright: Type of "B[int]" is "Type[int]" >>> reveal_type(C[int]) # mypy: Revealed type is "def () -> __main__.C[builtins.int]" pyright: Type of "C[int]" is "Type[C[int]]"
5
3
77,235,342
2023-10-5
https://stackoverflow.com/questions/77235342/run-external-program-inside-a-conda-environment-in-r
I am trying to run stitchr in R. For programs that run in Python, I use reticulate. I create a conda environment named r-reticulate, where I want to install stitchr and run it. I try the following: if (!('r-reticulate' %in% reticulate::conda_list()[,1])){ reticulate::conda_create(envname = 'r-reticulate', packages = 'python=3.10') } reticulate::use_condaenv('r-reticulate') reticulate::py_install("stitchr", pip = TRUE) system("stitchr -h") # this does not work But obviously enough, the system() call does not work, with the message error in running command. What would be the right way to do this? I had success in the past with anndata, for example. But this is an R package wrapper, so I can just do: reticulate::use_condaenv('r-reticulate') reticulate::py_install("anndata", pip = TRUE) data_h5ad <- anndata::read_h5ad("file.h5ad") How can I approach the stitchr case? EDIT: So I retrieved stitchr.py location during the package installation: /usr/local/Caskroom/miniconda/base/envs/r-reticulate/lib/python3.10/site-packages/Stitchr/stitchr.py I tried all the following but nothing works (see error messages): pyloc="/usr/local/Caskroom/miniconda/base/envs/r-reticulate/lib/python3.10/site-packages/Stitchr/stitchr.py" reticulate::source_python(pyloc) Error in py_run_file_impl(file, local, convert) : ImportError: attempted relative import with no known parent package Run reticulate::py_last_error() for details. reticulate::py_run_file(pyloc) Error in py_run_file_impl(file, local, convert) : ImportError: attempted relative import with no known parent package Run reticulate::py_last_error() for details. reticulate::py_run_string(paste(pyloc, "-h")) Error in py_run_string_impl(code, local, convert) : File "", line 1 /usr/local/Caskroom/miniconda/base/envs/r-reticulate/lib/python3.10/site-packages/Stitchr/stitchr.py -h SyntaxError: invalid syntax Run reticulate::py_last_error() for details. I am absolutely clueless on how to proceed here.
Here is maybe what you expect. shell: conda create --name=testenv python # or conda create --name=testenv python==3.10.13 if you want a specific version for jupyter for example conda activate testenv # to be sure which pip is: whereis pip ~/anaconda3/envs/testenv/bin/pip shell stitchr part, read from the doc of stitchr pip install stitchr IMGTgeneDL stitchrdl stitchr -v TRBV7-3*01 -j TRBJ1-1*01 -cdr3 CASSYLQAQYTEAFF It works with command line. shell cd ~ cp /home/extraits/anaconda3/envs/testenv/bin/stitchr ~/teststitchr.py ./teststitchr.py -v TRBV7-3*01 -j TRBJ1-1*01 -cdr3 CASSYLQAQYTEAFF It works with command line. Create ~/teststitchr2.py filled by the content of https://jamieheather.github.io/stitchr/importing.html ~/teststitchr2.py: # import stitchr from Stitchr import stitchrfunctions as fxn from Stitchr import stitchr as st # specify details about the locus to be stitched chain = 'TRB' species = 'HUMAN' # initialise the necessary data tcr_dat, functionality, partial = fxn.get_imgt_data(chain, st.gene_types, species) codons = fxn.get_optimal_codons('', species) # provide details of the rearrangement to be stitched tcr_bits = {'v': 'TRBV7-3*01', 'j': 'TRBJ1-1*01', 'cdr3': 'CASSYLQAQYTEAFF', 'l': 'TRBV7-3*01', 'c': 'TRBC1*01', 'skip_c_checks': False, 'species': species, 'seamless': False, '5_prime_seq': '', '3_prime_seq': '', 'name': 'TCR'} # then run stitchr on that rearrangement stitched = st.stitch(tcr_bits, tcr_dat, functionality, partial, codons, 3, '') print(stitched) # Which produces (['TCR', 'TRBV7-3*01', 'TRBJ1-1*01', 'TRBC1*01', 'CASSYLQAQYTEAFF', 'TRBV7-3*01(L)'], 'ATGGG snip snip snip snip snip snip TTC', 0) python in the shell python ./teststitchr2.py (['TCR', 'TRBV7-301', 'TRBJ1-101', 'TRBC101','CASSYLQAQYTEAFF','TRBV7-301(L)'],'ATG snip snip snip snip TTC', 0) In R: library(reticulate) reticulate::use_condaenv('testenv') py_run_file(file.path(path.expand('~'),'teststitchr2.py')) names(py) reticulate::py_run_file() populates the variable py: https://rstudio.github.io/reticulate/articles/calling_python.html#executing-code Here is, by names(py), all functions and variables from reticulate prefixed by py$ c("chain", "codons", "functionality", "fxn", "partial", "r", "species", "st", "stitched", "tcr_bits", "tcr_dat") In R: print(py$stitched ) It works :) [[1]] [1] "TCR" "TRBV7-3*01" "TRBJ1-1*01" "TRBC1*01" [5] "CASSYLQAQYTEAFF" "TRBV7-3*01(L)" [[2]] [1] "ATGGGCAC snip snip snip snip " [[3]] [1] 0 You can type myvar=py$stitched to have it in a variable and use it later. You can also try this: In R: tcr_bits2= list(v = "TRBV7-3*01", j = "TRBJ1-1*01", cdr3 = "CASSYLQAQYTEAFF", l = "TRBV7-3*01", c = "TRBC1*01", skip_c_checks = FALSE, species = "HUMAN", seamless = FALSE, `5_prime_seq` = "", `3_prime_seq` = "", name = "TCR") py$st$stitch(tcr_bits2, py$tcr_dat,py$functionality, py$partial, py$codons, 3, '') 'TCR''TRBV7-301''TRBJ1-101''TRBC101''CASSYLQAQYTEAFF''TRBV7-301(L)' 'ATG snip snip snip snip ATTTC' 0 Be careful I mixed R variable, tcr_bits2, and reticulate environment (py$). You can type myvar2=py$st$stitch(bla bla) to have it in a variable and use it later. It works again :) Edit: And a bad trick, in the Python side, if you have an issue of import, before from Stitchr import import os os.chdir(os.path.join(os.path.expanduser('~'), 'anaconda3/envs/testenv/lib/python3.12/site-packages')) But look at also How can I import a module dynamically given the full path? This trick (os.chdir()) is only for test, but try to not use it.
4
0
77,240,340
2023-10-5
https://stackoverflow.com/questions/77240340/python-pandas-select-and-drop-rows-grouped-by-multiple-columns-based-on-conditi
Suppose I have a pandas DataFrame with the following columns and data: user time session time_diff 0 21.0 2022-12-16 14:03:08 5 NaN 1 21.0 2022-12-16 14:03:10 5 2.0 2 21.0 2022-12-16 14:03:12 6 2.0 3 21.0 2022-12-16 14:03:13 6 1.0 4 21.0 2022-12-28 14:49:54 16 1039601.0 5 30.0 2022-12-16 14:03:16 5 1039598.0 6 30.0 2022-12-16 14:03:18 5 2.0 7 30.0 2022-12-16 14:03:20 6 2.0 I would like to select those rows where for the same user and session the time difference (time_diff column in seconds) is less than some threshold (10 seconds, for example). Which would result in the following output: user time session time_diff 1 21.0 2022-12-16 14:03:10 5 2.0 3 21.0 2022-12-16 14:03:13 6 1.0 6 30.0 2022-12-16 14:03:18 5 2.0 I could probably iterate through each row and select records where id = id of the preceding row and session = session of the preceding row but I believe this is not the most optimal approach. df.groupby(['user', 'session']).filter(lambda x: (x.time_diff <= 10).any()) also does not produce the expected result.
Option 1 Group by ["user", "session"] (df.groupby) and check .diff for column "time". For the resulting Series check < 10 seconds using Series.lt. Finally, use the resulting Series (populated with True & False) for boolean indexing to retrieve the desired subset. out = df[df.groupby(["user", "session"])['time'].diff() .lt(pd.Timedelta('00:00:10'))] out user time session time_diff 1 21.0 2022-12-16 14:03:10 5 2.0 3 21.0 2022-12-16 14:03:13 6 1.0 6 30.0 2022-12-16 14:03:18 5 2.0 Option 2 (Assuming your data is properly sorted on user and session.) Apply Series.diff to column "time" and check < 10 seconds. Now, also check whether row values for user and session are both equal to (df.eq) the values in the previous row (df.shift). Use df.all row-wise to get False for all shifts to a new group. Finally, apply boolean indexing to select from the df where both conditions are True (using the bitwise operator &). out2 = df[df.time.diff().lt(pd.Timedelta('00:00:10')) & df[['user','session']].eq(df[['user','session']].shift(1)).all(axis=1)] out2.equals(out) # True Performance comparison Option 1 will be fastest. Both will be faster than the solution offered by @AndrejKesely. # opt1: 1.75 ms Β± 176 Β΅s per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) # opt2: 3.1 ms Β± 133 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) # AK: 7.02 ms Β± 300 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) Data used import pandas as pd import numpy as np data = {'user': {0: 21.0, 1: 21.0, 2: 21.0, 3: 21.0, 4: 21.0, 5: 30.0, 6: 30.0, 7: 30.0}, 'time': {0: '2022-12-16 14:03:08', 1: '2022-12-16 14:03:10', 2: '2022-12-16 14:03:12', 3: '2022-12-16 14:03:13', 4: '2022-12-28 14:49:54', 5: '2022-12-16 14:03:16', 6: '2022-12-16 14:03:18', 7: '2022-12-16 14:03:20'}, 'session': {0: 5, 1: 5, 2: 6, 3: 6, 4: 16, 5: 5, 6: 5, 7: 6}, 'time_diff': {0: np.nan, 1: 2.0, 2: 2.0, 3: 1.0, 4: 1039601.0, 5: 2, 6: 2.0, 7: 2.0}} df = pd.DataFrame(data) df['time'] = pd.to_datetime(df['time'])
3
3
77,242,993
2023-10-6
https://stackoverflow.com/questions/77242993/how-to-reshape-pandas-dataframe-into-a-symmetric-matrix-corr-like-square-matrix
I have a df like below: name1 name2 value 0 A B 1300 1 A C 150 2 A D 300 3 B C 450 4 B D 200 5 C D 300 I tried to pivot the table to plot a corr-like heatmap based on the value column: table = df.pivot(columns='name1', index='name2', values='value') table The result is: name1 A B C name2 B 1300.0 NaN NaN C 150.0 450.0 NaN D 300.0 200.0 300.0 How can I create a square matrix that contains A-D in columns and rows with value of name1 name2 is same to name2 name1 (A/B = B/A = 1300)? Desired output: A B C D A NaN 1300.0 150.0 300.0 B 1300.0 NaN 450.0 200.0 C 150.0 450.0 NaN 300.0 D 300.0 200.0 300.0 NaN Reproducible input: import pandas as pd products_list = [['A', 'B', 1300], ['A', 'C', 150], ['A', 'D', 300], ['B', 'C', 450], ['B', 'D', 200], ['C', 'D', 300]] df = pd.DataFrame(products_list, columns=['name1', 'name2', 'value'])
You can combine_first the transpose after pivoting: # table = df.pivot(columns='name1', index='name2', values='value') out = table.combine_first(table.T) Output: A B C D A NaN 1300.0 150.0 300.0 B 1300.0 NaN 450.0 200.0 C 150.0 450.0 NaN 300.0 D 300.0 200.0 300.0 NaN Alternatively, but less elegant, swap the columns and concat before the pivot: out = (pd.concat([df, df.rename(columns={'name1': 'name2', 'name2': 'name1'})]) .pivot(columns='name1', index='name2', values='value') )
2
6
77,235,006
2023-10-5
https://stackoverflow.com/questions/77235006/importerror-cannot-import-name-docstring-from-matplotlib
Recently, my code involving matplotlib.pyplot suddenly stopped working on all my machines (Ubuntu 22.04 LTS). I tried a simple import and got the following error: $ python Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import matplotlib.pyplot as plt Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.10/dist-packages/matplotlib/pyplot.py", line 66, in <module> from matplotlib.figure import Figure, FigureBase, figaspect File "/usr/local/lib/python3.10/dist-packages/matplotlib/figure.py", line 43, in <module> from matplotlib import _blocking_input, backend_bases, _docstring, projections File "/usr/local/lib/python3.10/dist-packages/matplotlib/projections/__init__.py", line 58, in <module> from mpl_toolkits.mplot3d import Axes3D File "/usr/lib/python3/dist-packages/mpl_toolkits/mplot3d/__init__.py", line 1, in <module> from .axes3d import Axes3D File "/usr/lib/python3/dist-packages/mpl_toolkits/mplot3d/axes3d.py", line 23, in <module> from matplotlib import _api, cbook, docstring, _preprocess_data ImportError: cannot import name 'docstring' from 'matplotlib' (/usr/local/lib/python3.10/dist-packages/matplotlib/__init__.py) I am not sure what caused the problem, and how to diagnose or fix it. The matplotlib package is installed using pip as root, as I need it to be available to all users by default. Has anyone encountered a similar issue and know how to fix it?
As pointed out in comments by @Imsteffan, and the linked bug reports here and here, the issue happens because: ... docstring was removed after a deprecation cycle of 2 releases and changed to be private (_docstring) However, the line that is erroring in mplot3d/axes3d.py was updated #22148, indicating that you have a version of mpl that it is picking up that is <3.6 This is the case in my OS. It turns out that the apt package python3-matplotlib is also installed as the dependency of another package (qgis). So there are two versions of matplotlib, from pip and from apt, respectively. The solution is to remove one of the two versions. In the linked bug report, the suggestion is to remove the apt version: sudo apt remove python3-matplotlib But in my case, I can't remove the application that depends on the apt version. So I removed the pip version instead. And then the import works as expected. $ sudo pip uninstall matplotlib Found existing installation: matplotlib 3.8.0 Uninstalling matplotlib-3.8.0: Would remove: /usr/local/lib/python3.10/dist-packages/matplotlib-3.8.0.dist-info/* /usr/local/lib/python3.10/dist-packages/matplotlib/* /usr/local/lib/python3.10/dist-packages/mpl_toolkits/axes_grid1/* /usr/local/lib/python3.10/dist-packages/mpl_toolkits/axisartist/* /usr/local/lib/python3.10/dist-packages/mpl_toolkits/mplot3d/* /usr/local/lib/python3.10/dist-packages/pylab.py Proceed (Y/n)? Successfully uninstalled matplotlib-3.8.0
6
12
77,235,156
2023-10-5
https://stackoverflow.com/questions/77235156/check-if-a-columns-integer-is-in-another-columns-string-of-integers
A dataframe has two columns. One has a single integer per row. The other has a string of multiple integers, separated by ',', per row: import pandas as pd duck_ids = ["1, 4, 5, 7", "3, 11, 14, 27"] ducks_of_interest = [4,15] duck_df = pd.DataFrame( { "DucksOfInterests": ducks_of_interest, "DuckIDs": duck_ids } ) print(f"The starting dataframe:\n{duck_df}") DucksOfInterests DuckIDs 0 4 1, 4, 5, 7 1 15 3, 11, 14, 27 A new column is required that returns a True if the Duck of Interest is within the set of Duck IDs. This is attempted using a simple lambda function with the .apply method: duck_df['DoIinDIDs'] = duck_df.apply(lambda x: str(x['DuckIDs']) in [x['DucksOfInterests']], axis=1) This was expected to return a True for the first row, as 4 is a number in "1, 4, 5, 7", and False for the second row. However, the result is False for both rows: print(f"The dataframe with the additional column:\n{duck_df}") DucksOfInterests DuckIDs DoIinDIDs 0 4 1, 4, 5, 7 False 1 15 3, 11, 14, 27 False What is the error in the code or the approach?
You were almost there but unnecessarily used a list and swapped the names: duck_df['DoIinDIDs'] = duck_df.apply(lambda x: str(x['DucksOfInterests']) in x['DuckIDs'], axis=1) Output: DucksOfInterests DuckIDs DoIinDIDs 0 4 1, 4, 5, 7 True 1 15 3, 11, 14, 27 False Note, however, that this approach might fail as you rely on the whole string and 4 would be found in 1, 14, 20. You can instead split the string: duck_df['DoIinDIDs'] = duck_df.apply(lambda x: str(x['DucksOfInterests']) in x['DuckIDs'].split(', '), axis=1) Finally, as apply on axis=1 is slow, you can replace the whole thing by a list comprehension: duck_df['DoIinDIDs'] = [str(a) in b.split(', ') for a, b in zip(duck_df['DucksOfInterests'], duck_df['DuckIDs'])]
2
3
77,234,523
2023-10-5
https://stackoverflow.com/questions/77234523/python-is-writing-empty-lists-from-cvs-file-dictreader-data-in-python-3-10
So I have written this piece of code which is meant to read a csv file and write the data to a dictionary with keys "key1" "key2" "key3" "key4" with values given as a list comprised of data from columns of the csv file but with some modifications: import numpy as np import csv #code ... #code with open(file,'r') as fl: csv_file=csv.DictReader(fl) fdata={ 'key1':[data['THING'] for data in csv_file], 'key2':[f"{data['STR1']}, {data['STR2']}, {data['STR3']}" for data in csv_file], 'key3':[np.array([data['FL1'],data['FL2']],dtype=float) for data in csv_file], 'key4':[int(data['POPULATION']) for data in csv_file] } The problem is that all the lists come up empty except for the first one, the one with key value "key1", and even when I take out the typecast in "key4" it does not work. Initially I thought it had to do with numpy, type casting or the string formatting, but I don't know what is wrong; online there are a lot of examples like this, and using print() debugging shows that the data is being read. I don't know what to do here. Also I'm certain the keys for the csv file "THING", "STR1", "STR2", "STR3", "FL1", "FL2" are correct. Another thing is that the csv file is large, but not millions of rows long.
This looks like it's to do with how your csv.DictReader object is being iterated over. When you do a list comprehension over the csv_file object for key1, you exhaust the iterator. This means that when you try to do the list comprehension for key2, there's no data left in csv_file to iterate over, and so on for the subsequent keys. Instead, try reading the CSV data into a list first, and then perform your list comprehensions on this list. Try something like this: import numpy as np import csv with open(file, 'r') as fl: csv_file = csv.DictReader(fl) all_data = list(csv_file) fdata = { 'key1': [data['THING'] for data in all_data], 'key2': [f"{data['STR1']}, {data['STR2']}, {data['STR3']}" for data in all_data], 'key3': [np.array([data['FL1'], data['FL2']], dtype=float) for data in all_data], 'key4': [int(data['POPULATION']) for data in all_data] } By storing all the rows in all_data first, you ensure that you can iterate over the rows multiple times without exhausting the iterator.
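If holding every row in memory is a concern for such a large file, the same result can also be built in a single pass instead of materialising all_data first (a sketch reusing the column names from the question):
import csv
import numpy as np

fdata = {'key1': [], 'key2': [], 'key3': [], 'key4': []}
with open(file, 'r') as fl:
    for data in csv.DictReader(fl):
        fdata['key1'].append(data['THING'])
        fdata['key2'].append(f"{data['STR1']}, {data['STR2']}, {data['STR3']}")
        fdata['key3'].append(np.array([data['FL1'], data['FL2']], dtype=float))
        fdata['key4'].append(int(data['POPULATION']))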
3
1
77,233,769
2023-10-5
https://stackoverflow.com/questions/77233769/if-a-module-is-an-object-from-the-class-module-why-arent-regular-functions-co
Beginner here, trying to understand how Python works, and I came up with this question: shouldn't all functions be methods? Here's the code I used to verify this. my_module.py: def func(): pass Main.py: import inspect import my_module print(inspect.ismethod(my_module.func)) Output: False
Because methods are defined on the class. Any given module is an instance of the class module, not the definition of a new class. Compare to: class Foo: pass f = Foo() f.stuff = lambda: print('stuff') f.stuff is an instance attribute of that particular instance of Foo, not of Foos in general. The descriptor protocol (invoked when looking up an attribute on an instance, and finding it on the class, not the instance) is what converts functions on classes into methods of instances, and since these functions on a given module are all instance attributes of the module object, not the root class it's an instance of, no descriptor protocol is invoked, and the function doesn't become a method when called. All that said, you can make subclasses of the module type, which would allow a module to have real methods. This is documented under Customizing module attribute access as a fallback in case the special methods they explicitly added support for (e.g. __getattr__, __dir__) aren't enough and you need your module to do other things. Since the __class__ of a module is reassignable, you can reassign it to some other subclass of the module class and the methods of that class will receive self implicitly, so the same subclass can be assigned to multiple modules and provide per-module tailoring of behavior. For example: import sys from types import ModuleType class MethodModule(ModuleType): def dumb_method(self, x): return f"This is module named {self.__name__} which received {x!r}" thismodule = sys.modules[__name__] thismodule.__class__ = MethodModule print(thismodule.dumb_method("foo")) Try it online! gives the module it's run within an instance method named dumb_method which customizes its behavior based on the self implicitly passed to it (in this case, using self.__name__ to determine the name of the module).
3
4
77,230,983
2023-10-4
https://stackoverflow.com/questions/77230983/why-does-it-take-longer-to-execute-a-simple-for-loop-in-python-3-12-than-in-pyth
Python 3.12 was released two days ago with several new features and improvements. It claims that it's faster than ever, so I decided to give it a try. I ran a few of my scripts with the new version, but it was slower than before. I tried various approaches, simplifying my code each time in an attempt to identify the bottleneck that was causing it to run slowly. However, I have been unsuccessful so far. Finally, I decided to test a simple for loop like the following: import time def calc(): for i in range(100_000_000): x = i * 2 t = time.time() calc() print(time.time() - t) On my machine, it took 4.7s on Python 3.11.5 and 5.7s on Python 3.12.0. Trying on other machines had similar results. So why is it slower in the latest version of Python?
I am able to reproduce the observed behavior between CPython 3.11.2 and CPython 3.12.0rc2 on Debian Linux 6.1.0-6 using an Intel i5-9600KF CPU. I tried to use a low-level profiling approach so to find the differences. Put it shortly: your benchmark is very specific and CPython 3.12 is less optimized for this specific case. CPython 3.12 seems to manage object allocations, and more specifically range a bit differently. CPython 3.12 appears to create a new object from the constant 2 for every iteration of the loop as opposed to CPython 3.11. Moreover, the main evaluation function do an indirect function pointer call which is particularly slow in this case. Anyway, you should not use (C)Python in such a use-case (this is stated in the CPython doc). Under the hood Here is the results I get (pretty stable between multiple launches): 3.11: 2.026395082473755 3.12: 2.4122846126556396 Thus, CPython 3.12 is roughly 20% slower than CPython 3.11 on my machine. Profiling results indicates that half the overhead comes from an indirect function pointer call in the main evaluation function of CPython 3.12 which was not present in CPython 3.11. This function call is expensive on most modern processors. Here is the assembly code of the hot part: β”‚ ↓ je 4b8 1,28 β”‚ mov (%rdi),%rax 0,33 β”‚ test %eax,%eax 0,61 β”‚ ↓ js 4b8 0,02 β”‚ sub $0x1,%rax 2,80 β”‚ mov %rax,(%rdi) β”‚ ↓ jne 4b8 0,08 β”‚ mov 0x8(%rdi),%rax 16,28 β”‚ β†’ call *0x30(%rax) <---------------- HERE β”‚ nop 1,53 β”‚ 4b8: mov (%rsp),%rax β”‚ lea 0x4(%rax),%rcx β”‚ movzbl 0x3(%rax),%eax 0,06 β”‚ mov 0x48(%r13,%rax,8),%rdx 1,82 β”‚ mov (%rdx),%eax 0,04 β”‚ add $0x1,%eax β”‚ ↓ je 8800 While the assembly code of the same function in CPython 3.11 is similar, it does not have such expensive call. Still, there are many similar indirect function calls like this already in CPython 3.11. My hypothesis is that such a call is more expensive in CPython 3.12 because it is less predictable by the hardware prediction unit (possibly because the same instruction calls multiple functions). For more information about that, please read this great post. I cannot say much more about this part since the assembly code is really HUGE (and it turns out the C code is also pretty big). The rest of the overhead seems to come from the way object and more specifically constant are managed in CPython 3.12. Indeed, In CPython 3.11, PyObject_Free calls are slow (because you spent all your time creating/deleting objects), while in CPython 3.12, such a call is not even visible in the profiler, but there is instead PyLong_FromLong which is quite slow (not visible in CPython 3.11). The rest of the (many other) functions only takes less than 25~30% and look similar in the two versions. Based on that, we can conclude that CPython 3.12 creates a new object from the constant 2 for each iteration of the loop as opposed to CPython 3.11. This is clearly not efficient in this case (one should keep in mind that CPython is an interpreter and not a compiler though so it is not surprising it does not perform optimizations on this). There is a simple way to check that: store 2 in a variable before the loop and use this variable in the loop. 
Here is the corrected code: import time def calc(): const = 2 for i in range(100_000_000): x = i * const t = time.time() calc() print(time.time() - t) Here are the timings of the corrected code: 3.11: 2.045902967453003 3.12: 2.2230796813964844 The CPython 3.12 version is now substantially faster than before while the other version is almost unaffected by the modification of the code. At first glance, this tends to confirm the last hypothesis. That being said, the profiler still reports many calls to PyLong_FromLong in the modified code! It turns out this change removed the issue related to the indirect function pointer call discussed at the beginning of this section! My hypothesis is that the PyLong_FromLong calls are coming from a different way of managing the objects generated from range (i.e. i). The following code tends to confirm that (note the code requires ~4 GiB of RAM due to the list, so it should not be used in production but only for testing purposes): import time def calc(): const = 2 TMP = list(range(100_000_000)) t = time.time() for i in TMP: x = i * const print(time.time() - t) calc() Here are the results on my machine: 3.11: 1.6515681743621826 3.12: 1.7162528038024902 The gap is smaller than before, and the timings of the loop are smaller since the objects are all pre-computed in the list beforehand. Profiling results confirm PyLong_FromLong is not called in the timed loop. Thus, range is slower in this case in CPython 3.12. The rest of the overhead is small (<4%). Such a performance gap can come from compiler optimizations or even very tiny changes in the CPython source code. For example, simple things like the address of conditional jumps can significantly impact performance results on many Intel CPUs (see: JCC erratum). Tiny details like this matter and compilers are not perfect. This is why such a variation in performance is common and rather expected, so it's not worth investigating. By the way, if you care about performance, then please use Cython or PyPy for such compute-heavy code.
11
16
77,232,950
2023-10-4
https://stackoverflow.com/questions/77232950/python-pandas-data-frame-time-difference-between-specific-alternating-values
I have a dataframe of app usage in 4 columns that looks like this: Id Timestamp App_Name Event_Type 1 2018/01/16 06:01:05 Instagram Opened 2 2018/01/16 06:01:06 Instagram Closed 3 2018/01/16 06:01:07 Instagram Opened 4 2018/01/16 06:01:08 Instagram Interaction 5 2018/01/16 06:01:09 Instagram Interaction 6 2018/01/16 06:02:08 Instagram Closed 7 2018/01/16 06:01:08 Instagram Opened 8 2018/01/16 06:01:08 Instagram Opened 9 2018/01/16 06:01:09 Instagram Opened 10 2018/01/16 06:01:09 Instagram Closed 11 2018/01/16 06:03:44 Instagram Opened 12 2018/01/16 06:03:44 Instagram Closed 13 2018/01/16 06:03:45 Instagram Closed 14 2018/01/16 06:03:45 Instagram Closed 15 2018/01/16 06:03:47 Instagram Opened I want to get time difference in seconds between each pair of 'Opened' followed by 'Closed' rows regardless of whether or not there are other 'Event_Types' between them. There can be errors where there is more than one consecutive open or close. I just want difference between last open and first close. So in this case I want time differences between: Rows 2 and 1 6 and 3 10 and 9 12 and 11 How do I do that? Thanks!
Try: out, state = [], None for i, e in zip(df["Id"], df["Event_Type"]): if e == "Opened": state = i elif e == "Closed" and state is not None: out.append([state, i]) state = None print(out) Prints: [[1, 2], [3, 6], [9, 10], [11, 12]] To get time differences: df["Timestamp"] = pd.to_datetime(df["Timestamp"]) out, state = [], None for i, e in zip(df.index, df["Event_Type"]): if e == "Opened": state = i elif e == "Closed" and state is not None: out.append(df.loc[i, "Timestamp"] - df.loc[state, "Timestamp"]) state = None print(out) Prints: [Timedelta('0 days 00:00:01'), Timedelta('0 days 00:01:01'), Timedelta('0 days 00:00:00'), Timedelta('0 days 00:00:00')]
3
1
77,232,741
2023-10-4
https://stackoverflow.com/questions/77232741/how-to-block-progression-in-asyncio
So I understand that by using async ... await, Python can prevent blocking and accomplish xyz to follow. But what about the opposite, where I want Python to block xyz until a process has completed? For example, suppose I have three functions, A, B & C, where A should not block B and vice versa, but C should be blocked by both A and B? async def A(): # some code async def B(): # some code def C(): # some code async def handler(): # A and B not to block each other await A() await B() # Must complete before C C() As I understand it, asyncio will implicitly infer that C should be blocked by A and B if C is defined in terms of them (or their outputs). But in the event that C is not defined in terms of A & B, how can it be ensured that C will only commence when A and B are complete?
If I understand you correctly, you want to run A, B in parallel and if they both finish, call C(). You can create tasks from A(), B(), use asyncio.gather() to wait for them to finish and then call C(): import asyncio async def A(): await asyncio.sleep(1) print("A finished") async def B(): await asyncio.sleep(2) print("B finished") def C(): print("In C!") async def handler(): # A and B not to block each other tasks = {asyncio.create_task(A()), asyncio.create_task(B())} await asyncio.gather(*tasks) # Must complete before C C() asyncio.run(handler()) Prints (and finishes in 2 seconds): A finished B finished In C!
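On Python 3.11+ the same "run A and B concurrently, then C" shape can also be expressed with asyncio.TaskGroup, which blocks at the end of the async with block until both tasks have finished:
async def handler():
    # A and B run concurrently and do not block each other
    async with asyncio.TaskGroup() as tg:
        tg.create_task(A())
        tg.create_task(B())
    # reached only after both A and B have completed
    C()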
2
4
77,231,765
2023-10-4
https://stackoverflow.com/questions/77231765/how-to-create-a-menu-separator-in-pystray
I am trying to create a pystray menu separator, but I am having a hard time doing so. I have searched here on SO and in its documentation, which I find super confusing and unhelpful, and I even tried to read the Menu class: class Menu(object): """A description of a menu. A menu description is immutable. It is created with a sequence of :class:`Menu.Item` instances, or a single callable which must return a generator for the menu items. First, non-visible menu items are removed from the list, then any instances of :attr:`SEPARATOR` occurring at the head or tail of the item list are removed, and any consecutive separators are reduced to one. """ #: A representation of a simple separator SEPARATOR = MenuItem('- - - -', None) def __init__(self, *items): self._items = tuple(items) In which I found the following representation, which I used like this: sep = pystray.MenuItem("- - - -", None) but instead of creating a separator, it created a menu item with this text: - - - - You can find a minimal reproducible example below: import pystray from PIL import Image def item1_action(icon, item): print("Item 1 clicked") def item2_action(icon, item): print("Item 2 clicked") def quit_action(icon, item): print("Quit clicked") item1 = pystray.MenuItem("Item 1", item1_action) item2 = pystray.MenuItem("Item 2", item2_action) sep = pystray.MenuItem("- - - -", None) quit_item = pystray.MenuItem("Quit", quit_action) menu = (item1, item2, sep, quit_item) image = Image.open('icon.png') icon = pystray.Icon("test", image, "test", menu) icon.run()
You have correctly identified the point at which the use of the separator is indicated. However, you should specifically use the SEPARATOR attribute instead of its assigned value because someone might actually want to create a menu item with four hyphens, and it wouldn't be appropriate for pystray to automatically convert it to a separator against their will. That being said, you need to replace this: sep = pystray.MenuItem("- - - -", None) with this: sep = pystray.Menu.SEPARATOR
6
11
77,228,269
2023-10-4
https://stackoverflow.com/questions/77228269/remove-white-area-around-3d-plot
I created a surface plot of a gaussian using matplotlib: num_pts = 1000 Οƒ = 1.5 x = np.linspace(-5, 5, num_pts) kernel1d = np.exp(-np.square(x) / (2 * Οƒ * Οƒ)) kernel2d = np.outer(kernel1d, kernel1d) X, Y = np.meshgrid(x, x) fig, ax = plt.subplots(figsize=(6,6), subplot_kw={"projection":"3d"}) ax.plot_surface(X, Y, kernel2d, cmap="viridis", linewidth=0, antialiased=True) ax.set(xticks=[], yticks=[], zticks=[]) ax.grid(False) ax.axis('off') fig.savefig("test.svg", format="svg", transparent=True) The output has a lot of dead space around the actual plot. I tried layout=tight which did not have any effect. How can I maximize the size of the surface withing the figure assuming no need for any axis: ticks, labels, grid, etc...
You can play around with the pad_inches value: fig.savefig("test.svg", format="svg", transparent=True, bbox_inches='tight', pad_inches=-0.4) Otherwise, perhaps you can try getting the bounding box containing the paths, and set the axes limits based on that: https://stackoverflow.com/a/76076555/7750891
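Depending on the matplotlib version, it may also help to let the 3D axes fill the whole figure before saving:
fig.subplots_adjust(left=0, right=1, bottom=0, top=1)
fig.savefig("test.svg", format="svg", transparent=True)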
2
2
77,227,241
2023-10-4
https://stackoverflow.com/questions/77227241/is-there-a-lint-rule-for-python-that-automatically-detects-list-operator-concat
For me the following extras = ["extra0", "extra1"] func_with_list_arg([ "base0", "base1", ] + extras) is nicer to read with a spread operator like the following extras = ["extra0", "extra1"] func_with_list_arg([ "base0", "base1", *extras, ]) Is there a lint rule in ruff or pylint that would detect this situation?
Yes, there is! c = [3] b = [1, 2] + c print(b) When I run (my configured) ruff on this, I get: RUF005 [*] Consider [1, 2, *c] instead of concatenation This is part of the ruff specific ruleset: https://docs.astral.sh/ruff/rules/#ruff-specific-rules-ruf and is a rule which is actually automatically fixable.
2
3
77,226,367
2023-10-4
https://stackoverflow.com/questions/77226367/whats-the-best-way-to-calculate-tp-q-r-sp-q-sp-r-in-numpy
I'm trying to turn a series of m input sequences with n items each, s[m, n], into a tensor where t[p, q, r] = s[p, q] * s[p, r]. While my code does work, I feel there must be a better solution. Here's what I got. # nseqs is the number of sequences # seq_length is the sequence length # seq is a list of sequences output = np.empty((nseqs, seq_length, seq_length)) for n in range(nseqs): for i, j in enumerate(seq[n]): output[n, i, :] = j output[n, :, :] *= output[n, :, :].T More to the point, is there a way to rejuggle the output tensor so that the multiplication phase is done in a single step and without those loops?
You can achieve the desired result with broadcasting. The first dimensions line up, and the others can be coerced into the right shape using unit dimensions introduced with None: t = s[:, :, None] * s[:, None, :] You can implement your original approach without loops by assigning the elements of output with broadcasting and then taking the transpose only of the last dimensions: output = np.empty((seq.shape[0], seq.shape[1], seq.shape[1]), seq.dtype) output[:] = seq[..., None] output *= output.transpose(0, 2, 1)
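The same product can also be spelled with einsum, which keeps the index notation t[p, q, r] = s[p, q] * s[p, r] visible in the code:
# s is the (nseqs, seq_length) array of sequences
t = np.einsum('pq,pr->pqr', s, s)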
3
5
77,225,812
2023-10-3
https://stackoverflow.com/questions/77225812/is-there-a-way-to-install-pytorch-on-python-3-12-0
I'm making an app using gpt-neo and I'm trying to install torch, but it won't install. The error message is as follows: C:\Users\Ben>pip install torch ERROR: Could not find a version that satisfies the requirement torch (from versions: none) ERROR: No matching distribution found for torch Is there any way to install torch without pip?
There are now released versions of pytorch available for python 3.12, starting with pytorch 2.2. There should be no need to use the pre-release nightly build for cuda 11.8 and python 3.12; there seem to be installation candidates in https://download.pytorch.org/whl/cu118 for python 3.12: torch-2.2.0+cu118-cp312-cp312-linux_x86_64.whl torch-2.2.0+cu118-cp312-cp312-win_amd64.whl So at least on Windows and Linux, you should be able to do pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118 for a CUDA-enabled installation of pytorch.
11
2
77,202,074
2023-9-29
https://stackoverflow.com/questions/77202074/how-do-i-transcribe-a-multi-language-audio-file-using-whisper-without-translati
I am attempting to transcribe an audio file using the Whisper library which contains alternating English and Indonesian speech. Some of the Indonesian speech is correctly transcribed into Indonesian text, but some of it is translated into English and transcribed. This behaviour seems to be random, different passes with the same model and different models give different results. Is there any way to only transcribe and not translate? Setting the language to Indonesian causes everything to be translated to Indonesian. Setting it to English causes the behaviour I described.
You could use WhisperX and leverage its speaker diarization. Make two (or more) transcription passes, one for each language. Merge both results based on speaker and time stamps.
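A rough sketch of that workflow, assuming the whisperx API as documented around that time (load_model, load_audio, DiarizationPipeline, assign_word_speakers — signatures may have changed since), and assuming "en"/"id" are the language codes you need; the file name, token, and merge step are placeholders, not a drop-in solution:

import whisperx

device = "cuda"  # or "cpu"
audio = whisperx.load_audio("mixed_language.wav")  # hypothetical file name

# One transcription pass per language so nothing gets translated
results = {}
for lang in ("en", "id"):
    model = whisperx.load_model("large-v2", device, language=lang)
    results[lang] = model.transcribe(audio)

# Diarize once, then attach speaker labels to each pass
diarizer = whisperx.DiarizationPipeline(use_auth_token="YOUR_HF_TOKEN", device=device)
diarize_segments = diarizer(audio)
labeled = {lang: whisperx.assign_word_speakers(diarize_segments, res)
           for lang, res in results.items()}

# Merge: for each speaker/time window, keep the segment from the language that
# speaker actually uses — this depends on your data and is not shown here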
5
5
77,217,220
2023-10-2
https://stackoverflow.com/questions/77217220/pydantic-2-validator-to-add-all-extra-fields-to-unique-dict-field
Both validators and extra columns are configured differently in pydantic 2.*. I would like to know if I can still write a validator that takes all the extra fields and puts them into a single dictionary field. The expected behavior is: class A(BaseModel): name: str extra_columns: Optional[Dict] a = A(name="john", age=24, address="Brazil") print(a) >>> A(name="john", extra_columns={"age": 24, "address": "Brazil"}) My previous code: @root_validator(pre=True) def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]: all_required_field_names = { field.alias for field in cls.__fields__.values() if field.alias != "extra_columns" } extra: Dict[str, Any] = {} for field_name in list(values): if field_name not in all_required_field_names: extra[field_name] = values.pop(field_name) values["extra_columns"] = extra return values Is it possible to get the same behavior in pydantic 2.*?
This can be solved with model_validator: from typing import Any from pydantic import BaseModel, model_validator class A(BaseModel): name: str extra_columns: dict | None = None @model_validator(mode="before") @classmethod def set_extra_columns(cls, data: Any): if isinstance(data, dict): extra_fields = data.keys() - cls.model_fields.keys() if extra_fields: if "extra_columns" not in data: data["extra_columns"] = {} data["extra_columns"].update( {field_name: data[field_name] for field_name in extra_fields} ) return data a = A(name="john", age=24, address="Brazil") print(a) # >>> name='john' extra_columns={'age': 24, 'address': 'Brazil'}
2
2
77,210,441
2023-10-1
https://stackoverflow.com/questions/77210441/pydantic-typeerror-validate-takes-2-positional-arguments-but-3-were-given
This is my JsonFeedOptions class. class JsonFeedOptions(BaseModel): address: Optional[AddressType] = None signer: Optional[Union[AccountAPI, str]] = None Type: Optional[FeedType] = None And I'm trying to validate the data like this. opts = {"signer": "abcd"} options = JsonFeedOptions.model_validate(opts) But got this error > options = JsonFeedOptions.model_validate(opts) E TypeError: validate() takes 2 positional arguments but 3 were given
Which Pydantic version do you use? You can check this by running pip list and reviewing the library version (do not forget to activate the virtual environment if you have one). The error TypeError: BaseModel.validate() takes 2 positional arguments but 3 were given comes from the difference between Pydantic versions 1 and 2. In Pydantic version 2, the method signature for BaseModel.validate() has been modified, which can lead to this error. In any case, you cannot touch the Pydantic library method itself, but you have two options. Option 1: Migrate your code to the new Pydantic version. Option 2: Use the v1 compatibility layer with from pydantic.v1 import BaseModel instead of from pydantic import BaseModel. Keep in mind that this v1 import is only available if you have the new version installed.
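For illustration, a minimal sketch of how the call differs between the two majors — assuming JsonFeedOptions is the model from the question:

import pydantic

opts = {"signer": "abcd"}
if pydantic.VERSION.startswith("2"):
    # Pydantic v2: model_validate is the supported entry point
    options = JsonFeedOptions.model_validate(opts)
else:
    # Pydantic v1: model_validate does not exist; parse_obj is the v1 equivalent
    options = JsonFeedOptions.parse_obj(opts)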
3
9
77,215,107
2023-10-2
https://stackoverflow.com/questions/77215107/importerror-cannot-import-name-url-decode-from-werkzeug-urls
I am building a webapp using Flask. I imported the flask-login library to handle user login. But it shows an ImportError. Below is my folder structure: >flask_blog1 >flaskblog >static >templates >__init__.py >forms.py >models.py >routes.py >instance >site.db >venv >requirements.txt >run.py My run.py: from flaskblog import app if __name__ == "__main__": app.run(debug=True) My __init__.py: from flask import Flask from flask_sqlalchemy import SQLAlchemy from flask_bcrypt import Bcrypt from flask_login import LoginManager app = Flask(__name__) app.config["SECRET_KEY"] = "5791628bb0b13ce0c676dfde280ba245" app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///site.db" db = SQLAlchemy(app) bcrypt = Bcrypt(app) login_manager = LoginManager(app) from flaskblog import routes My models.py: from datetime import datetime # from .extensions import db from flaskblog import db, login_manager from flask_login import UserMixin @login_manager.user_loader def load_user(user_id): return User.query.get(int(user_id)) class User(db.Model, UserMixin): id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(20), unique=True, nullable=False) email = db.Column(db.String(120), unique=True, nullable=False) image_file = db.Column(db.String(20), nullable=False, default="default.jpg") password = db.Column(db.String(60), nullable=False) posts = db.relationship("Post", backref="author", lazy=True) def __repr__(self): return f"User('{self.username}', '{self.email}', '{self.image_file}')" class Post(db.Model): id = db.Column(db.Integer, primary_key=True) title = db.Column(db.String(100), nullable=False) date_posted = db.Column(db.DateTime, nullable=False, default=datetime.utcnow) content = db.Column(db.Text, nullable=False) user_id = db.Column(db.Integer, db.ForeignKey("user.id"), nullable=False) def __repr__(self): return f"Post('{self.title}', '{self.date_posted}')" My routes.py: from flask import render_template, flash, redirect, url_for from flaskblog import app, db, bcrypt from flaskblog.forms import RegistrationForm, LoginForm from flaskblog.models import User, Post from flask_login import login_user posts = [ { "author": "Ashutosh Chapagain", "title": "Blog Post 1", "content": "First Post Content", "date_posted": "October 1, 2023", }, { "author": "Ash Dhakal", "title": "Blog Post 2", "content": "Second Post Content", "date_posted": "October 2, 2023", }, ] @app.route("/") @app.route("/home") def home(): return render_template("home.html", posts=posts) @app.route("/about") def about(): return render_template("about.html", title="About") @app.route("/register", methods=["GET", "POST"]) def register(): form = RegistrationForm() if form.validate_on_submit(): hashed_password = bcrypt.generate_password_hash(form.password.data).decode( "utf-8" ) user = User( username=form.username.data, email=form.email.data, password=hashed_password ) db.session.add(user) db.session.commit() flash(f"Your account has been created! You are now able to log in!", "success") return redirect(url_for("login")) return render_template("register.html", title="Register", form=form) @app.route("/login", methods=["GET", "POST"]) def login(): form = LoginForm() if form.validate_on_submit(): user = User.query.filter_by(email=form.email.data).first() if user and bcrypt.check_password_hash(user.password, form.password.data): login_user(user, remember=form.remember.data) return redirect(url_for("home")) else: flash("Login Unsuccessful. 
Please check email and password", "danger") return render_template("login.html", title="Login", form=form) My forms.py: from flask_wtf import FlaskForm from wtforms import StringField, PasswordField, SubmitField, BooleanField from wtforms.validators import DataRequired, Length, Email, EqualTo, ValidationError from flaskblog.models import User class RegistrationForm(FlaskForm): username = StringField( "Username", validators=[DataRequired(), Length(min=2, max=20)] ) email = StringField("Email", validators=[DataRequired(), Email()]) password = PasswordField("Password", validators=[DataRequired()]) confirm_password = PasswordField( "Confirm Password", validators=[DataRequired(), EqualTo("password")] ) submit = SubmitField("Sign Up") def validate_username(self, username): user = User.query.filter_by(username=username.data).first() if user: raise ValidationError( "That username is taken. Please choose a different one." ) def validate_email(self, email): user = User.query.filter_by(email=email.data).first() if user: raise ValidationError("That email is taken. Please choose a different one.") class LoginForm(FlaskForm): email = StringField("Email", validators=[DataRequired(), Email()]) password = PasswordField("Password", validators=[DataRequired()]) remember = BooleanField("Remember Me") submit = SubmitField("Login") The exact error is: (venv) asu@asu-Lenovo-Legion-5-15ARH05:/media/asu/Data/Projects/flask_blog1$ python3 run.py Traceback (most recent call last): File "/media/asu/Data/Projects/flask_blog1/run.py", line 1, in <module> from flaskblog import app File "/media/asu/Data/Projects/flask_blog1/flaskblog/__init__.py", line 4, in <module> from flask_login import LoginManager File "/media/asu/Data/Projects/flask_blog1/venv/lib/python3.10/site-packages/flask_login/__init__.py", line 12, in <module> from .login_manager import LoginManager File "/media/asu/Data/Projects/flask_blog1/venv/lib/python3.10/site-packages/flask_login/login_manager.py", line 33, in <module> from .utils import _create_identifier File "/media/asu/Data/Projects/flask_blog1/venv/lib/python3.10/site-packages/flask_login/utils.py", line 14, in <module> from werkzeug.urls import url_decode ImportError: cannot import name 'url_decode' from 'werkzeug.urls' (/media/asu/Data/Projects/flask_blog1/venv/lib/python3.10/site-packages/werkzeug/urls.py)
I can only assume you got the Werkzeug 3.0 update (as flask-login didn't up-bound their werkzeug dependency). In their ongoing quest to remove all the non-core public APIs of werkzeug, the developers deprecated most of werkzeug.urls in Werkzeug 2.3 (released April 25th 2023), and removed it in Werkzeug 3.0 (released September 30th 2023). Your options are: force werkzeug to a pre-3.0 version wait for flask-login to release a version compatible with werkzeug 3, a fix of that and a bunch of other stuff was merged a few minutes ago edit: flask-login 0.6.3 with the compatibility fix was released October 30th: https://github.com/maxcountryman/flask-login/releases/tag/0.6.3
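For the first option, a pin along these lines (the exact bound is your call) keeps pip from resolving to Werkzeug 3:

pip install "flask-login" "werkzeug<3.0"

or the equivalent constraint in your requirements file.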
15
28
77,219,901
2023-10-3
https://stackoverflow.com/questions/77219901/can-not-change-media-using-setsource-in-pyside6
I have some video clips named 0.mp4, 1.mp4, 2.mp4... and I am using QMediaPlayer in PySide6. I want to write a media player which can play videos one by one. At the end of each video clip, I use the 'setSource()' function to transition to the next clip. But the mainwindow stuck every time I execute setSource() in slot function. I guess it is somewhat related to threads, so I tried changing source in a new thread. But it seems to only complete one switch, and have no response at the end of the 1.mp4. **1. I tried writing setSource() in an ordinary function: ** import sys from PySide6.QtCore import Slot, QUrl from PySide6.QtWidgets import QApplication, QMainWindow from PySide6.QtMultimedia import QAudioOutput, QMediaPlayer from PySide6.QtMultimediaWidgets import QVideoWidget class MainWindow(QMainWindow): def __init__(self): super().__init__() self._audio_output = QAudioOutput() self._player = QMediaPlayer() self._player.setAudioOutput(self._audio_output) self._video_widget = QVideoWidget() self.setCentralWidget(self._video_widget) self._player.setVideoOutput(self._video_widget) self.video_now = 0 self._player.setSource(QUrl.fromLocalFile("./{}.mp4".format(self.video_now))) self.video_now += 1 self._player.play() self._player.mediaStatusChanged.connect(self.change_source) @Slot() def change_source(self): self._player.setSource(QUrl.fromLocalFile("./{}.mp4".format(self.video_now))) self.video_now += 1 self._player.play() if __name__ == '__main__': app = QApplication(sys.argv) main_win = MainWindow() available_geometry = main_win.screen().availableGeometry() main_win.resize(available_geometry.width() / 3, available_geometry.height() / 2) main_win.show() sys.exit(app.exec()) mainwindow will be stuck at the end of the 0.mp4. 2. I tried putting setSource() into a new thread: import sys from PySide6.QtCore import Slot, QUrl, QThread from PySide6.QtWidgets import QApplication, QMainWindow from PySide6.QtMultimedia import QAudioOutput, QMediaPlayer from PySide6.QtMultimediaWidgets import QVideoWidget class source_thread(QThread): def __init__(self, func): super().__init__() self.func = func def run(self): self.func() print("source_thread_finished") class MainWindow(QMainWindow): def __init__(self): super().__init__() self._audio_output = QAudioOutput() self._player = QMediaPlayer() self._player.setAudioOutput(self._audio_output) self._video_widget = QVideoWidget() self.setCentralWidget(self._video_widget) self._player.setVideoOutput(self._video_widget) self.video_now = 0 self._player.setSource(QUrl.fromLocalFile("./{}.mp4".format(self.video_now))) self.video_now += 1 self._player.play() self._player.mediaStatusChanged.connect(self.source_thread_slot) self.thread_source = source_thread(self.change_source) @Slot() def source_thread_slot(self, play_status): if play_status != QMediaPlayer.MediaStatus.EndOfMedia: return self.thread_source.start() def change_source(self): self._player.setSource(QUrl.fromLocalFile("./{}.mp4".format(self.video_now))) self.video_now += 1 self._player.play() if __name__ == '__main__': app = QApplication(sys.argv) main_win = MainWindow() available_geometry = main_win.screen().availableGeometry() main_win.resize(available_geometry.width() / 3, available_geometry.height() / 2) main_win.show() sys.exit(app.exec()) the first switch was perfect, but the program didn't response at the end of 1.mp4. You can download short video clips in pexel for test, and I'm very grateful if you can give me some suggestions.
Check the status, play when media is loaded, set source when end of media is reached. class MainWindow(QMainWindow): def __init__(self): super().__init__() self._audio_output = QAudioOutput() self._player = QMediaPlayer() self._player.setAudioOutput(self._audio_output) self._video_widget = QVideoWidget() self.setCentralWidget(self._video_widget) self._player.setVideoOutput(self._video_widget) self.video_now = 0 self._player.mediaStatusChanged.connect(self.change_source) self._player.setSource(QUrl.fromLocalFile("./{}.mp4".format(self.video_now))) def change_source(self, status): if status == QMediaPlayer.LoadedMedia: self._player.play() elif status == QMediaPlayer.EndOfMedia: self.video_now += 1 self._player.setSource(QUrl.fromLocalFile("./{}.mp4".format(self.video_now)))
4
0
77,189,542
2023-9-27
https://stackoverflow.com/questions/77189542/how-to-read-corrupted-pickle-file
I have a binary file written using pickle.dump containing logs from an app (a set of tuples of floats and strings). It worked great for 24h, but now when trying to read it using import pickle path = "/path/to/file.pkl" with open(path, 'rb') as f: score_board = pickle.load(f) I get UnpicklingError: invalid load key, '\x00'. I think there is a null value somewhere corrupting the file. I know the error is in the last entries, as the file ceased to be updated when the error occurred. Each tuple in the set contains (a score [float], a short sentence [str], a username [str], a datetime [str]). I was wondering if there is a way for me to only read the file up to a certain point, or even to edit it manually, to make it safe to read. Thanks in advance.
TL;DR Add the right opcodes to the end of your pickle file. Mine should end in 'bsbuu.' but instead ends with 'bsbj' (the opcodes between items). Editing the binary file to put 'bsbuu.' at the end instead of 'bsbj' fixed it. Now my file opens perfectly. Eat it, pickle devs. Your opcodes are probably different since it depends on the data structures in your file and where it got cut off. Make a good pickle file with the same data structure, look at the opcodes, and adjust your corrupted file as needed. You can use this to see the opcodes in your pickle file: python -m pickletools file You can use this to turn a binary pickle file into plaintext hex, edit it, then turn it back to binary: xxd picklefile hexfile (edit file) xxd -r hexfile newpicklefile Pickle Format First off, let's get this out of the way: pickle sucks. It's an awful format. Binary crap with zero ability to recover any data. Even so-called python experts say "bad pickle file? kiss your data goodbye and start over." It's a complete joke of a format. Python docs aren't any better. Zero tools for recovering partial data. Zero ability to deal with errors. Best they can do is print opcodes and tell you "write your own unpickler". The attitude is so user unfriendly it even makes IBM blush. My situation My data is pretty straightforward. It's a big old dict of dicts. Like this: data = { "key1" : { "value1" : binarystring , "value2" : binarystring , "value3" : binarystring , } , "key2" : { "value1" : binarystring , "value2" : binarystring , "value3" : binarystring , } , ... } My write got interrupted near the end. Most of my data is still there in the corrupted pickle file. But the stupid thing won't open. pickle.load reads 80 MB of data and says "Oops end of file, sorry no data for you". Which is garbage. pickle could at least return the data it read successfully, if you pass an error flag or something. Nope, pickle refuses. NO PICKLE FOR YOU!! I want to recover the data that's there. I'm not gonna write my own unpickler, figuring out all those opcodes. That's a stupid solution and anyone who proposes it should be ashamed of themselves. There's gotta be a better way. How to fix it I made a better way. Pickle files has a bunch of opcodes between data to define the structure. You can use this command to display the data and opcodes in your file: python -m pickletools file I noticed that in a sample pickle file containing data structure above, each sub-dict (i.e. the dict under "key1") ends with 'bsbu' before the next item starts (i.e. "key2"). Then the files ends with 'bsbuu.' It seems the last 'u.' is an opcode that means 'end the (top-level) dict and end data'. My corrupt files ends with 'bsbj' instead of 'bsbuu.' So if I change the last part of the file to 'bsbuu.' instead, then it should close both dicts and end the data. Stands to reason, right? I don't have a good binary editor handy, so I used this to convert the binary pickle file to a plaintext hex file: xxd picklefile hexfile Changed the opcodes at the end, then converted back to binary with: xxd -r hexfile newpicklefile Called pickle.load on the new file and lo and behold, Bob's your uncle, it works! I recovered 80 MB of data. Thanks for nothing, python! >:( Conclusion Yes I know I didn't get all my data back. But I got a lot of it, no thanks to python. Telling users to eat it and start over is not helpful in any way. In future I'll consider moving to json for storing my data. It's a pain because the binary strings don't serialize to json out of the box. 
I'll need to make a converter for both json.dump and json.load, probably turn them into hex strings. It's a bit of work. But it's far far FAR easier to recover a corrupted json file than a pickle file. Yeah, pickle does a lot more than json, storing arbitrary objects, executable code, yada yada yada. What good is that if pickle doesn't function properly? If one little gnat gets into the datafile and pickle has a full-on meltdown, running away screaming and refusing to go back in the house. Cmon man. Grow up, python. In the real world, errors happen. People and code deal with them. A default freakout is fine. But not providing any other options is inexcusable.
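For the JSON idea at the end, a small sketch of what such a converter could look like — the function names are made up for illustration, and it assumes the dict-of-dicts-of-bytes layout described above; base64 is used so the binary strings survive the text round trip:

import base64, json

def dump_logs(data: dict, path: str) -> None:
    # encode every binary value as base64 text before serializing
    encodable = {k: {vk: base64.b64encode(vv).decode("ascii") for vk, vv in v.items()}
                 for k, v in data.items()}
    with open(path, "w") as f:
        json.dump(encodable, f)

def load_logs(path: str) -> dict:
    with open(path) as f:
        raw = json.load(f)
    # decode the base64 text back into the original bytes
    return {k: {vk: base64.b64decode(vv) for vk, vv in v.items()} for k, v in raw.items()}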
4
3
77,197,398
2023-9-28
https://stackoverflow.com/questions/77197398/error-running-pyttsx3-code-on-os-x-nameerror-name-objc-is-not-defined
I am trying to run this program using Python 3.11.5 on macOS Ventura 13.6: import pyttsx3 engine = pyttsx3.init() engine.say("Hello, how are you today?") engine.runAndWait() But I am getting this error and I don't know where to start looking: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pyttsx3/__init__.py", line 20, in init eng = _activeEngines[driverName] ~~~~~~~~~~~~~~^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/weakref.py", line 136, in __getitem__ o = self.data[key]() ~~~~~~~~~^^^^^ KeyError: None During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/raelynmarie/Desktop/Tester.py", line 3, in <module> engine = pyttsx3.init() ^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pyttsx3/__init__.py", line 22, in init eng = Engine(driverName, debug) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pyttsx3/engine.py", line 30, in __init__ self.proxy = driver.DriverProxy(weakref.proxy(self), driverName, debug) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pyttsx3/driver.py", line 50, in __init__ self._module = importlib.import_module(name) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<frozen importlib._bootstrap>", line 1204, in _gcd_import File "<frozen importlib._bootstrap>", line 1176, in _find_and_load File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 690, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 940, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pyttsx3/drivers/nsss.py", line 12, in <module> class NSSpeechDriver(NSObject): File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pyttsx3/drivers/nsss.py", line 13, in NSSpeechDriver @objc.python_method ^^^^ NameError: name 'objc' is not defined. Did you mean: 'object'? I reinstalled the dependencies following a different order and that did not fix the error. Order of installation: pyobjc-core pyobjc-framework-Cocoa pyobjc-framework-Quartz pyobjc
I checked pypi and I found py3-tts: https://pypi.org/project/py3-tts/ I followed the installation for py3-tts and it works now without the dummy parameter!
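For completeness, the switch looks roughly like this — assuming, as the py3-tts project page suggests, that it is a drop-in fork that keeps the pyttsx3 import name:

pip uninstall pyttsx3
pip install py3-tts

after which the original script should run unchanged:

import pyttsx3

engine = pyttsx3.init()
engine.say("Hello, how are you today?")
engine.runAndWait()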
2
8
77,204,822
2023-9-29
https://stackoverflow.com/questions/77204822/gekko-ipopt-trajectory-propagation
I am trying to propagate a spacecraft to optimize the time of flight using IPOPT in GEKKO/Python. Here is the code for my GEKKO model: m = GEKKO() #manipulating variables and initial guesses al_a = m.MV(value = -1, lb = -2, ub = 2) al_a.STATUS = 1 l_e = m.MV(value = 0.001, lb = 0, ub = 10**6) l_e.STATUS = 1 l_i = m.MV(value = 1, lb = 0, ub = 10**6) l_i.STATUS = 1 #variables and initial guesses a = m.Var(value = oe_i[0], lb = oe_i[0] - 6378000, ub = oe_f[0] + 6378000, name = 'sma') e = m.Var(value = oe_i[1], lb = 0, ub = 1, name = 'ecc') i = m.Var(value = oe_i[2], lb = 0, ub = math.radians(90), name = 'inc') Om = m.Var(value = oe_i[3], lb = 0, ub = math.radians(360), name = 'raan') om = m.Var(value = oe_i[4], lb = 0, ub = math.radians(360), name = 'ap') nu = m.Var(value = oe_i[5], lb = 0, ub = math.radians(360), name = 'ta') mass = m.Var(value = m0, lb = 0, ub = m0, name = 'mass') #objective function tf = m.FV(1.2 * ((m0 - mass)/dm), lb = 0, ub = t_max) tf.STATUS = 1 #propagate t = 0 while t <= tf: deltas, Tp = propagate(a, e, i, Om, om, nu, mass) m.Equation(Tp * a.dt() == (deltas[0] * delta_t * deltas[7])) m.Equation(Tp * e.dt() == (deltas[1] * delta_t * deltas[7])) m.Equation(Tp * i.dt() == (deltas[2] * delta_t * deltas[7])) m.Equation(Tp * Om.dt() == (deltas[3] * delta_t * deltas[7])) m.Equation(Tp * om.dt() == (deltas[4] * delta_t * deltas[7])) m.Equation(nu.dt() == deltas[5] * delta_t) m.Equation(Tp * mass.dt() == (deltas[6] * delta_t * deltas[7])) t = t + delta_t #starting constraints m.fix(a, pos = 0, val = oe_i[0]) m.fix(e, pos = 0, val = oe_i[1]) m.fix(i, pos = 0, val = oe_i[2]) m.fix(Om, pos = 0, val = oe_i[3]) m.fix(om, pos = 0, val = oe_i[4]) m.fix(nu, pos = 0, val = oe_i[5]) m.fix(mass, pos = 0, val = m0) #boundary constraints m.fix(a, pos = len(m.time) - 1, val = oe_f[0]) m.fix(e, pos = len(m.time) - 1, val = oe_f[1]) m.fix(i, pos = len(m.time) - 1, val = oe_f[2]) m.fix(Om, pos = len(m.time) - 1, val = oe_f[3]) m.fix(om, pos = len(m.time) - 1, val = oe_f[4]) m.fix(nu, pos = len(m.time) - 1, val = oe_f[5]) m.fix(mass, pos = len(m.time) - 1, val = 0) m.time = np.linspace(0,0.2,100) m.Obj(tf) m.options.IMODE = 6 # non-linear model m.options.SOLVER = 3 # solver (IPOPT) m.options.MAX_ITER = 15000 m.options.RTOL = 1e-7 m.options.OTOL = 1e-7 m.open_folder() m.solve(display=false) # Solve print('Optimal time: ' + str(tf.value[0])) m.solve() m.open_folder(infeasibilities.txt) I know that the problem I am having is related to the propagate part. I want to propagate from the orbit corresponding to oe_i in 3600 s increments (delta_t) from time 0 to the final time (which is my objective function), achieving the orbit corresponding to oe_f, using the propagate function, which relies on the variations in the manipulation variables. I had originally tried propagating without any sort of loop to go from 0 to the end time, and the model ran fine, but never found a solution. Looking at that code, I realized that it was not actually propagating the orbit over a long period of time, which is why I added the loop. I tried a for loop first, but had similar problems with errors about tf not being an int. I tried looping time to reach the calculated value for tf (1.2 * ((m0 - mass)/dm)), but had the problem with mass not being able to be used in the calculations. If anyone is able to point out where I am going wrong in trying to do my propagation or to an example similar to what I am attempting, I would appreciate it. Thanks!
Conditional statements can't be used to define the equations of the model such as: while t <= tf: deltas, Tp = propagate(a, e, i, Om, om, nu, mass) m.Equation(Tp * a.dt() == (deltas[0] * delta_t * deltas[7])) m.Equation(Tp * e.dt() == (deltas[1] * delta_t * deltas[7])) m.Equation(Tp * i.dt() == (deltas[2] * delta_t * deltas[7])) m.Equation(Tp * Om.dt() == (deltas[3] * delta_t * deltas[7])) m.Equation(Tp * om.dt() == (deltas[4] * delta_t * deltas[7])) m.Equation(nu.dt() == deltas[5] * delta_t) m.Equation(Tp * mass.dt() == (deltas[6] * delta_t * deltas[7])) t = t + delta_t because the value of tf is determined by the optimizer. In Gekko, there is a model building phase where the variables and equations are defined. After the model building phase, there is a model compilation phase that converts the model to byte-code and it gives the model to the solver. There are no callbacks to the Python code. Use m.if3() to create a switching variables such as: p = m.if3(t-tf,1,0) deltas, Tp = propagate(a, e, i, Om, om, nu, mass) m.Equation(p * Tp * a.dt() == p*(deltas[0] * delta_t * deltas[7])) m.Equation(p * Tp * e.dt() == p*(deltas[1] * delta_t * deltas[7])) m.Equation(p * Tp * i.dt() == p*(deltas[2] * delta_t * deltas[7])) m.Equation(p * Tp * Om.dt() == p*(deltas[3] * delta_t * deltas[7])) m.Equation(p * Tp * om.dt() == p*(deltas[4] * delta_t * deltas[7])) m.Equation(p * nu.dt() == p*deltas[5] * delta_t) m.Equation(p * Tp * mass.dt() == p*(deltas[6] * delta_t * deltas[7])) This turns on the equation with p=1 when t<tf and the equation if off when p=0 and t>tf. This way, the optimizer can choose the value of p and the compiled equations automatically use this input. There is an example in the paper: Beal, L., Park, J., Petersen, D., Warnick, S., Hedengren, J.D., Combined Model Predictive Control and Scheduling with Dominant Time Constant Compensation, 2017, doi: 10.1016/j.compchemeng.2017.04.024. that is related to this problem of setting arrival constraints mid-way through the time horizon (see Section 4).
3
1
77,213,053
2023-10-2
https://stackoverflow.com/questions/77213053/why-did-flask-start-failing-with-importerror-cannot-import-name-url-quote-fr
Environment: Python 3.10.11 Flask==2.2.2 I run my Flask backend code in docker container, with BASE Image: FROM pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime But when I run the pytest with version pytest 7.4.2, pip install pytest pytest it raised an Error, with logs: ==================================== ERRORS ==================================== _____________ ERROR collecting tests/test_fiftyone_utils_utils.py ______________ ImportError while importing test module '/builds/kw/data-auto-analysis-toolkit-backend/tests/test_fiftyone_utils_utils.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /opt/conda/lib/python3.10/importlib/__init__.py:126: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/test_fiftyone_utils_utils.py:2: in <module> import daat # noqa: F401 /opt/conda/lib/python3.10/site-packages/daat-1.0.0-py3.10.egg/daat/__init__.py:1: in <module> from daat.app import app /opt/conda/lib/python3.10/site-packages/daat-1.0.0-py3.10.egg/daat/app/__init__.py:6: in <module> from flask import Flask, jsonify, request /opt/conda/lib/python3.10/site-packages/flask/__init__.py:5: in <module> from .app import Flask as Flask /opt/conda/lib/python3.10/site-packages/flask/app.py:30: in <module> from werkzeug.urls import url_quote E ImportError: cannot import name 'url_quote' from 'werkzeug.urls' (/opt/conda/lib/python3.10/site-packages/werkzeug/urls.py) My codes works well when I directly run it with python run.py run.py shown below from daat import app app.run(host='0.0.0.0') I guess it should be the pytest versions issue, because it used to work well without changing any related code, and I use pip install pytest without defined a specific version. And my backend runs well without pytest.
I had the same problem. It happens because Werkzeug 3.0.0 was released and Flask doesn't specify the dependency tightly enough (its requirements say Werkzeug>=2.2.0). As a result, Werkzeug 3.0.0 still gets installed, even though Flask 2.2.2 isn't made for Werkzeug 3.0.0. Solution: Just pin a fixed version for Werkzeug, such as Werkzeug==2.2.2, in your requirements.txt and it should work.
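For example, a requirements.txt pinning both packages to versions that are known to work together (adjust to the versions you actually use):

flask==2.2.2
Werkzeug==2.2.2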
196
329
77,202,743
2023-9-29
https://stackoverflow.com/questions/77202743/how-to-efficiently-implement-forward-fill-in-pytorch
How can I efficiently implement the fill forward logic (inspired for pandas ffill) for a vector shaped NxLxC (batch, sequence dimension, channel). Because each channel sequence is independent this can be equivalent to working with a tensor shaped (N*C)xL. The computation should keep the torch variable so that the actual output is differentiable. I managed to make something with advanced indexing, but it is L**2 in the memory and number of operations, so not very great and gpu friendly. Example: Assuming you have the sequence [0,1,2,0,0,3,0,4,0,0,0,5,6,0] in a tensor shaped 1x14 the fill forward will give you the sequence [0,1,2,2,2,3,3,4,4,4,4,5,6,6]. An other example shaped 2x4 is [[0, 1, 0, 3], [1, 2, 0, 3]] which should be forward filled into [[0, 1, 1, 3], [1, 2, 2, 3]]. Method used today: We use the following code that is highly unoptimized but still faster than non vectorized loops: def last_zero_sequence_start_indices(t: torch.Tensor) -> torch.Tensor: """ Given a 3D tensor `t`, this function returns a two-dimensional tensor where each entry represents the starting index of the last contiguous sequence of zeros up to and including the current index. If there's no zero at the current position, the value is the tensor's length. In essence, for each position in `t`, the function pinpoints the beginning of the last contiguous sequence of zeros up to that position. Args: - t (torch.Tensor): Input tensor with shape [Batch, Channel, Time]. Returns: - torch.Tensor: Three-dimensional tensor with shape [Batch, Channel, Time] indicating the starting position of the last sequence of zeros up to each index in `t`. """ # Create a mask indicating the start of each zero sequence start_of_zero_sequence = (t == 0) & torch.cat([ torch.full(t.shape[:-1] + (1,), True, device=t.device), t[..., :-1] != 0, ], dim=2) # Duplicate this mask into a TxT matrix duplicated_mask = start_of_zero_sequence.unsqueeze(2).repeat(1, 1, t.size(-1), 1) # Extract the lower triangular part of this matrix (including the diagonal) lower_triangular = torch.tril(duplicated_mask) # For each row, identify the index of the rightmost '1' (start of the last zero sequence up to that row) indices = t.size(-1) - 1 - lower_triangular.int().flip(dims=[3]).argmax(dim=3) return indices
Here is an approach to this problem, without creating TxT matrix: import torch def forward_fill(t: torch.Tensor) -> torch.Tensor: n_dim, t_dim = t.shape # Generate indices range rng = torch.arange(t_dim) rng_2d = rng.unsqueeze(0).repeat(n_dim, 1) # Replace indices to zero for elements that equal zero rng_2d[t == 0] = 0 # Forward fill of indices range so all zero elements will be replaced with previous non-zero index. idx = rng_2d.cummax(1).values t = t[torch.arange(n_dim)[:, None], idx] return t Note that this is a solution for 2D input but can be easily modified for more dimensions.
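As a usage sketch for the N×L×C case from the question, you can fold batch and channel into one dimension, apply the 2D version, and unfold again — assuming the forward_fill function defined in this answer:

import torch

x = torch.tensor([[[0., 1.], [1., 2.], [2., 0.], [0., 3.]]])  # shape (N=1, L=4, C=2)
N, L, C = x.shape
rows = x.permute(0, 2, 1).reshape(N * C, L)        # each (batch, channel) pair becomes a row
filled = forward_fill(rows)
result = filled.reshape(N, C, L).permute(0, 2, 1)  # back to (N, L, C)
# channel 0: [0, 1, 2, 0] -> [0, 1, 2, 2]; channel 1: [1, 2, 0, 3] -> [1, 2, 2, 3]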
5
1
77,221,564
2023-10-3
https://stackoverflow.com/questions/77221564/add-mypy-options-enable-incomplete-feature-unpack-in-pyproject-toml
I would like to use the experimental typing.Unpack in my project. In the CLI command, it works when adding --enable-incomplete-feature=Unpack. However, I have mypy issues reported by pyright (in neovim), therefore I would like to add this option in the section mypy of pyproject.toml. How can I achieve it?
It should be like this [tool.mypy] enable_incomplete_feature = ["Unpack"]
5
7
77,200,817
2023-9-29
https://stackoverflow.com/questions/77200817/generalized-goertzel-algorithm-for-better-peaks-detection-than-fft-shifted-freq
I have translated the algorithm of the Generalized Goertzel technique in Python from the Matlab code that can be found here. I have trouble using it on real data and only on real data: generating a testing "synthetic" signal with 5 sin components the Goertzel returns correct frequencies, obviously with better accuracy than FFT; both the techniques are aligned. However, if I provide market data on N samples the FFT gives the lowest frequency f = 1/N as expected; Goertzel returns all frequencies higher than 1. The time frame of the data is 1 hour, but the data is unlabeled with timestamps, it could be also seconds, so my expectation is that the two ways of calculating the frequency transform should return, apart from different accuracies, the same harmonics on the frequency domain. Why am I getting the lowest frequency in one method but greater than 1 using another method with real data? import numpy as np def goertzel_general_shortened(x, indvec, maxes_tollerance = 100): # Check input arguments if len(indvec) < 1: raise ValueError('Not enough input arguments') if not isinstance(x, np.ndarray) or x.size == 0: raise ValueError('X must be a nonempty numpy array') if not isinstance(indvec, np.ndarray) or indvec.size == 0: raise ValueError('INDVEC must be a nonempty numpy array') if np.iscomplex(indvec).any(): raise ValueError('INDVEC must contain real numbers') lx = len(x) x = x.reshape(lx, 1) # forcing x to be a column vector # Initialization no_freq = len(indvec) y = np.zeros((no_freq,), dtype=complex) # Computation via second-order system for cnt_freq in range(no_freq): # Precompute constants pik_term = 2 * np.pi * indvec[cnt_freq] / lx cos_pik_term2 = 2 * np.cos(pik_term) cc = np.exp(-1j * pik_term) # complex constant # State variables s0 = 0 s1 = 0 s2 = 0 # Main loop for ind in range(lx - 1): s0 = x[ind] + cos_pik_term2 * s1 - s2 s2 = s1 s1 = s0 # Final computations s0 = x[lx - 1] + cos_pik_term2 * s1 - s2 y[cnt_freq] = s0 - s1 * cc # Complex multiplication substituting the last iteration # and correcting the phase for potentially non-integer valued # frequencies at the same time y[cnt_freq] = y[cnt_freq] * np.exp(-1j * pik_term * (lx - 1)) return y Here are the charts of the FFT and Goertzel transform for the synthetic testing 5 components signal here the Goertzel one the original frequencies were signal_frequencies = [30.5, 47.4, 80.8, 120.7, 133] Instead, if I try to download market data data = yf.download("SPY", start="2022-01-01", end="2023-12-31", interval="1h") and try to transform the data['Close'] of SPY, this is what I get with the FFT transform with N = 800 samples and the rebuilt signal on the first 2 components (not so good) and this is what I get with the Goertzel transform Note that the first peaks on FFT are below 0.005, for Goertzel above 1. 
This is the way in which I tested the FFT on SPY data import yfinance as yf import numpy as np import pandas as pd import plotly.graph_objs as go from plotly.subplots import make_subplots def analyze_and_plot(data, num_samples, start_date, num_harmonics): num_harmonics = num_harmonics *1 # Seleziona i dati nell'intervallo specificato original_data = data data = data[data.index >= start_date] # Estrai i campioni desiderati data = data.head(num_samples) # Calcola la FFT dei valori "Close" fft_result = np.fft.fft(data["Close"].values) frequency_range = np.fft.fftfreq(len(fft_result)) print("Frequencies: ") print(frequency_range) print("N Frequencies: ") print(len(frequency_range)) print("First frequencies magnitude: ") print(np.abs(fft_result[0:num_harmonics])) # Trova le armoniche dominanti # top_harmonics = np.argsort(np.abs(fft_result))[::-1][:num_harmonics] top_harmonics = np.argsort(np.abs(fft_result[0:400]))[::-1][1:(num_harmonics + 1)] # skip first one print("Top harmonics: ") print(top_harmonics) # top_harmonics = [1, 4]#, 8, 5, 9] # Creazione del grafico per lo spettro spectrum_trace = go.Scatter(x=frequency_range, y=np.abs(fft_result), mode='lines', name='FFT Spectrum') fig_spectrum = go.Figure(spectrum_trace) fig_spectrum.update_layout(title="FFT Spectrum", xaxis=dict(title="Frequency"), yaxis=dict(title="Magnitude")) # Calcola la ricostruzione basata sulle prime N armoniche reconstructed_signal = np.zeros(len(data)) time = np.linspace(0, num_samples, num_samples, endpoint=False) # print('time') # print(time) for harmonic_index in top_harmonics[:num_harmonics]: amplitude = np.abs(fft_result[harmonic_index]) #.real phase = np.angle(fft_result[harmonic_index]) frequency = frequency_range[harmonic_index] reconstructed_signal += amplitude * np.cos(2 * np.pi * frequency * time + phase) # print('first reconstructed_signal len') # print(len(reconstructed_signal)) zeros = np.zeros(len(original_data) - 2*len(data)) reconstructed_signal = np.concatenate((reconstructed_signal, reconstructed_signal), axis = 0) # print('second reconstructed_signal len') # print(len(reconstructed_signal)) reconstructed_signal = np.concatenate((reconstructed_signal, zeros), axis = 0) original_data['reconstructed_signal'] = reconstructed_signal # print('reconstructed_signal len') # print(len(reconstructed_signal)) # print('original_data len') # print(len(original_data)) # print('reconstructed_signal[300:320]') # print(reconstructed_signal[290:320]) # print('original_data[300:320]') # print(original_data[290:320][['Close', 'reconstructed_signal']]) # reconstructed_signal = np.fft.ifft(fft_result[top_harmonics[:num_harmonics]]) # print('reconstructed_signal') # print(reconstructed_signal) # Converte i valori complessi in valori reali per la ricostruzione # reconstructed_signal_real = reconstructed_signal.real # Creazione del secondo grafico con due subplot fig = make_subplots(rows=2, cols=1, shared_xaxes=True, vertical_spacing=0.1, subplot_titles=("Original Close", "Reconstructed Close")) # Aggiungi il grafico di "Close" originale al primo subplot fig.add_trace(go.Scatter(x=original_data.index, y=original_data["Close"], mode="lines", name="Original Close"), row=1, col=1) # Aggiungi il grafico della ricostruzione al secondo subplot fig.add_trace(go.Scatter(x=original_data.index, y=original_data['reconstructed_signal'] , mode="lines", name="Reconstructed Close"), row=2, col=1) # Aggiorna il layout del secondo grafico fig.update_xaxes(title_text="Time", row=2, col=1) fig.update_yaxes(title_text="Value", row=1, col=1) 
fig.update_yaxes(title_text="Value", row=2, col=1) # Aggiorna il layout generale fig.update_layout(title="Close Analysis and Reconstruction") # Visualizza il grafico dello spettro fig_spectrum.show() # fig.update_layout(xaxis = dict(type="category")) # Aggiorna il layout dell'asse X per includere tutti i dati # fig.update_xaxes(range=[original_data.index.min(), original_data.index.max()], row=2, col=1) fig.update_xaxes(type="category", row=1, col=1) fig.update_xaxes(type="category", row=2, col=1) # Visualizza il secondo grafico con i subplot fig.show() # Esempio di utilizzo data = yf.download("SPY", start="2022-01-01", end="2023-12-31", interval="1h") analyze_and_plot(data, num_samples=800, start_date="2023-01-01", num_harmonics=2) as well as the test of SPY data on Goertzel import yfinance as yf import numpy as np import pandas as pd import plotly.graph_objs as go from plotly.subplots import make_subplots def analyze_and_plot(data, num_samples, start_date, num_harmonics): # Seleziona i dati nell'intervallo specificato original_data = data data = data[data.index >= start_date] # Estrai i campioni desiderati data = data.head(num_samples) # Frequenze desiderate frequency_range = np.arange(0, 20, 0.001) # Calcola lo spettro delle frequenze utilizzando la funzione Goertzel transform = goertzel_general_shortened(data['Close'].values, frequency_range) harmonics_amplitudes = np.abs(transform) frequency_range = frequency_range # Creazione del grafico per lo spettro spectrum_trace = go.Scatter(x=frequency_range, y=harmonics_amplitudes, mode='lines', name='FFT Spectrum') fig_spectrum = go.Figure(spectrum_trace) fig_spectrum.update_layout(title="Frequency Spectrum", xaxis=dict(title="Frequency"), yaxis=dict(title="Magnitude")) # Visualizza il grafico dello spettro fig_spectrum.show() peaks_indexes = argrelmax(harmonics_amplitudes, order = 10)[0] # find indexes of peaks peak_frequencies = frequency_range[peaks_indexes] peak_amplitudes = harmonics_amplitudes[peaks_indexes] print('peaks_indexes') print(peaks_indexes[0:30]) print('peak_frequencies') print(peak_frequencies[0:30]) print('peak_amplitudes') print(peak_amplitudes[0:30]) lower_freq_sort_peak_indexes = np.sort(peaks_indexes)[0:num_harmonics] # lower indexes <--> lower frequencies higher_amplitudes_sort_peak_indexes = peaks_indexes[np.argsort(harmonics_amplitudes[peaks_indexes])[::-1]][0:num_harmonics] print('higher_amplitudes_sort_peak_indexes') print(higher_amplitudes_sort_peak_indexes[0:10]) # used_indexes = lower_freq_sort_peak_indexes used_indexes = higher_amplitudes_sort_peak_indexes # Creazione del segnale ricostruito utilizzando i picchi time = np.linspace(0, num_samples, num_samples, endpoint=False) reconstructed_signal = np.zeros(len(time), dtype=float) print('num_samples') print(num_samples) print('time[0:20]') print(time[0:20]) print('reconstructed_signal') print(reconstructed_signal[0:10]) for index in used_indexes: phase = np.angle(transform[index]) amplitude = np.abs(transform[index]) frequency = frequency_range[index] print('phase') print(phase) print('amplitude') print(amplitude) print('frequency') print(frequency) reconstructed_signal += amplitude * np.sin(2 * np.pi * frequency * time + phase) # Estrai la parte reale del segnale ricostruito reconstructed_signal_real = reconstructed_signal print('reconstructed_signal[1]') print(reconstructed_signal[1]) print('reconstructed_signal.shape') print(reconstructed_signal.shape) zeros = np.zeros(len(original_data) - 2*num_samples) reconstructed_signal_real = 
np.concatenate((reconstructed_signal_real, reconstructed_signal_real), axis = 0) print('reconstructed_signal_real.shape') print(reconstructed_signal_real.shape) reconstructed_signal_real = np.concatenate((reconstructed_signal_real, zeros), axis = 0) print('reconstructed_signal_real.shape') print(reconstructed_signal_real.shape) original_data['reconstructed_signal'] = reconstructed_signal_real # Creazione del secondo grafico con due subplot fig = make_subplots(rows=2, cols=1, shared_xaxes=True, vertical_spacing=0.1, subplot_titles=("Original Close", "Reconstructed Close")) # Aggiungi il grafico di "Close" originale al primo subplot fig.add_trace(go.Scatter(x=original_data.index, y=original_data["Close"], mode="lines", name="Original Close"), row=1, col=1) # Aggiungi il grafico della ricostruzione al secondo subplot fig.add_trace(go.Scatter(x=original_data.index, y=original_data['reconstructed_signal'] , mode="lines", name="Reconstructed Close"), row=2, col=1) # Aggiorna il layout del secondo grafico fig.update_xaxes(title_text="Time", row=2, col=1) fig.update_yaxes(title_text="Value", row=1, col=1) fig.update_yaxes(title_text="Value", row=2, col=1) # Aggiorna il layout generale fig.update_layout(title="Close Analysis and Reconstruction") # fig.update_layout(xaxis = dict(type="category")) # Aggiorna il layout dell'asse X per includere tutti i dati # fig.update_xaxes(range=[original_data.index.min(), original_data.index.max()], row=2, col=1) fig.update_xaxes(type="category", row=1, col=1) fig.update_xaxes(type="category", row=2, col=1) # Visualizza il secondo grafico con i subplot fig.show() # Esempio di utilizzo analyze_and_plot(data, num_samples=800, start_date="2023-01-01", num_harmonics=2) Edit 30/09/2023 I tried to normalize the SPY data as suggested in the answers, but the problem is still there, here is the resulting chart
I finally found the issue. The Goertzel transform basically counts how many times each harmonic fits inside the sampling period, returning, in addition, its amplitude and phase. To get the frequency it is necessary to divide by the number of samples, something that is probably done implicitly by the standard FFT libraries. The example with 5 "synthetic" sin waves turned out to be similar between FFT and Goertzel because it was sampled over 1 second, so, for example, a 50 Hz harmonic had exactly 50 sine cycles in the window, which made the Goertzel transform come out correct too. The correction is in the following added second line of code transform = goertzel_general_shortened(data, frequency_range) frequency_range = frequency_range/num_samples harmonics_amplitudes = np.abs(transform) peaks_indexes = argrelmax(harmonics_amplitudes, order = 10)[0] # find indexes of peaks peak_frequencies = frequency_range[peaks_indexes] peak_periods = 1 / frequency_range[peaks_indexes] peak_amplitudes = harmonics_amplitudes[peaks_indexes] peak_phases = np.angle(transform[peaks_indexes]) peak_amplitudes = peak_amplitudes*peak_frequencies harmonics_amplitudes = harmonics_amplitudes*frequency_range
4
0
77,196,410
2023-9-28
https://stackoverflow.com/questions/77196410/how-can-i-set-authentication-options-for-an-azure-container-app-via-python-sdk
We're using the Python ContainerAppsAPIClient library to deploy a container app to our azure estate, and it works great however I can't find any documentation on how to set the authentication on the container app either during or after it's been created. In the portal it's super easy to do, and there are some models I've found that appear to support it, but I'm not sure what other model I need to inject them into (if any?). We're creating the ContainerApp in this kind of fashion: container_app = ContainerApp( location=container_location, tags=tags, environment_id=f"/subscriptions/{subscription_id}/resourceGroups/{shared_infra_resource_group_name}/providers/Microsoft.App/managedEnvironments/{container_app_environment}", configuration=Configuration( active_revisions_mode="Single", secrets=secrets_config, registries=[registry_credentials], ingress=ingress, ), template=template, identity=identity, ) Posible models I've found to use were: AzureActiveDirectoryLogin, AuthConfig etc. but no idea where to put them.. the documentation is pretty much non-existent around this. More specifically we want to put the container app being our azure active directory login (on the same subscription), using the SDK. Below shows what I did manually in the portal that I'd like to recreate using the SDK: I've tried the following code: client.container_apps_auth_configs.create_or_update( resource_group_name=resource_group_name, container_app_name=container_app_name, auth_config_name="current", # Code: AuthConfigInvalidName. Message: The name 'label-studio' is disallowed for authconfigs, please use the name 'current'. auth_config_envelope=AuthConfig( platform=AuthPlatform( enabled=True ), global_validation=GlobalValidation( unauthenticated_client_action="Return401" ), # Some more settings for Auth if you want 'em identity_providers=IdentityProviders( azure_active_directory=AzureActiveDirectory( enabled=True, registration=AzureActiveDirectoryRegistration( open_id_issuer="https://sts.windows.net/REDACTED-UUID/v2.0" # The azure AD app registration uri ), login=AzureActiveDirectoryLogin(), ) ), login=Login(), http_settings=HttpSettings() ) ) Except that this results in the portal showing this on the auth page: All traffic is blocked, and requests will receive an HTTP 401 Unauthorized. This is because there is an authentication requirement, but no identity provider is configured. Click 'Remove authentication' to disable this feature and remove the access restriction. Or click 'Add identity provider' to configure a way for clients to authenticate themselves. No idea why as it looks like I did provide an identity provider
When I ran your code in my environment, I too got same error in Portal as below: In my case, adding Microsoft as identity provider worked when I included existing application clientId and secret in Python code. For that, you can register one Azure AD application with Redirect URI as <container-app-url>/.auth/login/aad/callback like this: Now, create one client secret in above app and add that secret value in Container app Secret tab: When I ran below modified code by including client ID and secret of existing app, I got response like this: from azure.identity import DefaultAzureCredential from azure.mgmt.appcontainers import ContainerAppsAPIClient def main(): client = ContainerAppsAPIClient( credential=DefaultAzureCredential(), subscription_id="sub_id", ) response = client.container_apps_auth_configs.create_or_update( resource_group_name="Sri", container_app_name="containerapp04", auth_config_name="current", auth_config_envelope={ "properties": { "globalValidation": {"unauthenticatedClientAction": "Return401"}, "identityProviders": { "azureActiveDirectory": {"enabled": True, "isAutoProvisioned": True,"login": {},"registration": {"clientId": "appId","clientSecretSettingName": "secret","openIdIssuer": "https://sts.windows.net/tenantId/v2.0"}, "validation":{"allowedAudiences":["appId"]}} }, "platform": {"enabled": True}, } }, ) print(response) if __name__ == "__main__": main() Response: To confirm that, I checked the same in Portal where Microsoft is configured as identity provider successfully in container app: When I clicked on Edit option, I got below screen with identity provider properties: Reference: Create or Update Auth Config in Azure Container App using Python SDK Β· GitHub
3
4
77,220,442
2023-10-3
https://stackoverflow.com/questions/77220442/multiprocessing-pool-in-a-python-class-without-name-main-guard
I am attempting to run a multiprocessed job within a larger Python class. In a simple form, the class looks as following: class Thing: def test(self): with mp.Pool() as p: yield from p.map(str, range(20)) When I import this class to a script such as: from x import Thing t = Thing() for item in t.test(): print(item) I run into the commonly known issue: RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. My question is, how do I guard against this behaviour, without requiring the user of my class to write if __name__ == "__main__": every time they run my class function? Is there a way of performing this inside the class definition? I have tried writing from x import Thing t = Thing() if __name__ == "__main__": for item in t.test(): print(item) which solves the issue, but I do not want this to be the way a user interfaces this class.
Since the value of __name__ in the main module of a spawned child process is '__mp_main__' as discussed here, you can place a guard in the function that spawns child processes by checking if any of the ancestors' frames has a __name__ in the global namespace with a value of '__mp_main__', in which case the current process is a child and the execution should stop so that no more child would be spawned: # x.py import sys import multiprocessing as mp class Thing: def test(self): frame = sys._getframe(1) while frame: if frame.f_globals['__name__'] == '__mp_main__': return frame = frame.f_back with mp.Pool() as p: yield from p.map(str, range(20)) EDIT: An even easier way to guard against a child process is to validate that the current process name is 'MainProcess': import multiprocessing as mp class Thing: def test(self): if mp.current_process().name == 'MainProcess': with mp.Pool() as p: yield from p.map(str, range(20))
3
3
77,224,368
2023-10-3
https://stackoverflow.com/questions/77224368/init-called-twice
Here is a simplified Python code: class Base: __possible_types__ = {} def __new__(cls, *args, **kwargs) -> type: # pop the `__type__` argument because it # should be passed to the class `__init__` type_ = kwargs.pop("__type__", None) if type_ is not None: possible_type = cls.__possible_types__.get(type_) if possible_type is not None: return possible_type(*args, **kwargs) return super().__new__(cls) def __init__(self, *args, **kwargs) -> None: print(f"{self.__class__.__name__}.__init__", args, kwargs) def __init_subclass__(cls) -> None: __possible_types__ = {} for parent in cls.__mro__[1:]: if issubclass(parent, Base): parent.__possible_types__[cls.__name__] = cls class Parent(Base): pass class Child(Parent): pass class Adult(Parent): pass should_be_child = Parent( name="bob", __type__="Child" ) print(should_be_child) and here is the output: Child.__init__ () {'name': 'bob'} Child.__init__ () {'name': 'bob', '__type__': 'Child'} <__main__.Child object at 0x...> A few questions Why is __init__ called twice? How is the popped dict item (__type__) back in the second __init__ call? How do I solve this?
The problem. Two distinct calls. Two disctinct calls for Child are made. One as result of the call to Parent(), and the other when you call possible_type(*args, **kwargs) inside Base's constructor. As possible_type e.g. Child does not override __new__, it inherits the first one available from its parents. That ultimately means that Parent( ..., __type__ = ...) calls to construct a Child in the process as Child is a subclass of Parent, possible_type(*args, **kwargs) gets called, which in turn calls __new__ with no subclasses of it's own and thus it does so failling the check, and calls yet again super's __new__. This is the reason with two distinct __init__ calls get made, and why it appears to be that __new__ is "leaking" the popped dict item __type__. When actually, it's only being passed to its parent's constructor but not Child's This is made clear with some strategically put "debugging" print()s class Base: __possible_types__ = {} def __new__(cls, *args, **kwargs) -> type: # pop the `__type__` argument because it # should be passed to the class `__init__` type_ = kwargs.pop("__type__", None) if type_ is not None: possible_type = cls.__possible_types__.get(type_) if possible_type is not None: return possible_type(*args, **kwargs) print(f"\n__new__ called {cls=}\n") return super().__new__(cls) def __init__(self, *args, **kwargs) -> None: print(f"{self.__class__.__name__}.__init__", args, kwargs) def __init_subclass__(cls) -> None: __possible_types__ = {} print(f"\n__init_subclass__ called {cls}") for parent in cls.__mro__[1:]: if issubclass(parent, Base): print(f"{parent=}") parent.__possible_types__[cls.__name__] = cls class Parent(Base): pass class Child(Parent): pass class Adult(Parent): pass should_be_child = Parent( name="bob", __type__="Child" ) print(should_be_child) should_be_adult = Parent( name="lisa", __type__="Adult" ) print(should_be_adult) should_be_just_parent = Parent( name="mark", __type__ = "isNotASubclass" ) print(should_be_just_parent) Output: __init_subclass__ called <class '__main__.Parent'> parent=<class '__main__.Base'> __init_subclass__ called <class '__main__.Child'> parent=<class '__main__.Parent'> parent=<class '__main__.Base'> __init_subclass__ called <class '__main__.Adult'> parent=<class '__main__.Parent'> parent=<class '__main__.Base'> __new__ called cls=<class '__main__.Child'> Child.__init__ () {'name': 'bob'} Child.__init__ () {'name': 'bob', '__type__': 'Child'} <__main__.Child object at 0x000001CFC783EED0> __new__ called cls=<class '__main__.Adult'> Adult.__init__ () {'name': 'lisa'} Adult.__init__ () {'name': 'lisa', '__type__': 'Adult'} <__main__.Adult object at 0x000001CFC783EF10> __new__ called cls=<class '__main__.Parent'> Parent.__init__ () {'name': 'mark', '__type__': 'isNotASubclass'} <__main__.Parent object at 0x000001CFC783EE90> As you see, it is Parents __init__ that's being called with __type__. And in the example I've given avobe, the code reaches print(f"\n__new__ called {cls=}\n") in very case, despite a subclass being detected and a new instance of that class being constructed. Quick-fix. A one-line edit. You can solve this by only making the call in the case that possible_types is None == True. 
class Base: __possible_types__ = {} def __new__(cls, *args, **kwargs) -> type: # pop the `__type__` argument because it # should be passed to the class `__init__` type_ = kwargs.pop("__type__", None) if type_ is not None: possible_type = cls.__possible_types__.get(type_) if possible_type is not None: return super().__new__(possible_type) # Modified line print(f"\n__new__ called {cls=}\n") return super().__new__(cls) def __init__(self, *args, **kwargs) -> None: print(f"{self.__class__.__name__}.__init__", args, kwargs) def __init_subclass__(cls) -> None: __possible_types__ = {} print(f"\n__init_subclass__ called {cls}") for parent in cls.__mro__[1:]: if issubclass(parent, Base): print(f"{parent=}") parent.__possible_types__[cls.__name__] = cls class Parent(Base): pass class Child(Parent): pass class Adult(Parent): pass should_be_child = Parent( name="bob", __type__="Child" ) print(should_be_child) should_be_adult = Parent( name="lisa", __type__="Adult" ) print(should_be_adult) should_be_just_parent = Parent( name="mark", __type__ = "isNotASubclass" ) print(should_be_just_parent)
Output:
__init_subclass__ called <class '__main__.Parent'> parent=<class '__main__.Base'> __init_subclass__ called <class '__main__.Child'> parent=<class '__main__.Parent'> parent=<class '__main__.Base'> __init_subclass__ called <class '__main__.Adult'> parent=<class '__main__.Parent'> parent=<class '__main__.Base'> Child.__init__ () {'name': 'bob', '__type__': 'Child'} <__main__.Child object at 0x000002806AE4EF10> Adult.__init__ () {'name': 'lisa', '__type__': 'Adult'} <__main__.Adult object at 0x000002806AE4EE90> __new__ called cls=<class '__main__.Parent'> Parent.__init__ () {'name': 'mark', '__type__': 'isNotASubclass'} <__main__.Parent object at 0x000002806AE4EED0>
In this modified version only one __init__ call gets made in every case, and the code only reaches print(f"\n__new__ called {cls=}\n") when no matching subclass was found. As juanpa.arivillaga points out, 'hijacking' __new__ to build a factory pattern is very error-prone, and it can be simplified by instead implementing that pattern with a static or class method. A more "sound" solution: implementing a factory pattern via a classmethod.
class Base: __possible_types__ = {} def __init__(self, *args, **kwargs) -> None: print(f"{self.__class__.__name__}.__init__", args, kwargs) def __init_subclass__(cls) -> None: __possible_types__ = {} Base.__possible_types__[cls.__name__] = cls @classmethod def create_base(cls, *args, __type__, **kwargs): return cls.__possible_types__[__type__](*args, **kwargs) class Parent(Base): pass class Child(Parent): pass class Adult(Parent): pass should_be_child = Base.create_base( name="bob", __type__="Child" ) print(should_be_child) should_be_adult = Base.create_base( name="lisa", __type__="Adult" ) print(should_be_adult)
Output: Child.__init__ () {'name': 'bob'} <__main__.Child object at 0x0000020FB123ECD0> Adult.__init__ () {'name': 'lisa'} <__main__.Adult object at 0x0000020FB11A4ED0>
2
2
77,224,148
2023-10-3
https://stackoverflow.com/questions/77224148/matching-the-second-ip-address-in-a-range-only-when-its-following-a-specific-te
I have the following patterns (each example is in a different file): example one ----------- alpha: 192.168.50.0 - 192.168.50.24 delta: 192.168.50.100 - 192.168.50.124 other fields: more stuff .... example two ------------- gamma: 200.0.0.0 - 200.0.0.64 lamda: 200.0.0.124 - 200.0.0.255 other fields: more stuff .... I'm using Python to iterate the files, and I'm trying to find a one liner to match only 'alpha' or 'gamma' occurrences, and only the second ip in the range. So in our examples it will be: 192.168.50.24 or 200.0.0.64 Something like, that will give me only the second ip: (?<=alpha:\s)|(?<=gamma:\s).*
I'd always lean on the excellent ipaddress library whenever dealing with IP addresses because of its easy validity checking (use a simpler regex and just try it), comparison methods (ip in network?), support for IPv4 and IPv6, and understanding of special ranges (global, link_local, multicast..) - additionally, you may consider a custom class for managing these ranges If you eventually want the complete ranges and are working around it by just finding the second, something like this may more clearly get what you're after and be useful to build with (note you can match if instance.label: in a script) class IPRange: def __init__(self, line=None, *, label=None, start=None, end=None): if line and (start or end): raise TypeError("only a line or start+end are expected") # adjust regex if IPv6 is needed # or I'd suggest `.split(":", 1)` and `.split(" - ")` again if line: # pull start and end from line match = re.match(r"^([^:]+):\s*([\d\.]{7,15})\s-\s([\d\.]{7,15})$", line) if not match: raise ValueError(f"not a valid line: {line}") _label, start, end = match.groups() label = label or _label # allow setting label self.label = label # allow None # ValueError if either is not an IP Address self.ip_start = ipaddress.ip_address(start) self.ip_end = ipaddress.ip_address(end) if self.ip_start > self.ip_end: # NOTE allows == raise ValueError(f"start must be before end: cmp({self.ip_start} > {self.ip_end})") # opportunity to reject special/reserved addresses or ranges def __repr__(self): return f"{type(self).__name__}({self.label}, {self.ip_start}, {self.ip_end})" def __contains__(self, other): """ support checking `ip in IPRange` """ other = ipaddress.ip_address(other) # opportunity to reject special/reserved addresses or ranges return self.ip_start <= other <= self.ip_end Example Usage >>> r = IPRange("alpha: 192.168.50.0 - 192.168.50.24") >>> "192.168.50.1" in r True >>> "192.168.50.25" in r False >>> print(IPRange(label="foo", start="127.0.0.3", end="127.0.0.5")) IPRange(foo, 127.0.0.3, 127.0.0.5) >>> print(IPRange(label="foo", start="127.0.0.3", end="127.0.0.2")) [..] ValueError: start must be before end: cmp(127.0.0.3 > 127.0.0.2) Chewing on multiple lines for line in data.splitlines(): try: print(IPRange(line)) except ValueError as ex: print(f"failed: {repr(ex)}") failed: ValueError('not a valid line: example one') failed: ValueError('not a valid line: -----------') failed: ValueError('None does not appear to be an IPv4 or IPv6 address') IPRange(alpha, 192.168.50.0, 192.168.50.24) IPRange(delta, 192.168.50.100, 192.168.50.124) failed: ValueError('not a valid line: other fields: more stuff') failed: ValueError('not a valid line: ....') failed: ValueError('None does not appear to be an IPv4 or IPv6 address') failed: ValueError('not a valid line: example two') failed: ValueError('not a valid line: -------------') failed: ValueError('None does not appear to be an IPv4 or IPv6 address') IPRange(gamma, 200.0.0.0, 200.0.0.64) IPRange(lamda, 200.0.0.124, 200.0.0.255) failed: ValueError('not a valid line: other fields: more stuff') failed: ValueError('not a valid line: ....') Also take a look at ipaddress.summarize_address_range(start, end) which might be a useful iterator (__iter__ method?) too
3
4
77,223,323
2023-10-3
https://stackoverflow.com/questions/77223323/plotting-each-tickline-with-individually-specified-color
I am trying to change the color of the ticklines in my plot, where I would like to assign the colors based on a list of strings with color codes. I am following the following approach, but I cannot see why that does not work: import numpy as np import matplotlib.pyplot as plt x = [0, 1, 2, 3, 4, 5] y = np.sin(x) y2 = np.tan(x) fig = plt.figure() ax1 = fig.add_subplot(2, 1, 1) ax1.plot(x, y) ax2 = fig.add_subplot(2, 1, 2) ax2.plot(x, y2) colors = ['b', 'g', 'r', 'c', 'm', 'y'] ax1.set_xticks(x) for tick, tickcolor in zip(ax1.get_xticklines(), colors): tick._color = tickcolor plt.show() Does anyone know the correct implementation of this?
As noted in comments, tick._color/tick.set_color(tickcolor) isn't working due to a bug: Using tick.set_markeredgecolor is the workaround, but it doesn't seem to be the only issue. ax1.get_xticklines() yields the actual ticks lines on every two items, you should thus only zip those: for tick, tickcolor in zip(ax1.get_xticklines()[::2], colors): tick.set_markeredgecolor(tickcolor) Output: NB. also changing the ticks width for better visualization of the colors. Full code: import numpy as np import matplotlib.pyplot as plt x = [0, 1, 2, 3, 4, 5] y = np.sin(x) y2 = np.tan(x) fig = plt.figure() ax1 = fig.add_subplot(2, 1, 1) ax1.plot(x, y) ax2 = fig.add_subplot(2, 1, 2) ax2.plot(x, y2) colors = ['b', 'g', 'r', 'c', 'm', 'y'] ax1.set_xticks(x) for tick, tickcolor in zip(ax1.get_xticklines()[::2], colors): tick.set_markeredgecolor(tickcolor) tick.set_markeredgewidth(4) plt.show()
4
2
77,220,728
2023-10-3
https://stackoverflow.com/questions/77220728/pydantic-accept-integer-as-string-input
For a FastAPI Pydantic interface I want to be as tolerable as possible, such as receiving an integer for a string parameter and parse that integer to string: from pydantic import BaseModel class FooBar(BaseModel): whatever: str FooBar(whatever=12) Gives: ValidationError: 1 validation error for FooBar whatever Input should be a valid string [type=string_type, input_value=12, input_type=int] For further information visit https://errors.pydantic.dev/2.4/v/string_type I think in earlier versions this was possible. Python 3.10 pydantic==2.4.2 pydantic_core==2.10.1
This functionality was just reintroduced in version 2.4.0. You need to add the coerce_numbers_to_str configuration: from pydantic import BaseModel, ConfigDict class FooBar(BaseModel): model_config = ConfigDict(coerce_numbers_to_str=True) whatever: str
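A quick check of the resulting behaviour; this is a minimal sketch assuming pydantic >= 2.4.0 as stated above:
from pydantic import BaseModel, ConfigDict

class FooBar(BaseModel):
    model_config = ConfigDict(coerce_numbers_to_str=True)
    whatever: str

fb = FooBar(whatever=12)   # no ValidationError anymore
print(repr(fb.whatever))   # '12' : the int was coerced to str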
3
8
77,221,619
2023-10-3
https://stackoverflow.com/questions/77221619/how-can-i-find-pseudo-element-using-playwright
In my webpage I have some pseudo elements. For example: How can I find the ::after element using Playwright?
You can't get the pseudo-element because the browser doesn't expose it. But you can get the CSS properties using JavaScript. const someCSSPropertyValue = await page.locator('<selector>').first() .evaluate(el => window.getComputedStyle(el, ':after').someCSSProperty); I bet that will be helpful if you need to test that some :after was applied.
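Since the question is tagged Python, here is a sketch of the same idea with Playwright's sync API; the URL, the "h1" selector and the 'content' property are placeholders to replace with your own:
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com")  # placeholder URL
    # read a computed property of the ::after pseudo-element, e.g. 'content'
    value = page.locator("h1").first.evaluate(
        "el => window.getComputedStyle(el, ':after').getPropertyValue('content')"
    )
    print(value)
    browser.close()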
2
2
77,221,968
2023-10-3
https://stackoverflow.com/questions/77221968/how-to-convert-numpy-array-into-pydub-audiosegment
I have a TTS model and I want to combine audio. I need a way to convert the model output(numpy array) for pydub.AudioSegment to be able to combine audio This is the model output - audio[0].data.cpu().numpy() = array([ 1.90522405e-04, 3.96589050e-04, 4.41852462e-04, ..., 1.13033675e-05, -1.63643017e-05, -2.01268449e-05], dtype=float32) This is my function to combine the audio from pydub import AudioSegment from os.path import exists def creating_one_audio_file(audio): if exists("/content/audio_file.wav"): sound2 = AudioSegment.from_wav("/content/audio_file.wav") combined_sounds = audio + sound2 combined_sounds.export("/content/audio_file.wav", format="wav") else: combined_sounds = audio combined_sounds.export("/content/audio_file.wav", format="wav") creating_one_audio_file(audio[0].data.cpu().numpy())
You can rely on the audiosegment package (a wrapper around pydub.AudioSegment) and its audiosegment.from_numpy_array method, or borrow the underlying implementation from https://github.com/MaxStrange/AudioSegment/blob/master/docs/api/audiosegment.py#L1145
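If you would rather stay with plain pydub, a minimal sketch of the conversion is below. The 22050 Hz sample rate and the mono channel count are assumptions; use whatever your TTS model actually outputs:
import numpy as np
from pydub import AudioSegment

def numpy_to_audiosegment(samples: np.ndarray, sample_rate: int = 22050) -> AudioSegment:
    # float32 samples in [-1, 1] -> 16-bit PCM bytes
    pcm = (np.clip(samples, -1.0, 1.0) * 32767).astype(np.int16)
    return AudioSegment(
        data=pcm.tobytes(),
        sample_width=2,        # 16-bit
        frame_rate=sample_rate,
        channels=1,            # assuming mono TTS output
    )

# segment = numpy_to_audiosegment(audio[0].data.cpu().numpy())
# creating_one_audio_file(segment)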
2
2
77,222,038
2023-10-3
https://stackoverflow.com/questions/77222038/numpy-calculate-values-in-middle
I have a numpy array which represents my x-axis, e.g. x = [1, 2, 3, 4, 5] I want to get the values in the middle of each pair of adjacent elements, so my example array should turn into x_interp = [1.5, 2.5, 3.5, 4.5] Is there a fast and convenient way to do this using python/numpy?
If you have a numpy array, just slice with a shift: x = np.array([1, 2, 3, 4, 5]) x_interp = (x[1:]+x[:-1])/2 Output: array([1.5, 2.5, 3.5, 4.5]) An alternative (and also more generic approach to get the mean of every N items) would be to use sliding_window_view: from numpy.lib.stride_tricks import sliding_window_view as swv N = 2 x_interp = swv(x, N).mean(axis=1)
2
2
77,221,910
2023-10-3
https://stackoverflow.com/questions/77221910/python-pandas-delete-rows-in-the-past-if-there-is-a-row-on-the-1st-of-january-2
I try to force all users to have a date that starts at the earliest on the 1st of january 2023 with 3 use cases: if there is already data on the 01/01/2023, I want to keep it and delete previous rows If there is no data on the 01/01/2023 but older rows, I want to update the date to 01/01/2023 if the 1st row of a user if after the 01/01/2023, keep it. I guess I should start with a groupby() but then i'm not sure what to do? Source data:: USER Date Value USER1 01/06/2022 1000 USER1 01/01/2023 1000 USER1 01/02/2023 1200 USER1 02/02/2023 1300 USER2 01/06/2021 1000 USER2 01/02/2023 1200 USER2 02/02/2023 1250 USER3 02/06/2023 1250 df = pd.DataFrame({"USER" : ['USER1', 'USER1', 'USER1', 'USER1', 'USER2', 'USER2', 'USER2', 'USER3'], "Date" :['01/06/2022', '01/01/2023', '01/02/2023', '02/02/2023', '01/06/2021', '01/02/2023', '02/02/2023', '02/06/2023'], "Value" :['1000', '1000', '1200', '1300', '1000', '1200', '1250', '1250'],} ) Target data:: USER Date Value USER1 01/01/2023 1000 row before is deleted because there is a already a row on january 1st 2023 USER1 01/02/2023 1200 USER1 02/02/2023 1300 USER2 01/01/2023 1000 previous value "01/06/2021" is forced to 1st of january 2023 USER2 01/02/2023 1200 USER2 02/02/2023 1250 USER3 02/06/2023 1250 keep the value as is, it's already after the 1st of january 2023
If I understand correctly, you could force a minimal date with clip, then drop_duplicates keeping the latest row: # ensure datetime df['Date'] = pd.to_datetime(df['Date']) out = (df.assign(Date=df['Date'].clip(lower=pd.Timestamp('2023-01-01'))) .drop_duplicates(subset=['USER', 'Date'], keep='last') ) NB. I am assuming that the dates are already sorted (per USER). Output: USER Date Value 1 USER1 2023-01-01 1000 2 USER1 2023-01-02 1200 3 USER1 2023-02-02 1300 4 USER2 2023-01-01 1000 5 USER2 2023-01-02 1200 6 USER2 2023-02-02 1250 7 USER3 2023-02-06 1250 Intermediates: USER Date Value clip duplicated 0 USER1 2022-01-06 1000 2023-01-01 True 1 USER1 2023-01-01 1000 2023-01-01 False 2 USER1 2023-01-02 1200 2023-01-02 False 3 USER1 2023-02-02 1300 2023-02-02 False 4 USER2 2021-01-06 1000 2023-01-01 False 5 USER2 2023-01-02 1200 2023-01-02 False 6 USER2 2023-02-02 1250 2023-02-02 False 7 USER3 2023-02-06 1250 2023-02-06 False
2
1
77,221,352
2023-10-3
https://stackoverflow.com/questions/77221352/how-to-freeze-package-version-in-requirements
I have installed a Python package with pip install pyjwt[crypto] and pip freeze shows me the different packages that were installed by that action. What do I need to put in my requirements.txt to correctly freeze the versions I am developing with? I don't want to put too much in the requirements.txt, only what is needed for rebuilding the environment later. Output pip: ... pyjwt[crypto] in ..\site-packages (2.8.0) cryptography>=3.4.0 in ..\site-packages (from pyjwt[crypto]) (41.0.4) cffi>=1.12 in ..\site-packages (from cryptography>=3.4.0->pyjwt[crypto]) (1.16.0) pycparser in ..\site-packages (from cffi>=1.12->cryptography>=3.4.0->pyjwt[crypto]) (2.21) ... Output pip freeze: ... cffi==1.16.0 cryptography==41.0.4 pycparser==2.21 PyJWT==2.8.0 ... Is it ok to put only PyJWT[crypto]==2.8.0 into requirements.txt? If I put too much into requirements.txt, I would make it harder to install additional packages in the future ... I think.
You can keep your requirements.txt flexible enough, and use a constraints file. This constraint file is basically the output of pip freeze, and pins all versions, including their dependencies. If you need to add a dependency or upgrade, just do this in requirements.txt and regenerate the constraint file. That way, you are guaranteed to have consistent deployment. Technically: # Create the constraint file pip freeze > constraint.txt # use it together with your requirements: pip install -r requirements.txt -c constraint.txt Official doc: https://pip.pypa.io/en/stable/user_guide/#constraints-files
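Applied to this question's packages, the two files could look like this (versions taken from the pip freeze output in the question). Note that the extras marker pyjwt[crypto] stays in requirements.txt, while the constraint file only pins plain package versions:
# requirements.txt : only the direct dependency, left unpinned
pyjwt[crypto]

# constraint.txt : generated with `pip freeze > constraint.txt`
cffi==1.16.0
cryptography==41.0.4
pycparser==2.21
PyJWT==2.8.0

# rebuild the environment later with
pip install -r requirements.txt -c constraint.txt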
2
2
77,219,187
2023-10-3
https://stackoverflow.com/questions/77219187/most-optimal-solution-o1-to-question-given-a-single-word-return-a-list-of-an
I received this question in an interview, and coded out a solution, but it was not optimal. Given a stream of words such as: army, ramy, cat, eat, tea.... How can you store these words to support the following query: Given a word return list of anagrams present in the stream Implement Methods: public void storeWords(String[] words); public String[] getAnagrams(String word); e.g. getAnagrams("army") would return ["army", "ramy"] getAnagrams("tac") would return ["cat"] I need it to have a O(1) time complexity for lookup of getAnagrams(), which means storeWords() needs to store the anagrams in a way that looking it up would not require a loop. Currently, my solution is running on O(n) time, since I am using a loop. I'm not sure how to go about optimizing this. I was thinking of maybe using a Trie, but I don't know how that would work to give me a O(1) solution My solution is this: create an anagram_map that takes in key: sum of their unicode numbers and value: list of words with that unicode sum ex. cat would be key: ord(c) + ord(a) + ord(t) and value : [cat] getAnagrams would grab the list of possible anagrams from the sum of the unicode numbers from the word passed in. I then have a helper function of isAnagram that checks if the word is an anagram of our given word isAnagram has a map that counts the characters for the anagram and word. if everything in the map has a count of 0, it is an anagram append that to a list that we return My code is below: from collections import defaultdict class Anagram: def __init__(self): self.anagram_map = defaultdict(list) def storeWords(self, words): for word in words: unicode_sum = 0 for c in word: unicode_sum += ord(c) self.anagram_map[unicode_sum].append(word) def getAnagrams(self, word): unicode_sum = 0 res = [] for c in word: unicode_sum += ord(c) anagrams = self.anagram_map[unicode_sum] for anagram in anagrams: if self.isAnagram(anagram.word): res.append(anagram) return res def isAnagram(self, anagram, word): count_map = {} for c in anagram: if c in count_map: count_map[c] += 1 else: count_map[c] = 1 for w in word: if w in count_map: count_map[w] -= 1 else: return False for count in count_map.values(): if count != 0: return False return True anagram = Anagram() stream = ['army', 'ramy', 'cat', 'eat','tea'] anagram.storeWords(stream) print(anagram.getAnagrams('army')) print(anagram.getAnagrams('tac')) Does anyone know how I can optimize this?
To detect anagrams, using ascii values to compute a hash is not a strong way to get unique hashes and would require separate handling for collisions. For example, below 2 strings will have the same ascii sum: abd -> 295 bcb -> 295 Instead, you can sort the characters of the string and use this sorted state as the dict key and store all the words that have the same sorted state since all anagrams will have the same state when sorted. abc, cab, bca, acb --> abc(sorted state) This way, you can get all the anagrams for the given word in O(1) time on an average for most of the testcases. Snippet: class Anagram: def __init__(self): self.anagram_map = dict() def storeWords(self, words): for word in words: sortW = ''.join(sorted(word)) self.anagram_map[sortW] = self.anagram_map.get(sortW, []) self.anagram_map[sortW].append(word) def getAnagrams(self, word): return self.anagram_map.get(''.join(sorted(word)), []) Live Demo
2
1
77,220,345
2023-10-3
https://stackoverflow.com/questions/77220345/pandas-adding-data-from-a-column-to-another-dataframe-until-a-specific-time-end
df1 is like below, A time 0 32 2023-09-30 08:00:00 1 18 2023-09-30 08:01:00 2 61 2023-09-30 08:02:00 3 87 2023-09-30 08:03:00 4 46 2023-09-30 08:04:00 5 18 2023-09-30 08:05:00 6 65 2023-09-30 08:06:00 7 18 2023-09-30 08:07:00 8 10 2023-09-30 08:08:00 9 93 2023-09-30 08:09:00 and df2 is like below, AA BB Timestamp 0 27 54 2023-10-03 11:57:57.898397 1 28 56 2023-10-03 11:57:59.398397 2 29 58 2023-10-03 11:58:00.898397 3 30 60 2023-10-03 11:58:02.398397 4 31 62 2023-10-03 11:58:03.898397 5 32 64 2023-10-03 11:58:05.398397 6 33 66 2023-10-03 11:58:06.898397 7 34 68 2023-10-03 11:58:08.398397 8 35 70 2023-10-03 11:59:04.398397 I want to add the A data from df1 to df2 in a new column called CC. I will provide a lower and upper bound for the df1 data to consider. Since df1 has 7 minutes of data and df2 has 2 minutes of data, I will specify a datetime range say 2023-09-30 08:02:00 to 2023-09-30 08:08:00. I want to start adding the 'A' data from df1 to df2 starting from the lower bound, and continue adding 'A' data until df2 is filled. If 1 minute is completed in df2, I want to move on to the next entry in df1, this is how I want to fill CC. The out put should look like this, AA BB Timestamp CC 0 27 54 2023-10-03 11:57:57.898397 61 1 28 56 2023-10-03 11:57:59.398397 61 2 29 58 2023-10-03 11:58:00.898397 87 3 30 60 2023-10-03 11:58:02.398397 87 4 31 62 2023-10-03 11:58:12.898397 87 5 32 64 2023-10-03 11:58:24.398397 87 6 33 66 2023-10-03 11:58:40.898397 87 7 34 68 2023-10-03 11:58:52.398397 87 8 35 70 2023-10-03 11:59:04.398397 46 Here all examples I've given are synthetic examples, typically in my case df1 has 10500 rows of data in min frequency spanning one week and df2 has 472963 rows of data in millisecond frequency spanning 20hrs. Any help will be really appreciated.
If I understand correctly, you could define a cumsum delta from your reference (or first value of df2) and use this as a key for merge_asof: start = pd.Timestamp('2023-09-30 08:02:00') out = pd.merge_asof(df2.assign(delta=df2['Timestamp'].sub(df2['Timestamp'].iloc[0]).cumsum()), df1.assign(delta=df1['time'].sub(start).where(lambda x: x>='0').cumsum()) .dropna(subset=['delta']).rename(columns={'A': 'CC'}), on='delta', direction='forward' ).drop(columns=['delta', 'time']) Output: AA BB Timestamp CC 0 27 54 2023-10-03 11:57:57.898397 61 1 28 56 2023-10-03 11:57:59.398397 87 2 29 58 2023-10-03 11:58:00.898397 87 3 30 60 2023-10-03 11:58:02.398397 87 4 31 62 2023-10-03 11:58:03.898397 87 5 32 64 2023-10-03 11:58:05.398397 87 6 33 66 2023-10-03 11:58:06.898397 87 7 34 68 2023-10-03 11:58:08.398397 87 8 35 70 2023-10-03 11:59:04.398397 46 Intermediates: AA BB Timestamp delta2 A time delta1 0 27 54 2023-10-03 11:57:57.898397 0 days 00:00:00 61 2023-09-30 08:02:00 0 days 00:00:00 1 28 56 2023-10-03 11:57:59.398397 0 days 00:00:01.500000 87 2023-09-30 08:03:00 0 days 00:01:00 2 29 58 2023-10-03 11:58:00.898397 0 days 00:00:04.500000 87 2023-09-30 08:03:00 0 days 00:01:00 3 30 60 2023-10-03 11:58:02.398397 0 days 00:00:09 87 2023-09-30 08:03:00 0 days 00:01:00 4 31 62 2023-10-03 11:58:03.898397 0 days 00:00:15 87 2023-09-30 08:03:00 0 days 00:01:00 5 32 64 2023-10-03 11:58:05.398397 0 days 00:00:22.500000 87 2023-09-30 08:03:00 0 days 00:01:00 6 33 66 2023-10-03 11:58:06.898397 0 days 00:00:31.500000 87 2023-09-30 08:03:00 0 days 00:01:00 7 34 68 2023-10-03 11:58:08.398397 0 days 00:00:42 87 2023-09-30 08:03:00 0 days 00:01:00 8 35 70 2023-10-03 11:59:04.398397 0 days 00:01:48.500000 46 2023-09-30 08:04:00 0 days 00:03:00
2
2
77,218,039
2023-10-2
https://stackoverflow.com/questions/77218039/ways-to-speed-up-rolling-weighted-mean-on-a-pandas-dataframe
I have a large DataFrame, on which I need to calculate the rolling row-wise weighted average. I know I can do the following: import numpy as np import pandas as pd df = pd.DataFrame(np.random.rand(20000, 50)) weights = [1/9, 2/9, 1/3, 2/9, 1/9] rolling_mean = df.rolling(5, axis=1).apply(lambda seq: np.average(seq, weights=weights)) The issue is that this takes around 40 seconds on my PC. Is there any way to speed up this computation?
Code Creating a new dataframe by multiplying df by weights[0], then shifting df by one column and multiplying by weights[1], then shifting by two and multiplying by weights[2], and so on, and finally adding all of the resulting dataframes together, speeds up the computation: sum([df.shift(num, axis=1) * w for num, w in enumerate(weights)]) It takes 0.05986 sec.
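One caveat worth noting (my own observation, not part of the original answer): df.shift(num, axis=1) * weights[num] applies the weights in reverse order compared to rolling(...).apply(np.average, ...). With the symmetric weights in the question the two results are identical, but for asymmetric weights you would reverse the list and divide by the weight sum, since np.average normalises:
fast = sum(df.shift(num, axis=1) * w for num, w in enumerate(reversed(weights))) / sum(weights)
print(np.allclose(fast, rolling_mean, equal_nan=True))  # True: matches the slow rolling version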
4
5
77,218,116
2023-10-2
https://stackoverflow.com/questions/77218116/i-want-to-make-a-new-grouped-dataframe-that-has-only-the-dates-with-holidays-in
I have a dataframe: dateRep day month year cases deaths country_name Land.area..sq..km. 0 2021-09-21 21 9 2021 1162 7 Austria 82520.0 1 2021-09-20 20 9 2021 1708 7 Austria 82520.0 2 2021-09-19 19 9 2021 2072 5 Austria 82520.0 3 2021-09-18 18 9 2021 2235 9 Austria 82520.0 4 2021-09-17 17 9 2021 2283 8 Austria 82520.0 ... ... ... ... ... ... ... ... ... 6145 2021-03-05 5 3 2021 4069 15 Sweden 407310.0 6146 2021-03-04 4 3 2021 4882 19 Sweden 407310.0 6147 2021-03-03 3 3 2021 4873 18 Sweden 407310.0 6148 2021-03-02 2 3 2021 6191 23 Sweden 407310.0 6149 2021-03-01 1 3 2021 668975 13086 Sweden 407310.0 6150 rows Γ— 8 columns And my target is to make a new dataframe which has only rows that follow specific condition in 'dateRep' column The condition is being a holiday (Sunday or Saturday) using .weekday() function. when I tried doing: df.loc[(df['dateRep'].weekday() == 5) or (df['dateRep'].weekday == 6)] I got this error: AttributeError: 'Series' object has no attribute 'weekday' How do I sort out these rows?
Convert the dateRep to datetime Series and use .dt. accessor: df["dateRep"] = pd.to_datetime(df["dateRep"]) print(df.loc[(df["dateRep"].dt.weekday == 5) | (df["dateRep"].dt.weekday == 6)]) Prints: dateRep day month year cases deaths country_name Land.area..sq..km. 2 2021-09-19 19 9 2021 2072 5 Austria 82520.0 3 2021-09-18 18 9 2021 2235 9 Austria 82520.0 OR: Use .isin() for shorter code: df.loc[df["dateRep"].dt.weekday.isin([5, 6])]
2
3
77,217,974
2023-10-2
https://stackoverflow.com/questions/77217974/drop-a-single-column-from-a-pandas-dataframe-index-without-reset-index
I am dealing with a very large DataFrame with many index columns and I wish to convert a few columns from the index to regular columns. Below is a simplified example: df = pd.DataFrame( { 'col_a': [1,2,3], 'col_b': [4,5,6], 'index_1': ['a','b','c'], 'index_2': ['f','g','h'], 'index_to_column': [True,False,False], }, ).set_index(['index_1', 'index_2', 'index_to_column']) In the example above I would like index_to_column to be dropped from the index and become a normal column Is it possible to perform this operation without calling df.reset_index().set_index(['index_1', 'index_2']) as when dealing with very large numbers of columns and multiple columns to drop I find it gets confusing very quickly.
Just specify a level(s) to be reset: df.reset_index(2) index_to_column col_a col_b index_1 index_2 a f True 1 4 b g False 2 5 c h False 3 6 To reset multiple selective indices pass a list of levels positions to df.reset_index like: df.reset_index(level=[0, 2]) or a list of level names: df.reset_index(level=['index_to_column', 'index_1'])
2
3
77,196,102
2023-9-28
https://stackoverflow.com/questions/77196102/check-if-table-exists-in-unity-meta-catalog
So I am trying to build a weekly Data import to Unity Catalog in Databricks. I use python. There is no problem overwriting the table in case it exists: %sql Use catalog some_catalog dfTarget #some pandas dataframe df_sparkTarget=spark.createDataFrame(dfTarget) df_sparkTarget.write.format("delta").mode("overwrite").saveAsTable(database+"."+table) But before I ovverwrite anything I would like to check for the existence of this table: if spark.catalog.tableExists( database+"."+table): print("Table exists") else: print("Table does not exist") This returns the following error py4j.security.Py4JSecurityException: Method public boolean org.apache.spark.sql.internal.CatalogImpl.tableExists(java.lang.String) is not whitelisted on class class org.apache.spark.sql.internal.CatalogImpl Where can I whitelist this? Maybe just for this one cluster and not for all.
Yes, I have found much the same. For now, until there is a documented way to do this, I am using the approach below; the spark.catalog functionality only seems to work for the hive_metastore. def schema_exists(catalog:str, schema_name:str): query = spark.sql(f""" SELECT 1 FROM {catalog}.information_schema.schemata WHERE schema_name = '{schema_name}' LIMIT 1""") return query.count() > 0 def table_exists(catalog:str, schema:str, table_name:str): query = spark.sql(f""" SELECT 1 FROM {catalog}.information_schema.tables WHERE table_name = '{table_name}' AND table_schema='{schema}' LIMIT 1""", ) return query.count() > 0 schema_exists("catalog_name","schema_name") table_exists("catalog_name","schema_name", "table_name")
3
5
77,204,189
2023-9-29
https://stackoverflow.com/questions/77204189/how-to-work-with-topics-through-telethon
I need to receive messages from a specific topic, and send them to my channel. there are no problems with sending, but when I try to receive a message from a group, I cannot understand which topic the message is from. (it is not necessary to use telethon) import asyncio import telethon from telethon import TelegramClient, events api_id = api_hash = "" client = TelegramClient("session_name", api_id, api_hash) client.start() @client.on(events.NewMessage([-1001941512580])) async def main(event): print(event) client.run_until_disconnected()
In v1, the topic is inside the MessageReplyHeader: msg = event.message if msg.reply_to and msg.reply_to.forum_topic: topic_id = msg.reply_to.reply_to_top_id
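Put together with the handler from the question, a sketch could look like the following. The channel name, topic id and the reply_to_msg_id fallback (for messages posted directly in a topic rather than as replies) are assumptions to verify against your own group:
from telethon import TelegramClient, events

api_id = 12345        # placeholder
api_hash = "..."      # placeholder

client = TelegramClient("session_name", api_id, api_hash)

@client.on(events.NewMessage(chats=[-1001941512580]))
async def handler(event):
    msg = event.message
    topic_id = None
    if msg.reply_to and msg.reply_to.forum_topic:
        # replies inside a topic carry the topic in reply_to_top_id;
        # messages posted directly in the topic only have reply_to_msg_id
        topic_id = msg.reply_to.reply_to_top_id or msg.reply_to.reply_to_msg_id
    if topic_id == 999:  # placeholder: the topic you want to mirror
        await client.send_message("my_channel", msg.message)  # placeholder channel

client.start()
client.run_until_disconnected()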
2
0
77,217,521
2023-10-2
https://stackoverflow.com/questions/77217521/is-psycopg3-a-fork-of-psycopg2-or-a-replacement-upgrade
I see references to both psycopg2 and psycopg3, but no clear guidance wrt a roadmap for transitioning between the two. I see that over time there is a large body of SO questions regarding psycopg2. Is psycopg3 intended to be a replacement for psycopg2? Has there been a significant uptake of this version? Will there be a long-lived version of psycopg2? Are there any compelling reasons to choose one version over the other?
From the documentation of psycopg3: Psycopg 3 is a newly designed PostgreSQL database adapter for the Python programming language. Psycopg 3 presents a familiar interface for everyone who has used Psycopg 2 or any other DB-API 2.0 database adapter, but allows to use more modern PostgreSQL and Python features, such as: Asynchronous support COPY support from Python objects A redesigned connection pool Support for static typing Server-side parameters binding Prepared statements Statements pipeline Binary communication Direct access to the libpq functionalities From a glance, psycopg3 appears to support more modern python and postgresql features like typing and async. Doing so likely required a lot of backwards-incompatible changes from psycopg2, hence the new version and forked development.
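For the "compelling reasons" part, note that the day-to-day DB-API usage barely changes between the two. A minimal sketch with psycopg 3 (the connection string is a placeholder):
# pip install "psycopg[binary]"   (the psycopg 3 package is named just "psycopg")
import psycopg

with psycopg.connect("dbname=mydb user=me") as conn:   # placeholder DSN
    with conn.cursor() as cur:
        cur.execute("SELECT %s::int + %s::int", (1, 2))
        print(cur.fetchone())                          # (3,)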
15
12
77,216,542
2023-10-2
https://stackoverflow.com/questions/77216542/how-to-merge-the-multiple-dataframes-sequentially
Although I thought this question should be duplicated, I couldn't find the proper answer. I have some problems merging multiple dataframes sequentially. For example, I have four dataframes as below: df1 = pd.DataFrame({'source': ['A', 'A', 'A', 'B', 'B', 'C', 'C'], 'target': ['1', '2', '3', '4', '5', '6', '7']}) df2 = pd.DataFrame({'source': ['A', 'A'], 'temp': ['a', 'b']}) df3 = pd.DataFrame({'source': ['B', 'B'], 'temp': ['c', 'd']}) df4 = pd.DataFrame({'source': ['C'], 'temp': ['e']}) And I'd like to merge the dataframe as below: # source target temp #0 A 1 a #1 A 1 b #2 A 2 a #3 A 2 b #4 A 3 a #5 A 3 b #6 B 4 c #7 B 4 d #8 B 5 c #9 B 5 d #10 C 6 e #11 C 7 e To do so, I tried to run the code, but it returned unexpected results. #Trial 1 dfs = pd.merge(df1, df2, on='source', how='left') dfs = pd.merge(dfs, df3, on='source', how='left') # new column was created with prefix, but I want to keep the three columns; source, target, temp #Trial 2 dfs = pd.merge(df1, df2, on='source', how='left') dfs['temp']=dfs.set_index('source')['temp'].fillna(df3.set_index('source')['temp'].to_dict()).values # it only fills the fixed number of NaN value, but there are some exception; one NaN in dfs, multiple values in other df3 or df4 #Trial 3 dfs = pd.merge(df1, df2, on='source', how='left') dfs[dfs['source']=='B']['temp']=pd.merge(df1, df3, on='source', how='left')['temp'].dropna() # it didn't change the dfs
This is not a simple merge. You want to concat the df2,df3,df4, then merge with df1: df1.merge(pd.concat([df2,df3,df4]).drop_duplicates(), on='source') Output: source target temp 0 A 1 a 1 A 1 b 2 A 2 a 3 A 2 b 4 A 3 a 5 A 3 b 6 B 4 c 7 B 4 d 8 B 5 c 9 B 5 d 10 C 6 e 11 C 7 e
2
3
77,216,151
2023-10-2
https://stackoverflow.com/questions/77216151/extract-words-from-list-after-specific-character
I have two lists: list_1 = ['08667\nST 403', '08667\nST 403', '08667\nST 403'] list_2 = ['12233\nFION', '12233\nFION', '12233\nFION', '12233\nFION', '31147\nARAB\nP1454'] I want to be able to extract the names after the '\n' keyword. I am using this code: def parse_string(string): string = string.rsplit('\n', 1)[1] return string list_1 = ['08667\nST 403', '08667\nST 403', '08667\nST 403'] list_2 = ['12233\nFION', '12233\nFION', '12233\nFION', '12233\nFION', '31147\nARAB\nP1454'] new_list_1 = [] new_list_2 = [] for i in range(len(list_1)): new_list_1.append(parse_string(list_1[i])) for i in range(len(list_2)): new_list_2.append(parse_string(list_2[i])) For the list_1, it works fine: new_list_1 ['ST 403', 'ST 403', 'ST 403'] but for the list_2: new_list_2 ['FION', 'FION', 'FION', 'FION', 'P1454'] So, the last element is P1454 instead of ARAB. How can I automate this to work for all cases?
You can use a list comprehension. It works for both lists. list_1 = ['08667\nST 403', '08667\nST 403', '08667\nST 403'] list_2 = ['12233\nFION', '12233\nFION', '12233\nFION', '12233\nFION', '31147\nARAB\nP1454'] [x.split('\n')[1] for x in list_1] #['ST 403', 'ST 403', 'ST 403'] [x.split('\n')[1] for x in list_2] #['FION', 'FION', 'FION', 'FION', 'ARAB'] Now if you want P1454 instead of ARAB you can do: [x.split('\n')[2] if len(x.split('\n'))>2 else x.split('\n')[1] for x in list_2] #['FION', 'FION', 'FION', 'FION', 'P1454'] This will work for list_1 as well: [x.split('\n')[2] if len(x.split('\n'))>2 else x.split('\n')[1] for x in list_1] #['ST 403', 'ST 403', 'ST 403'] Universal solution for both lists, keeping every value after the first: [x.split('\n')[1:] for x in list_1] #[['ST 403'], ['ST 403'], ['ST 403']] [x.split('\n')[1:] for x in list_2] #[['FION'], ['FION'], ['FION'], ['FION'], ['ARAB', 'P1454']]
2
1
77,202,124
2023-9-29
https://stackoverflow.com/questions/77202124/pycharm-unable-to-debug-the-python-flask-project
I have a simple Flask based project in Pycharm. I am trying to debug this by right clicking and selecting debug option. But keeps getting below error: Connected to pydev debugger (build 232.9559.58) * Serving Flask app 'app' * Debug mode: on WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. * Running on all addresses (0.0.0.0) * Running on http://127.0.0.1:8000 * Running on http://192.168.0.29:8000 Press CTRL+C to quit * Restarting with stat C:\Python311\python.exe: can't open file 'C:\\Program': [Errno 2] No such file or directory Process finished with exit code 2 I am unable to understand why and what it is looking for in 'C:\\Program'. I am able to run the file perfectly fine and only having issues in debug. What can I try next?
This appears to be a bug in Pycharm. Multiple issues have been raised for the same in the JetBrains bug tracker. FLASK_DEBUG=1 breaks debugger when Python/PyCharm installation path has spaces Flask Debug session cannot be started in PyCharm Flask debugger is not working on Windows because of white space in installation directory if application is specified flask server run failed because of space character in file path Can not run flask project in debug mod You can follow this comment(or this comment) to get a temporary workaround.
2
3
77,214,515
2023-10-2
https://stackoverflow.com/questions/77214515/how-to-split-a-numpy-array-to-2d-array-based-on-postive-neagtive-changes
I have a numpy 1D array: import numpy as np arr = np.array([1, 1, 3, -2, -1, 2, 0, 2, 1, 1, -3, -1, 2]) I want split it into another two-dimensional array, based on changes in positive and negative values of array's elements(0 is placed in the range of positive values). But the original order of elements should be maintained. The desired result is: new_arr = [[1, 1, 3], [-2, -1], [2, 0, 2, 1, 1], [-3, -1], [2]]
You could use array_split, diff, nonzero: np.array_split(arr, np.nonzero(np.diff(arr>=0))[0]+1) Output: [array([1, 1, 3]), array([-2, -1]), array([2, 0, 2, 1, 1]), array([-3, -1]), array([2])] Intermediates: # arr>=0 [ True True True False False True True True True True False False True] # np.diff(arr>=0) [False False True False True False False False False True False True] # np.nonzero(np.diff(arr>=0))[0]+1 [ 3 5 10 12] And for lists as output: out = list(map(list, np.array_split(arr, np.nonzero(np.diff(arr>=0))[0]+1))) Output: [[1, 1, 3], [-2, -1], [2, 0, 2, 1, 1], [-3, -1], [2]] Or using itertools.groupby: from itertools import groupby out = [list(g) for _,g in groupby(arr, key=lambda x: x>=0)] Output: [[1, 1, 3], [-2, -1], [2, 0, 2, 1, 1], [-3, -1], [2]]
4
4
77,194,570
2023-9-28
https://stackoverflow.com/questions/77194570/warm-start-in-combination-with-new-data-leads-to-broadcasting-error-when-predi
I am trying to train a random forest model with sklearn. I have some original data (x, y) that I use to train the RF initially with. from sklearn.ensemble import RandomForestClassifier import numpy as np x = np.random.rand(30,20) y = np.round(np.random.rand(30)) rf = RandomForestClassifier() rf.fit(x,y) Now I get some new data that I want to use to retrain the model, but I want to keep the already existing trees in the rf untouched. So I set warm_start=True and add additional trees. x_new = np.random.rand(5,20) y_new = np.round(np.random.rand(5)) rf.n_estimators +=100 rf.warm_start = True rf.fit(x_new,y_new) So far so good. Everything works. But when I make predictions I get an error: rf.predict(x) >>> ValueError: non-broadcastable output operand with shape (30,1) doesn't match the broadcast shape (30,2) Why does this happen?
This runs fine for me in colab, with the same sklearn version 1.2.2 mentioned. I suspect the issue is similar to what was indicated by the now-deleted answer, as well as Scikit-learn Randomforest with warm_start results (non-broadcastable output ...): one of your datasets (in this case, the y_new) doesn't have the same classes. I tested this by setting y_new = np.ones(5) instead of random, and I get the same error; so I think you were just unlucky and got all ones or all zeros with whatever random seed numpy had when you ran this the first time.
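A small guard that reflects this diagnosis: check that the new batch covers the same classes before growing the forest. This is a sketch, assuming the variables from the question:
import numpy as np

# the new batch must contain every class the forest has already seen
assert set(np.unique(y_new)) == set(rf.classes_), (
    "warm_start refit needs all previously seen classes in the new data"
)

rf.n_estimators += 100
rf.warm_start = True
rf.fit(x_new, y_new)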
3
1
77,208,951
2023-10-1
https://stackoverflow.com/questions/77208951/detect-page-change-in-dash
I am developing a dash web application which is a multi page app. Below is my structure - project/ - pages/ - home.py - contact.py - graph.py - app.py - index.py In app.py, I have a component dcc.Location(id="url", refresh=True), to load different pages to a html.Div(id="page-content") using a callback. The pages are getting loaded successfully as user click on sidebar having navlinks for each page. Say I am on home page (localhost/home) and I now click to go to contact page (localhost/contact), It works. Here, before leaving the home page, I am want to save the data in dcc.store component. One way is to have hidden button and activate this button callback through javascript. For that, In assets folder I have created a custom-script.js document.addEventListener('visibilitychange', e=>{ if (document.visibilityState === 'visible') { console.log("user is focused on the page") } else { // code to trigger button click event to activate callback console.log("user left the page") } }); But what I am noticing is that even though I am switching from home page to contact page the event is not firing. May be an update of the page content by callback does not seems to affect refresh( even setting as True). But If, I do hard refresh via browser refresh button on home page the event is fired Is there a any other way to detect page change using dash or javascript?
You will probably need a dash callback and the custom script : If navigating to a page within your app via one of the Location links doesn't trigger the visibilitychange event, you can still update the store using a dash callback (defined in app.py, as well as the Location and Store components), eg. @app.callback( Output("store-id", "data"), Input('url', 'pathname'), State("store-id", "data"), prevent_initial_call=True, ) def update_store(pathname, data): data['someproperty'] = 'some value' return data And for the cases when user browser-refreshes the page or goes to another website, the visibilitychange event should fire properly. However, triggering a button click from the event handler is not the best choice I think because the script will probably terminate before the button-click callback resolves (user leaves the page), instead you could directly load and update the storage in the visibilitychange handler, eg. document.addEventListener('visibilitychange', e => { if (document.visibilityState === 'visible') { console.log("user is focused on the page"); } else { // Update store data const data = JSON.parse(localStorage.getItem('store-id')); data.someproperty = 'some value'; localStorage.setItem('store-id', JSON.stringify(data )); console.log("user left the page"); } });
2
2
77,210,131
2023-10-1
https://stackoverflow.com/questions/77210131/can-i-force-pip-to-install-dependencies-for-x86-64-architecture-in-powershell-to
I'm trying to create a lambda layer that includes pydantic/pydantic_core (Lambda python 3.11, x86_64) but getting the following error: "Unable to import module 'reddit_lambda': No module named 'pydantic_core._pydantic_core'" For context, I'm installing this on a windows x64 machine. From reading up about it here, here and here, it seems that pip installing pydantic for an incompatible architecture (win_amd64). Install log: Using cached pydantic-2.4.2-py3-none-any.whl (395 kB) Using cached pydantic_core-2.10.1-cp311-none-win_amd64.whl (2.0 MB) My question: Is there a way to force pip to install packages into a directory for a specific architecture in powershell? I saw an old discussion here, that this is possible for Linux but not sure how to do it on powershell. FYI: I also tried adding pydantic_core from binary (tried pydantic_core-2.10.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl and pydantic_core-2.10.1-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl) but I'm a bit confused about which version to choose?
It is possible as part of the options provided by the pip CLI. For example given a requirements.txt file containing the necessary dependencies, you can run: # Create layer based on requirements.txt in python/ directory pip install -r requirements.txt --platform manylinux2014_x86_64 --target ./python --only-binary=:all: This command will create a directory named python that will contain the packages downloaded to satisfy the requirements in the requirements.txt file for the manylinux2014_x86_64 platform. You can change the value provided to the --platform flag according to your needs, you can see examples of available platforms in the examples here
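A note on packaging (not part of the original answer, but required by Lambda): Python layers expect the packages under a top-level python/ folder inside the zip, which is exactly what --target ./python produces, so something like:
# after the pip install above has filled ./python
zip -r layer.zip python
# upload layer.zip as a layer for the Python 3.11 / x86_64 runtime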
4
10
77,209,153
2023-10-1
https://stackoverflow.com/questions/77209153/how-to-dynamically-create-dataframes-with-a-for-loop
My code currently looks like this: df_1 = portfolio_all[0].rename(columns={'Close': 'Close_1'} ) df_2 = portfolio_all[1].rename(columns={'Close': 'Close_2'} ) df_3 = portfolio_all[2].rename(columns={'Close': 'Close_3'} ) df_4 = portfolio_all[3].rename(columns={'Close': 'Close_4'} ) df_5 = portfolio_all[4].rename(columns={'Close': 'Close_5'} ) df_1['daily_return_1'] = df_1['Close_1'].pct_change(1) df_2['daily_return_2'] = df_2['Close_2'].pct_change(1) df_3['daily_return_3'] = df_3['Close_3'].pct_change(1) df_4['daily_return_4'] = df_4['Close_4'].pct_change(1) df_5['daily_return_5'] = df_5['Close_5'].pct_change(1) df_1['perc_ret_1'] = (1 + df_1.daily_return_1).cumprod() - 1 df_2['perc_ret_2'] = (1 + df_2.daily_return_2).cumprod() - 1 df_3['perc_ret_3'] = (1 + df_3.daily_return_3).cumprod() - 1 df_4['perc_ret_4'] = (1 + df_4.daily_return_4).cumprod() - 1 df_5['perc_ret_5'] = (1 + df_5.daily_return_5).cumprod() - 1 Is there a way to dynamically create these dataframes in a for loop or something like that without having to write each dataframe as a line of code?
You can do this with a simple Python for-loop, enumerating through the list: import pandas as pd portfolio_all = [df1, df2, df3, df4, df5] for i, df in enumerate(portfolio_all): column_name = f'Close_{i+1}' df.rename(columns={'Close': column_name}, inplace=True) df[f'daily_return_{i+1}'] = df[column_name].pct_change(1) df[f'perc_ret_{i+1}'] = (1 + df[f'daily_return_{i+1}']).cumprod() - 1 This assumes you have all the dataframes in a list. This way, you avoid repeating the same code for each DataFrame, and it works for any number of DataFrames in your portfolio_all list. Hope this helps!
2
3
77,208,733
2023-9-30
https://stackoverflow.com/questions/77208733/import-widgets-into-a-function-from-a-class-in-another-file-nameerror-name-te
In the function1 function of the main.py file, i would like to import textbox1 and textbox2 from the Page1(tk.Frame) class of the page1.py file. I get the error NameError: name 'textbox1' is not defined, because textbox1 and textbox2 are not imported correctly. This is the code I'm using. What am I doing wrong? How to solve? main.py import tkinter as tk from tkinter import ttk from tkinter import * from page1 import Page1 root = tk.Tk() root.geometry('480x320') style = ttk.Style() style.theme_use('default') style.configure('TNotebook', tabposition='wn', background='white', tabmargins=0) style.configure('TNotebook.Tab', background='white', width=10, focuscolor='yellow', borderwidth=0) style.map('TNotebook.Tab', background=[('selected', 'yellow')]) nb = ttk.Notebook(root) nb.place(x=0, y=70) page1 = Page1(nb, width=492, height=905) nb.add(page1, text='Tab 1', compound='left') def function1(textbox1, textbox2): val_1 = "example 1" textbox1.insert(0, val_1) val_2 = "example 1" textbox2.insert(0, val_2) button1 = Button(root, text="Button", command= function1(textbox1, textbox2)) button1.place(x=0, y=0) root.mainloop() page1.py import tkinter as tk from tkinter import ttk from tkinter import * from tkinter import ttk import tkinter as tk import tkinter.font as tkFont from tkinter import ttk class Page1(tk.Frame): def __init__(self, master, **kw): super().__init__(master, **kw) self.textbox1 = ttk.Entry(self, width=7) self.textbox1.place(x=10, y=10) self.textbox2 = ttk.Entry(self, width=7) self.textbox2.place(x=10, y=40)
As per my knowledge, to access textbox1 and textbox2 in the function1 function in main.py, you should keep them as instance variables (self.textbox1 and self.textbox2) and reach them through the page1 instance. import tkinter as tk from tkinter import ttk class Page1(tk.Frame): def __init__(self, master, **kw): super().__init__(master, **kw) self.textbox1 = ttk.Entry(self, width=7) self.textbox1.place(x=10, y=10) self.textbox2 = ttk.Entry(self, width=7) self.textbox2.place(x=10, y=40) And main.py should be something like this import tkinter as tk from tkinter import ttk from tkinter import * from page1 import Page1 def function1(): val_1 = "example 1" page1.textbox1.delete(0, tk.END) # Clear any existing text page1.textbox1.insert(0, val_1) val_2 = "example 2" page1.textbox2.delete(0, tk.END) # Clear any existing text page1.textbox2.insert(0, val_2) button1 = Button(root, text="Button", command=function1) button1.place(x=0, y=0) root.mainloop() Note: If you just want to edit your script, you can change the button line to use the page1 object, wrapping the call in a lambda so it is not executed immediately: button1 = Button(root, text="Button", command=lambda: function1(page1.textbox1, page1.textbox2)) I have tested this and it works for me.
2
3
77,205,133
2023-9-29
https://stackoverflow.com/questions/77205133/implementing-montgomery-ladder-methods-for-modular-exponentiation-in-python
I'm trying to implement the mongomery-ladder method for modular exponentiation for RSA (so N=pβ€’q, p,q are primes) in python, as followed in this paper: My code looks like this: x stands for base, k for exp, and N for modulus # uses montgomery-ladder method for modular exponentiation def montgomery_ladder(x, k, N): x %= N k %= N # getting the binary representation of k bin_k = list(map(int, bin(k)[2:])) # Initialize two variables to hold the intermediate results r0 = 1 r1 = x # Loop through each bit of the binary exponent from most significant to the least significant for i in range(len(bin_k)): if bin_k[i] == 0: r1 = (r1 * r0) % N r0 = (r0 ** 2) % N else: r0 = (r0 * r1) % N r1 = (r1 ** 2) % N # The final result is in r0 return r0 it doesn't work well for very large numbers, for example the following test: def main(): x = 412432421343242134234233 k = 62243535235312321213254235 N = 10423451524353243462 print(montgomery_ladder(x, k, N)) print(pow(x, k, N)) if __name__ == '__main__': main() yields: 7564492758006795519 179467895766154563 pow(x, k, n) returns the correct answer, but my function doesn't. Have any ideas?
Remove the k %= N at the start.
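For completeness, a sketch of the corrected function; it is the question's code with only that line removed. The base x may be reduced modulo N, but the exponent k lives in a different domain (it could only be reduced modulo the group order, e.g. Euler's totient of N), so k %= N silently computes a different power:
def montgomery_ladder(x, k, N):
    x %= N                                  # reducing the base is fine
    bin_k = list(map(int, bin(k)[2:]))      # bits of k, most significant first
    r0, r1 = 1, x
    for bit in bin_k:
        if bit == 0:
            r1 = (r1 * r0) % N
            r0 = (r0 * r0) % N
        else:
            r0 = (r0 * r1) % N
            r1 = (r1 * r1) % N
    return r0

# montgomery_ladder(x, k, N) now agrees with pow(x, k, N)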
3
4
77,197,885
2023-9-28
https://stackoverflow.com/questions/77197885/is-there-a-way-to-avoid-boilerplate-property-getters-in-python-subclasses-using
I am trying to create subclasses of some superclass that has properties (delineated by the property decorator) and a properties() method that returns the properties and their values as a dict. I want the subclasses to be able to make use of the inherited properties() method with minimal boilerplate code needing to be copied and pasted into each subclass definition. I'm using Python 3.8 if it makes a difference here. The emphasis here is on the boilerplate code that I want to refactor. I begin with a simple class, BaseClass, defined as follows: class BaseClass(object): def __init__(self, A, B): self._A = A self._B = B @property def A(self): return self._A @A.setter def A(self, new_value): # ... Validate input ... self._A = new_value @property def B(self): return self._B @B.setter def B(self, new_value): # ... Validate input ... self._B = new_value def properties(self): class_items = self.__class__.__dict__.items() return dict((k, getattr(self, k)) for k, v in class_items if isinstance(v, property)) I should note that I defined the properties() method at the end after reading this other stackoverflow answer. Example usage of BaseClass as defined above: >>> base_instance = BaseClass('foo','bar') >>> base_instance.properties() {'A': 'foo', 'B': 'bar'} From there, I want to create some subclasses that inherit from BaseClass and have the properties() method work the same way. Consider the following: class SubClass(BaseClass): def __init__(self, A, B): super().__init__(A, B) But this doesn't behave the way I expected: >>> sub_instance = SubClass('spam',10) >>> sub_instance.A # works as expected 'spam' >>> sub_instance.B # works as expected 10 >>> sub_instance.properties() # expected {'A': 'spam', 'B': 10} {} I know that the following alternative subclass definition produces the behavior I expected: class SubClass(BaseClass): def __init__(self, A, B): super().__init__(A, B) @BaseClass.A.getter # def A(self): # This is all boilerplate return self._A # that I need to copy and # paste in every subclass @BaseClass.B.getter # of BaseClass I define... def B(self): # return self._B # >>> sub_instance = SubClass('spam',10) >>> sub_instance.properties() {'A': 'spam', 'B': 10} Is there a cleaner way to define properties (with the property decorator) inside the superclass BaseClass so that the properties() method "just works", without the need for the boilerplate lines?
You can try to use inspect.getmro to get all base classes: from inspect import getmro class BaseClass(object): def __init__(self, A, B): self._A = A self._B = B @property def A(self): return self._A @A.setter def A(self, new_value): self._A = new_value @property def B(self): return self._B @B.setter def B(self, new_value): self._B = new_value def properties(self): out = {} for klass in getmro(self.__class__): class_items = klass.__dict__.items() for k, v in class_items: if isinstance(v, property): out[k] = getattr(self, k) return out class SubClass(BaseClass): def __init__(self, A, B): super().__init__(A, B) @property def C(self): return 42 sub_instance = SubClass("spam", 10) print(sub_instance.properties()) Prints: {'C': 42, 'A': 'spam', 'B': 10}
3
1
77,200,529
2023-9-29
https://stackoverflow.com/questions/77200529/pandas-select-three-rows-per-id-amongst-varying-number-of-rows
I have a dataset of 100s of people who have been followed up over varying amounts of time (up to 8 observations per person) and have completed a bunch of tests. The time values are always in an integer sequence for each ID. The goal of my project is to examine these changes in individuals, while sampling each person's data across 3 equally spaced intervals from all their available data. Here is a snapshot of the dataset, with just two tests for brevity. dict = { "ID": [1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4], "Time": [1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4, 5, 1, 2], "AlBa": [5, 2, 1, 8, 7, 6, 5, 9, 7, 6, 4, 2, 3, 1], "Tiri": [4, 3, 2, 10, 9, 5, 4, 5, 4, 3, 2, 1, 4, 4] } df_test = pd.DataFrame(dict) print(df_test) ID Time AlBa Tiri 0 1 1 5 4 1 1 2 2 3 2 1 3 1 2 3 2 1 8 10 4 2 2 7 9 5 2 3 6 5 6 2 4 5 4 7 3 1 9 5 8 3 2 7 4 9 3 3 6 3 10 3 4 4 2 11 3 5 2 1 12 4 1 3 4 13 4 2 1 4 I want to sample each person's data across 3 equally spaced intervals from all their available data, leading to three data points per ID. Accordingly, ID1 gets included because they have exactly three observations. ID 4 gets excluded because they have only two observations. For individuals with more than three, but an odd number of observations (e.g., ID3), I want to keep scores from their first (time 1), last (here, time 5), and middle observation (here, time 3). For individuals with more than three, but an even number of observations (e.g., ID2), I want to keep their first (time 1), last (here, time 4), and find the average of the middle two observations (here, average of times 2 and 3). The final dataset should look like this: ID Time AlBa Tiri 0 1 1.0 5.0 4 1 1 2.0 2.0 3 2 1 3.0 1.0 2 3 2 1.0 8.0 10 4 2 2.5 6.5 7 5 2 4.0 5.0 4 6 3 1.0 9.0 5 7 3 3.0 6.0 3 8 3 5.0 2.0 1 What is the best way to code this in Pandas? I have data with anywhere between 1 and 8 observations per person.
Using aggregation per group (groupby.agg) with first/median/last, then filtering the IDs with groupby.size, and reshaping with stack: g = df_test.groupby('ID') s = g.size() out = (g.agg(['first', 'median', 'last']) .loc[lambda d: s[s>2].index] # remove groups with < 2 values .stack().reset_index() #.drop(columns=['level_1']) ) Note that median take the mid-point of the sorted values, if you want the mid-point by position, use a custom function: def mid_point(s): return s.iloc[(len(s)-1)//2:len(s)//2+1].mean() g = df_test.groupby('ID') s = g.size() out = (g.agg(['first', mid_point, 'last']) .loc[lambda d: s[s>2].index] .stack().reset_index() ) Output: ID level_1 Time AlBa Tiri 0 1 first 1.0 5.0 4.0 1 1 median 2.0 2.0 3.0 2 1 last 3.0 1.0 2.0 3 2 first 1.0 8.0 10.0 4 2 median 2.5 6.5 7.0 5 2 last 4.0 5.0 4.0 6 3 first 1.0 9.0 5.0 7 3 median 3.0 6.0 3.0 8 3 last 5.0 2.0 1.0 As noted by @Lahcen in comments, you can also pre-filter before the groupby.agg with: out = (df_test[df_test.groupby('ID')['ID'].transform('count').gt(2)] .groupby('ID').agg(['first', 'median', 'last']) .stack().reset_index().drop(columns=['level_1']) ) The difference is that in one case you reuse the groupby object (which is faster), but filter after the computation (which is slower). In this alternative you have to compute the groupby twice but avoid aggregating unnecessarily. The best approach might depend on the number of groups and the proportion of groups with less than 3 values. To be tested on the real data.
2
4
77,200,419
2023-9-29
https://stackoverflow.com/questions/77200419/bulk-create-many-to-many-objects-to-self
Having model class MyTable(Model): close = ManyToManyField("MyTable") How to bulk create objects to this relation? With tables not related to itself, one could use db_payload =[MyTable.close.throught(tablea_id=x, tableb_id=y) for x,y in some_obj_list] MyTable.close.through.objects.bulk_create(db_payload) What would the keyword arguments be in case where relation is to the table itself?
In that case the through model uses from_<modelname> and to_<modelname> as the field names, i.e. from_mytable and to_mytable here. Indeed, we can see this in the source code [GitHub]:
to = make_model_tuple(to_model)[1]
from_ = klass._meta.model_name
if to == from_:
    to = "to_%s" % to
    from_ = "from_%s" % from_
So you can work with:
MyTable.close.through.objects.bulk_create(
    [
        MyTable.close.through(from_mytable_id=x, to_mytable_id=y)
        for x, y in some_obj_list
    ]
)
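If you would rather not hard-code the field names, you can also inspect the auto-generated through model at runtime. A quick sketch (run it in the Django shell); some_obj_list is the list of ID pairs from the question, and ignore_conflicts is optional, only useful if some pairs may already exist:
through = MyTable.close.through
print([f.name for f in through._meta.get_fields()])
# expected to include 'from_mytable' and 'to_mytable' for this model

through.objects.bulk_create(
    [through(from_mytable_id=x, to_mytable_id=y) for x, y in some_obj_list],
    ignore_conflicts=True,
)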
3
1
77,198,291
2023-9-28
https://stackoverflow.com/questions/77198291/how-do-i-concatenate-columns-values-all-but-one-to-a-list-and-add-it-as-a-colu
I have the input in this format:
import polars as pl
data = {"Name": ['Name_A', 'Name_B','Name_C'], "val_1": ['a',None, 'a'],"val_2": [None,None, 'b'],"val_3": [None,'c', None],"val_4": ['c',None, 'g'],"val_5": [None,None, 'i']}
df = pl.DataFrame(data)
print(df)
shape: (3, 6)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Name ┆ val_1 ┆ val_2 ┆ val_3 ┆ val_4 ┆ val_5 β”‚
β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚
β”‚ str ┆ str ┆ str ┆ str ┆ str ┆ str β”‚
β•žβ•β•β•β•β•β•β•β•β•ͺ═══════β•ͺ═══════β•ͺ═══════β•ͺ═══════β•ͺ═══════║
β”‚ Name_A ┆ a ┆ null ┆ null ┆ c ┆ null β”‚
β”‚ Name_B ┆ null ┆ null ┆ c ┆ null ┆ null β”‚
β”‚ Name_C ┆ a ┆ b ┆ null ┆ g ┆ i β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
I want the output as:
shape: (3, 7)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Name ┆ val_1 ┆ val_2 ┆ val_3 ┆ val_4 ┆ val_5 ┆ combined β”‚
β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚
β”‚ str ┆ str ┆ str ┆ str ┆ str ┆ str ┆ list[str] β”‚
β•žβ•β•β•β•β•β•β•β•β•ͺ═══════β•ͺ═══════β•ͺ═══════β•ͺ═══════β•ͺ═══════β•ͺ══════════════════════║
β”‚ Name_A ┆ a ┆ null ┆ null ┆ c ┆ null ┆ ["a", "c"] β”‚
β”‚ Name_B ┆ null ┆ null ┆ c ┆ null ┆ null ┆ ["c"] β”‚
β”‚ Name_C ┆ a ┆ b ┆ null ┆ g ┆ i ┆ ["a", "b", "g", "i"] β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
I want to combine all the columns into a list except the Name column. I have simplified the data for this question, but in reality there are many columns of the val_N format, so generic code that does not require listing each column name would be great.
For the main question, you can do:
df.with_columns(combined = pl.concat_list(pl.exclude('Name')))
pl.exclude selects all columns except the ones given. To get rid of the nulls in the final list, Polars 0.19.4 introduced list.drop_nulls:
df.with_columns(combined = pl.concat_list(pl.exclude('Name')).list.drop_nulls())
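If more non-value columns are added later, it can be safer to select the value columns by name pattern instead of excluding Name. This is a sketch assuming every value column follows the val_N naming from the question (pl.col treats a string anchored with ^ and $ as a regex):
import polars as pl

out = df.with_columns(
    combined=pl.concat_list(pl.col(r"^val_\d+$")).list.drop_nulls()
)
print(out)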
2
3
77,197,671
2023-9-28
https://stackoverflow.com/questions/77197671/splitting-a-column-with-delimiter-and-place-a-value-in-the-right-column
I have a data frame with a column that can potentially be filled with 3 options (a, b, and/or c) separated by a comma delimiter.
import pandas as pd

df = pd.DataFrame({'col1':['a,b,c', 'b', 'a,c', 'b,c', 'a,b']})
I want to split this column based on ','
df['col1'].str.split(',', expand=True)
The problem with this is that the new columns are filled positionally from the left, whereas I want them filled based on the values: all a's in the first column, b's in the second column, and c's in the third column.
Using str.get_dummies: tmp = df['col1'].str.get_dummies(',') out = tmp.mul(tmp.columns) Output: a b c 0 a b c 1 b 2 a c 3 b c 4 a b With NaNs and custom headers: tmp = df['col1'].str.get_dummies(',') out = (tmp.mul(tmp.columns).where(tmp>0) .rename(columns={'a': 'X', 'b': 'Y', 'c': 'Z'}) ) Output: X Y Z 0 a b c 1 NaN b NaN 2 a NaN c 3 NaN b c 4 a b NaN
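If you also want to keep the original column and guarantee that all three letters show up as columns even when one is missing from the data, you can reindex and join the result back onto df. A small sketch, assuming the letters are exactly a, b and c as in the question:
tmp = (df['col1'].str.get_dummies(',')
         .reindex(columns=['a', 'b', 'c'], fill_value=0))
out = df.join(tmp.mul(tmp.columns).where(tmp > 0))
print(out)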
3
3
77,197,748
2023-9-28
https://stackoverflow.com/questions/77197748/arent-the-values-supposed-to-sum-up-for-each-bar
I was expecting, for example, the F bar to have 8+9=17 and not only 9 (the last value for F).
import matplotlib.pyplot as plt

x = ['A', 'B', 'C', 'D', 'D', 'D', 'D', 'E', 'F', 'F']
y = [ 5 , 8 , 7, 9, 9, 2, 7, 8, 8, 9 ]

fig, ax = plt.subplots()

ax.bar(x, y)

plt.show();
Can someone explain the logic, please?
No, this is expected: the bars are drawn on top of each other (superimposed), not summed. You can see this by lowering the opacity, for example:
ax.bar(x, y, alpha=0.1)
You can use pandas to group and sum the values:
pd.Series(y).groupby(x).sum().plot.bar()
Output:
Or in pure python:
out = {}
for X, Y in zip(x, y):
    out[X] = out.get(X, 0) + Y

fig, ax = plt.subplots()
ax.bar(*zip(*out.items()))
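If you want to keep the plain ax.bar call (for example to control the styling yourself) rather than pandas' .plot.bar(), you can aggregate first and pass the totals in. A short sketch using the data from the question:
import pandas as pd
import matplotlib.pyplot as plt

x = ['A', 'B', 'C', 'D', 'D', 'D', 'D', 'E', 'F', 'F']
y = [5, 8, 7, 9, 9, 2, 7, 8, 8, 9]

totals = pd.Series(y).groupby(x).sum()   # F becomes 8 + 9 = 17, D becomes 27, etc.

fig, ax = plt.subplots()
ax.bar(totals.index, totals.values)
plt.show()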
2
3