question_id: int64 (values 59.5M to 79.4M)
creation_date: string (length 8 to 10)
link: string (length 60 to 163)
question: string (length 53 to 28.9k)
accepted_answer: string (length 26 to 29.3k)
question_vote: int64 (values 1 to 410)
answer_vote: int64 (values -9 to 482)
77,442,172
2023-11-8
https://stackoverflow.com/questions/77442172/ssl-certificate-verify-failed-certificate-verify-failed-unable-to-get-local-is
I'm working on scripts to connect to AWS and recently started getting an error. When I try to install a Python module or execute a script, I get the following: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129) It appears I have defined the certificate in the environment path, so I'm not sure what else to use to troubleshoot the issue.
Try this in the command line: pip install pip-system-certs I've been struggling with the CERTIFICATE_VERIFY_FAILED error as well when using the requests module. I tried installing certifi but that didn't help. The only solution to my problem was to install pip-system-certs. For some reason that allowed requests to access a local certificate. Also: requests.get(url, verify=False) is not recommended for a production environment because it basically turns certificate verification off.
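If the failure is caused by a missing or proxy-replaced root certificate, a safer alternative to verify=False is to point requests (and pip) at an explicit CA bundle. A minimal sketch, assuming certifi is installed; the URL is just an example:

import certifi
import requests

# Use an explicit, trusted CA bundle instead of disabling verification.
# If a corporate proxy re-signs TLS traffic, append its root certificate to this
# bundle, or set the REQUESTS_CA_BUNDLE environment variable to your own .pem file.
response = requests.get("https://pypi.org", verify=certifi.where())
print(response.status_code)

# For pip itself, the equivalent setting is: pip config set global.cert /path/to/ca-bundle.pem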
21
43
77,450,814
2023-11-9
https://stackoverflow.com/questions/77450814/how-to-dynamically-route-and-authenticate-upstream-proxies-in-mitmproxy-based-on
Hello Stack Overflow community, I am working on a project using mitmproxy and I'm facing a challenge where I need to dynamically route requests to different upstream proxies based on the URL, along with handling authentication for these proxies. I would appreciate any guidance or suggestions on how to implement this. Requirements: Dynamic Proxy Routing: If the incoming request URL is https://example.com/123, mitmproxy should forward it through "Proxy A". If the URL is https://example.com/456, it should use "Proxy B". Authentication for Each Proxy: Both "Proxy A" and "Proxy B" require authentication. The solution needs to handle this, ensuring the correct credentials are used based on which proxy is selected Implementation in an Addon I am looking to implement this as an addon in mitmproxy, without using specific command-line arguments like --mode upstream:http://example.com:8081. My Attempts/Research: I've looked into the documentation but haven't found a clear way to change the upstream proxy dynamically based on the request URL, especially when it comes to incorporating authentication for different proxies. Questions: How can I programmatically route requests to different upstream proxies based on the URL in mitmproxy? What is the most efficient method to authenticate with these proxies, keeping in mind that each proxy has different credentials? Are there particular functions or modules within mitmproxy that I should look into for achieving this? Any code examples, documentation references, or insights into how to approach this in mitmproxy would be extremely helpful. Thank you in advance for your help! below is the code I tried but not satisfied import base64 from mitmproxy import http class DynamicUpstreamProxy: def __init__(self): self.proxy_A = ("upstream-proxy-A.com", 8081) self.proxy_B = ("upstream-proxy-B.com", 8082) self.proxy_A_auth = self.encode_credentials("usernameA", "passwordA") self.proxy_B_auth = self.encode_credentials("usernameB", "passwordB") def encode_credentials(self, username, password): credentials = f"{username}:{password}" encoded_credentials = base64.b64encode(credentials.encode()).decode() return f"Basic {encoded_credentials}" def request(self, flow: http.HTTPFlow): url = flow.request.pretty_url if url.startswith("https://example.com/123"): # Upstream Proxy A flow.live.change_upstream_proxy_server(self.proxy_A) flow.request.headers["Proxy-Authorization"] = self.proxy_A_auth elif url.startswith("https://example.com/456"): # Upstream Proxy B flow.live.change_upstream_proxy_server(self.proxy_B) flow.request.headers["Proxy-Authorization"] = self.proxy_B_auth addons = [ DynamicUpstreamProxy() ] then run addon mitmproxy -s my_upstream_addon.py
How about something like the below? This routes each request to an upstream proxy based on the value of a custom header called "X-Upstream-Proxy" or no upstream if the header does not exist (tested with mitmproxy v10.1.3). Regarding authentication with the upstream proxy server, I haven't tested this but I assume an upstream proxy value of "http://user:pass@proxy-hostname:8080" or similar should work. This code can be easily modified to run as a command line add-on to mitmproxy, take a look at a relevant example here: https://github.com/mitmproxy/mitmproxy/blob/main/examples/contrib/change_upstream_proxy.py import asyncio from urllib.parse import urlparse from mitmproxy.addons.proxyserver import Proxyserver from mitmproxy.options import Options from mitmproxy.tools.dump import DumpMaster from mitmproxy.http import HTTPFlow from mitmproxy.connection import Server from mitmproxy.net.server_spec import ServerSpec UPSTREAM_PROXY_HEADER = 'X-Upstream-Proxy' def get_upstream_proxy(flow: HTTPFlow) -> tuple[str, tuple[str, int]] | None: upstream_proxy = flow.request.headers.get(UPSTREAM_PROXY_HEADER) if upstream_proxy is not None: parsed_upstream_proxy = urlparse(upstream_proxy) if parsed_upstream_proxy.scheme not in ('http', 'https'): return None del flow.request.headers[UPSTREAM_PROXY_HEADER] return parsed_upstream_proxy.scheme, (parsed_upstream_proxy.hostname, parsed_upstream_proxy.port) return None class DynamicUpstreamProxy: def request(self, flow: HTTPFlow) -> None: upstream_proxy = get_upstream_proxy(flow) print(flow.request) if upstream_proxy is not None: has_proxy_changed = upstream_proxy != flow.server_conn.via server_connection_already_open = flow.server_conn.timestamp_start is not None if has_proxy_changed and server_connection_already_open: # server_conn already refers to an existing connection (which cannot be modified), # so we need to replace it with a new server connection object. flow.server_conn = Server(address=flow.server_conn.address) flow.server_conn.via = ServerSpec(upstream_proxy) else: flow.server_conn.via = None if __name__ == '__main__': options = Options(listen_host='127.0.0.1', listen_port=8080, http2=True, mode=['upstream:http://dummy:8888/']) m = DumpMaster(options, with_termlog=True, with_dumper=False, loop=asyncio.get_event_loop()) m.server = Proxyserver() m.addons.add(DynamicUpstreamProxy()) asyncio.run(m.run())
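For completeness, an untested client-side sketch of how a request could opt into an upstream proxy with the add-on above. It follows the answer's (untested) assumption that credentials can be embedded in the header URL, and it disables TLS verification only because mitmproxy re-signs traffic with its own CA unless that CA is installed:

import requests

# Route the request through the local mitmproxy instance started by the add-on above.
proxies = {
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
}
headers = {
    # Tell the add-on which upstream proxy to use for this request
    # (credentials in the URL are an untested assumption, as noted above).
    "X-Upstream-Proxy": "http://usernameA:passwordA@upstream-proxy-A.com:8081",
}
response = requests.get(
    "https://example.com/123",
    headers=headers,
    proxies=proxies,
    verify=False,  # or point verify= at mitmproxy's CA certificate instead
)
print(response.status_code)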
4
3
77,446,605
2023-11-8
https://stackoverflow.com/questions/77446605/running-python-poetry-unit-test-in-github-actions
I have my unittests in a top level tests/ folder for my Python project that uses poetry (link to code). When I run my tests locally, I simply run: poetry run python -m unittest discover -s tests/ Now, I want to run this as CI in Github Actions. I added the following workflow file: name: Tests on: push: branches: [ "master" ] pull_request: branches: [ "master" ] permissions: contents: read jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - uses: actions/setup-python@v3 with: python-version: "3.9" - name: Install dependencies run: | python -m pip install --upgrade pip pip install pylint poetry - name: Run tests run: poetry run python -m unittest discover -s tests/ -p '*.py' - name: Lint run: pylint $(git ls-files '*.py') But, this fails (link to logs): Run poetry run python -m unittest discover -s tests/ -p '*.py' Creating virtualenv fixed-income-annuity-WUaNY9r8-py3.9 in /home/runner/.cache/pypoetry/virtualenvs E ====================================================================== ERROR: tests (unittest.loader._FailedTest) ---------------------------------------------------------------------- ImportError: Failed to import test module: tests Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/unittest/loader.py", line 436, in _find_test_path module = self._get_module_from_name(name) File "/opt/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/unittest/loader.py", line 377, in _get_module_from_name __import__(name) File "/home/runner/work/fixed_income_annuity/fixed_income_annuity/tests/tests.py", line 3, in <module> from main import Calculator File "/home/runner/work/fixed_income_annuity/fixed_income_annuity/main.py", line 6, in <module> from dateutil.relativedelta import relativedelta ModuleNotFoundError: No module named 'dateutil' ---------------------------------------------------------------------- Ran 1 test in 0.000s FAILED (errors=1) Error: Process completed with exit code 1. Why does this fail in Github Actions but works fine locally? I understand I can go down the venv route to make it work but I prefer poetry run since its a simple 1 liner.
You need to install the dependencies in your action in order for it to work. Adding poetry install after your pip statement is an immediate fix, but there are some further tweaks you should make. Your project needs to be tweaked for pytest to pick up your tests. pytest requires that your files be prefixed with test_, and all test classes should start with Test. You should have poetry manage pytest and pylint as dev dependencies so that they are installed within the venv only when you include them in your github actions (and locally) by running poetry install --with dev: # in pyproject.toml [tool.poetry.group.dev.dependencies] pytest = "^7.4.3" pylint = "^3.0.2" You'll also want to include the current directory in your pythonpath for pytest: # in pyproject.toml [tool.pytest.ini_options] pythonpath = [ "." ] You'll also want to add an init-hook for pylint to handle imports correctly while in the venv: [MASTER] init-hook='import sys; sys.path.append(".")' # ... From there, you can just use poetry to manage both pytest and pylint. # snippet of test.yml # ... - name: Install dependencies run: | python -m pip install --upgrade pip pip install poetry poetry install --with dev - name: Run tests run: poetry run pytest - name: Lint run: poetry run pylint $(git ls-files '*.py')
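For reference, a conforming test module under this setup might look like the sketch below; the no-argument Calculator() call is a placeholder, since the real constructor signature is project-specific:

# tests/test_calculator.py  (renamed from tests/tests.py so pytest discovers it)
from main import Calculator


class TestCalculator:
    def test_calculator_can_be_created(self):
        calc = Calculator()  # placeholder: adjust to the real constructor arguments
        assert calc is not None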
5
3
77,455,300
2023-11-9
https://stackoverflow.com/questions/77455300/streamlit-how-to-add-proper-citation-with-source-content-to-chat-message
I'm currently building a RAG (Retrieval Augmented Generation) Chatbot in Streamlit that queries my own data from a Postgres database and provides it as context for GPT 3.5 to answer questions about my own data. I have got the basics working already (frontend and backend). Now I also want to display the sources used nicely. I already finished the backend part. It returns a list of dictionaries with the following format: {"id": 1, "file_name": "my_file.pdf", "url_name": "", "content": "my content", "origin": "pdf"} Some documents are actual files and therefore the "url_name" is "" and sometimes the source is an actual website. In that case "url_name" would be: "www.my_website.com" In order to display websources nicely I found the streamlit-extras library which has a nice feature to display mentions: from streamlit_extras.mention import mention mention(label=f"{file_name}", url=url_name) It displays a clickable text with an icon. The url opens in another tab: Now this does not really work for my non url documents. In that case I need something to show the actual text of the document. But I don't want to do it inline. I want something like this: The Microsoft Azure App chatbot example provides the user the option to click on a source which will then show the content of the document on the right side of the page. I want to copy this behaviour by providing a selecatable text element that will expand the right sidebar in streamlit and display the text within this sidebar. Unfortunately I have no idea how I can accomplish this as there is no way to make a selectable text that can expand the sidebar and fill it with content as far as I know.
The simplest way is to serve the PDF via Flask/FastAPI and then use its link in your mention(label=f"{file_name}", url=url_name). The good news here is that most browsers will display the PDF on a specific page when it is provided. For example, if you go to https://arxiv.org/pdf/2401.00107.pdf#page=4, you will automatically land on page 4. So, in the simplest implementation, your url_name would be something like this: http://yourdomain.com/path_to_pdfs/{source["file_name"]}#page={source["page"]}. Now, if you want to display the PDF in your Streamlit app, you would need to embed it somehow with that link. You can use an HTML iframe as mentioned here: https://discuss.streamlit.io/t/rendering-pdf-on-ui/13505/21. Another option is to use st.components.v1.iframe: https://discuss.streamlit.io/t/aligning-st-components-v1-iframe/35885/4.
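As a rough sketch of the Streamlit side (untested; the localhost URL, the /docs path and the page number are assumptions about how the separate Flask/FastAPI file server is set up):

import streamlit.components.v1 as components
from streamlit_extras.mention import mention

# One source dict in the format returned by the backend (from the question).
source = {"id": 1, "file_name": "my_file.pdf", "url_name": "", "content": "my content", "origin": "pdf"}

if source["url_name"]:
    # Web source: show a clickable mention that opens in another tab.
    mention(label=source["file_name"] or source["url_name"], url=source["url_name"])
else:
    # File source: embed the PDF served by the separate Flask/FastAPI app,
    # jumping straight to the relevant page via the #page= fragment.
    pdf_url = f"http://localhost:8000/docs/{source['file_name']}#page=1"
    components.iframe(pdf_url, height=600)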
3
1
77,446,083
2023-11-8
https://stackoverflow.com/questions/77446083/pandas-statawriter-writing-an-iterator-for-large-queries-dta-file-corrupt
I'm trying to subclass the pandas StataWriter to allow passing a SQL query with chunksize to avoid running out of memory on very large result sets. I've gotten most of the way there, but am getting an error when I try to open up the file that is written by pandas in STATA: .dta file corrupt: The marker </data> was not found in the file where it was expected to be. This means that the file was written incorrectly or that the file has subsequently become damaged. From what I can tell in my code, it should be working; the self._update_map("data") is properly update the location of the tag. Here are the contents of self._map: {'stata_data': 0, 'map': 158, 'variable_types': 281, 'varnames': 326, 'sortlist': 1121, 'formats': 1156, 'value_label_names': 1517, 'variable_labels': 2330, 'characteristics': 4291, 'data': 4326, 'strls': 52339, 'value_labels': 52354, 'stata_data_close': 52383, 'end-of-file': 52395} This end-of-file entry matches the byte-size of the file (52,395 == 52,395): -rw------- 1 tallen wrds 52395 Nov 8 08:34 test.dta Here's the code I've come up with. What am I missing to properly position the end </data> tag? from collections.abc import Hashable, Sequence from datetime import datetime from pandas import read_sql_query from pandas._typing import ( CompressionOptions, DtypeArg, FilePath, StorageOptions, WriteBuffer, ) from pandas.io.stata import StataWriterUTF8 from sqlalchemy import text from sqlalchemy.engine.base import Connection class SQLStataWriter(StataWriterUTF8): """ Writes a STATA binary file without using an eager dataframe, avoiding loading the entire result into memory. Only supports writing modern STATA 15 (118) and 16 (119) .dta files since they are the only versions to support UTF-8. """ def __init__( self, fname: FilePath | WriteBuffer[bytes], sql: str = None, engine: Connection = None, chunksize: int = 10000, dtype: DtypeArg | None = ..., convert_dates: dict[Hashable, str] | None = None, write_index: bool = True, byteorder: str | None = None, time_stamp: datetime | None = None, data_label: str | None = None, variable_labels: dict[Hashable, str] | None = None, convert_strl: Sequence[Hashable] | None = None, version: int | None = None, compression: CompressionOptions = "infer", storage_options: StorageOptions | None = None, *, value_labels: dict[Hashable, dict[float, str]] | None = None, ) -> None: # Bind the additional variables to self self.sql = text(sql) self.engine = engine self.chunksize = chunksize self.dtype = dtype # Create the dataframe for init by pulling the first row only for data in read_sql_query( self.sql, self.engine, chunksize=1, dtype=self.dtype, ): break super().__init__( fname, data, convert_dates, write_index, byteorder=byteorder, time_stamp=time_stamp, data_label=data_label, variable_labels=variable_labels, convert_strl=convert_strl, version=version, compression=compression, storage_options=storage_options, value_labels=value_labels, ) def _prepare_data(self): """ This will be called within _write_data in the loop instead. """ return None def _write_data(self, records) -> None: """ Override this to loop over records in chunksize. """ self._update_map("data") self._write_bytes(b"<data>") # Instead of eagerly loading the entire dataframe, do it in chunks. for self.data in read_sql_query( self.sql, self.engine, chunksize=self.chunksize, dtype=self.dtype, ): # Insert an index column or values expected will be off by one. Be # sure to use len(self.data) in case the last chunk is fewer rows # than the chunksize. 
self.data.insert(0, "index", list(range(0, len(self.data)))) # Call the parent function to prepare rows for this chunk only. records = super()._prepare_data() # Write the records to the file self._write_bytes(records.tobytes()) self._write_bytes(b"</data>") UPDATE: I've found that the self._map dict has different values when using the chunksize writer versus the original: With chunksize, strls is 13,939 (which doesn't work): {'stata_data': 0, 'map': 158, 'variable_types': 281, 'varnames': 326, 'sortlist': 1121, 'formats': 1156, 'value_label_names': 1517, 'variable_labels': 2330, 'characteristics': 4291, 'data': 4326, 'strls': 13939, 'value_labels': 13954, 'stata_data_close': 13983, 'end-of-file': 13995} Without chunksize, strls is 13,139 (which works): {'stata_data': 0, 'map': 158, 'variable_types': 281, 'varnames': 326, 'sortlist': 1121, 'formats': 1156, 'value_label_names': 1517, 'variable_labels': 2330, 'characteristics': 4291, 'data': 4326, 'strls': 13139, 'value_labels': 13154, 'stata_data_close': 13183, 'end-of-file': 13195} It appears that the loop writing the bytes may be padding the beginning or end: It seems to be adding four bytes per row. When I do 200 rows, the difference is 800 bytes. When I do 1000 rows, the different is 4000 bytes. I'm fairly certain this is because of the "index row" I'm adding to keep the arrays storing column types and keys from being off-by-one adding a 4-byte integer to each row. Investigating further.
After diving deep into the internals of pandas to try to make this work, I decided to move in another direction. I ended up having PostgreSQL output a CSV file, then using the readstat C library's CSV to STATA conversion. This library keeps memory usage low by iterating over the CSV, rather than eagerly loading the entire result into memory. Some tips: Be sure libcsv's dev package is installed. On Ubuntu, that means libcsv-dev, on CentOS and friends, that's libcsv-devel. You can tell if you have successfully compiled with libcsv by running the readstat command. If you have, it will include this text: Convert a CSV file with column metadata stored in a separate JSON file (see extract_metadata): bin/readstat input.csv metadata.json output.(dta|por|sav|sas7bdat|xpt|zsav|csv) If you don't see the preceding text, the configure command couldn't find libcsv's dev headers. The JSON format can be a little tricky. All columns must be STRING or NUMERIC, with an optional format passed for things like DATE or DATE_TIME. Here's one with a NUMERIC, DATE, and STRING that worked: { "type": "STATA", "variables": [ { "type": "STRING", "name": "string_column", "label": "String Column", "missing": {"type": "DISCRETE","values": [""]} }, { "type": "NUMERIC", "name": "date_column", "label": "Date Column", "format": "DATE" }, { "type": "NUMERIC", "name": "numeric_column", "label": "Numeric Columns", "format": "UNSPECIFIED" } ] }
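A rough sketch of that pipeline (the connection string, table name and metadata file are placeholders, and it assumes the readstat binary was compiled against libcsv as described above):

import subprocess

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://user:password@localhost/mydb")  # placeholder DSN

# Stream the query to CSV in chunks so the full result never sits in memory.
with open("output.csv", "w", newline="") as fh:
    for i, chunk in enumerate(
        pd.read_sql_query("SELECT * FROM big_table", engine, chunksize=10_000)
    ):
        chunk.to_csv(fh, index=False, header=(i == 0))

# Convert CSV + JSON column metadata to a Stata file with the readstat CLI.
subprocess.run(["readstat", "output.csv", "metadata.json", "output.dta"], check=True)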
3
2
77,453,594
2023-11-9
https://stackoverflow.com/questions/77453594/parallelising-functions-using-multiprocessing-in-jupyter-notebook
Edit: I updated the question with a trivial repeatable example for ipython, PyCharm and Visual Studio Code. They all fail in a different way. I am running CPU-intensive tasks in Jupyter Notebook. The task is trivial to parallelise and I am already able to do this in a notebook via threads. However, due to Python's GIL, this is inefficient as the GIL prevents effectively utilising multiple CPU cores for parallel tasks. The obvious solution would be the multiprocessing Python module, and I have this working with Python application code (not notebooks). However, due to how Jupyter Notebook operates, multiprocessing fails due to lack of __main__ entrypoint. I do not want to create separate Python modules, because they defeat the purpose of using notebooks for data research in the first place. Here is the minimum repeatable example. I create a notebook with a single cell: # Does not do actual multiprocessing, but demonstrates it fails in a notebook from multiprocessing import Process def task(): return 2 p = Process(target=task) p.start() p.join() Running this with IPython gives: ipython notebooks/notebook-multiprocess.ipynb Traceback (most recent call last): File "<string>", line 1, in <module> File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/spawn.py", line 116, in spawn_main exitcode = _main(fd, parent_sentinel) File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/spawn.py", line 125, in _main prepare(preparation_data) File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/spawn.py", line 236, in prepare _fixup_main_from_path(data['init_main_from_path']) File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/spawn.py", line 287, in _fixup_main_from_path main_content = runpy.run_path(main_path, File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 289, in run_path return _run_module_code(code, init_globals, run_name, File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 96, in _run_module_code _run_code(code, mod_globals, init_globals, File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/Users/moo/code/ts/trade-executor/notebooks/notebook-multiprocess.ipynb", line 5, in <module> "execution_count": null, NameError: name 'null' is not defined Running this with PyCharm gives: Traceback (most recent call last): File "<string>", line 1, in <module> File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/spawn.py", line 116, in spawn_main exitcode = _main(fd, parent_sentinel) File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/spawn.py", line 126, in _main self = reduction.pickle.load(from_parent) AttributeError: Can't get attribute 'task' on <module '__main__' (built-in)> Running this with Visual Studio Code gives: Traceback (most recent call last): File "<string>", line 1, in <module> File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/spawn.py", line 116, in spawn_main exitcode = _main(fd, parent_sentinel) File "/opt/homebrew/Cellar/python@3.10/3.10.13/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/spawn.py", line 126, in _main self = reduction.pickle.load(from_parent) AttributeError: Can't get attribute 'task' on <module '__main__' (built-in)> My current parallelisation using a thread pool works: results = [] def process_background_job(a, b): # Do something for the batch of data and return results pass # If you switch to futureproof.executors.ProcessPoolExecutor # here it will crash with the above error executor = futureproof.executors.ThreadPoolExecutor(max_workers=8) with futureproof.TaskManager(executor, error_policy="log") as task_manager: # Send individual jobs to the multiprocess worker pool total_tasks = 0 for look_back in look_backs: for look_forward in look_forwards: task_manager.submit(process_background_job, look_back, look_forward) total_tasks += 1 print(f"Processing grid search {total_tasks} background jobs") # Run the background jobs and read back the results from the background worker # with a progress bar with tqdm(total=total_tasks) as progress_bar: for task in task_manager.as_completed(): if isinstance(task.result, Exception): executor.join() raise RuntimeError(f"Could not complete task for args {task.args}") from task.result look_back, look_forward, long_regression, short_regression = task.result results.append([ look_back, look_forward, long_regression.rsquared, short_regression.rsquared ]) progress_bar.update() How can I use process-based parallelization in notebooks? Python 3.10, but happy to upgrade if it helps.
You appear to be using macOS, and the problems you are running into are because of the lack of full support for forking a process in macOS. As such, multiprocessing on macOS starts subprocesses using the spawn method. The following paragraphs describes why the problem occurs. The simple solution is to define the function in a module that can be imported by both the notebook and the worker processes. Alternatively, skip to the bottom for a workaround using cloudpickle. When you fork a process (the default method for starting a multiprocessing worker process on Linux), you get a copy of the memory of the parent process. Meaning the worker process can access and call any function that was defined in the __main__ module when it was forked. However, worker processes created with the spawn method start with a blank slate. As such, they must be able to find the function by reference. This means importing the origin module of the function and looking for it by name in the module. If a function was defined in the __main__ module then it must be importable and the __main__ that multiprocessing expects. When you start a Jupyter notebook it launches a new kernel. This kernel is REPL-based rather than source/file based. As such the __main__ module will be that of the kernel's REPL and not the code that you are inserting into the cells of the notebook. As it stands, there is no way to force multiprocessing to be able to pick up the source defined in a REPL on macOS (or Windows). There is, however, one possibility. If we change the way python pickles functions, then we can send the function to the worker process. cloudpickle is a third-party library that pickles functions in their entirety, rather than by reference. As such. You can monkey-patch cloudpickle into multiprocessing.reduction.ForkingPickler using a reducer_override, so that multiprocessing will use cloudpickle rather than pickle to pickle functions. import sys from multiprocessing import Pool from multiprocessing.reduction import ForkingPickler from types import FunctionType import cloudpickle assert sys.version_info >= (3, 8), 'python3.8 or greater required to use reducer_override' def reducer_override(obj): if type(obj) is FunctionType: return (cloudpickle.loads, (cloudpickle.dumps(obj),)) else: return NotImplemented # Monkeypatch our function reducer into the pickler for multiprocessing. # Without this line, the main block will not work on windows or macOS. # Alterntively, moving the defintionn of foo outside of the if statement # would make the main block work on windows or macOS (when run from # the command line). ForkingPickler.reducer_override = staticmethod(reducer_override) if __name__ == '__main__': def foo(x, y): return x * y with Pool() as pool: res = pool.apply(foo, (10, 3)) print(res) assert res == 30
3
4
77,481,604
2023-11-14
https://stackoverflow.com/questions/77481604/importing-polars-in-a-notebook-causes-kernel-to-crash
Importing Polars (polars==0.19.7) makes my kernel crash. Logs: import polars The Kernel crashed while executing code in the current cell or a previous cell. Please review the code in the cell(s) to identify a possible cause of the failure. View Jupyter log for further details. 15:51:25.818 [info] Restarted 58cbe489-1e15-4ee8-a740-73c20717dff4 15:51:30.344 [info] Handle Execution of Cells 0 for ~/dev/playground/test.ipynb 15:51:30.352 [info] Kernel acknowledged execution of cell 0 @ 1699973490351 15:51:30.447 [error] Disposing session as kernel process died ExitCode: undefined, Reason: 15:51:30.448 [info] Dispose Kernel process 6740. 15:51:30.462 [info] End cell 0 execution @ undefined, started @ 1699973490351, elapsed time = -1699973490.351s Would anyone know what is causing this?
have you tried https://github.com/pola-rs/polars/issues/12292 Do you want Polars to run on an old CPU (e.g. dating from before 2011), or on an x86-64 build of Python on Apple Silicon under Rosetta? Install pip install polars-lts-cpu. This version of Polars is compiled without AVX target features. "Celeron" and "Pentium" processors won't have AVX (even after 2011) https://www.ikmultimedia.com/faq/?id=1254 lscpu will tell you your processor family on ubuntu
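A quick way to check this on Linux (the same information lscpu reports) is to look for the avx flag in /proc/cpuinfo; a small sketch:

# Linux-only check: does the CPU advertise AVX, which the default polars wheel assumes?
with open("/proc/cpuinfo") as fh:
    flag_lines = [line for line in fh if line.startswith("flags")]

cpu_flags = set(flag_lines[0].split(":", 1)[1].split()) if flag_lines else set()
print("AVX available:", "avx" in cpu_flags)
# If this prints False, install the fallback build: pip install polars-lts-cpu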
4
3
77,471,818
2023-11-13
https://stackoverflow.com/questions/77471818/exclude-subplots-without-any-data-and-left-align-the-rest-in-relplot
Related to this question: Use relplot to plot a pandas dataframe leading to error Data for reproducible example is here: import pandas as pd data = {'Index': ['TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN'], 'Stage': [10, 10, 10, 10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14, 14, 14, 14, 14, 14, 14, 15, 15, 15, 15, 15, 15, 15, 15, 15, 16, 16, 16, 16, 16, 16, 16, 16, 16, 17, 17, 17, 17, 17, 17, 17, 17, 17, 18, 18, 18, 18, 18, 18, 18, 18, 18, 19, 19, 19, 19, 19, 19, 19, 19, 19, 20, 20, 20, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 21, 21, 21, 21, 21, 22, 22, 22, 22, 22, 22, 22, 22, 22, 23, 23, 23, 23, 23, 23, 23, 23, 23, 24, 24, 24, 24, 24, 24, 24, 24, 24, 25, 25, 25, 25, 25, 25, 25, 25, 25, 26, 26, 26, 26, 26, 26, 26, 26, 26, 27, 27, 27, 27, 27, 27, 27, 27, 27, 28, 28, 28, 28, 28, 28, 28, 28, 28], 'Z-Score CEI': [-0.688363146221944, 0.5773502691896258, -0.1132178081286216, -0.4278470185781525, 1.0564189237269357, -0.2085144140570746, -0.2085144140570747, 0.2094308186874662, 0.7196177629619716, 0.0, 0.2085144140570762, -1.3803992008056865, -1.3414801279616884, -0.898669162696764, -0.3015113445777637, -0.2953788838542738, 1.1753566728623484, 0.887285779752818, -0.7071067811865475, 0.2847473987257496, 0.1877402877114761, -0.14246249364941, 0.9686648999069224, -0.3015113445777636, -0.2734952011457535, 0.5888914135578924, -0.4488478006064821, -0.7745966692414834, 0.3052145041378634, 0.8197566686157259, 0.3377616284580471, 1.1832159566199232, -0.3015113445777637, -0.2952684241380082, -0.7971688059921156, 0.4479595231454734, -0.5805577953661853, 0.3015113445777642, -0.610500944190139, -0.7734588159553295, -0.5434722467562666, -0.2085144140570747, -0.2085144140570747, 0.8838570486142397, -0.7976091842744983, 2.213211486674006, 0.3779644730092272, -0.6900911175081499, -0.4856558012299846, -0.6044504143545613, -0.2085144140570746, -0.2085144140570747, 1.6498242899497324, 0.463638205246897, -0.064684622735315, 0.5488212999484522, -0.665392754456709, -1.096398502672124, 0.9387247898517332, -0.2085144140570747, -0.2085144140570748, 1.5486212537866115, 0.6776076459912243, 
-0.7973761651368712, 0.4773960376293314, 0.2611306759187019, -0.2450438178293888, 0.1097642599896903, -0.2085144140570746, -0.2085144140570747, 1.2468175442040146, 0.4912008775378222, -0.8071397220005339, 0.3015113445777636, -0.4051430868010012, -0.9843673918740764, 0.4231429298696365, -0.2085144140570746, -0.2182178902359924, 1.0617336112420042, 0.4221998839727844, -0.2267786838055363, 0.2847473987257496, 1.2708306299144654, 2.4058495687034616, -0.1042572070285372, 4.79583152331272, 4.79583152331272, -0.1758750648062869, 0.9614146130140746, -0.6493094697110509, 0.2847473987257496, -0.0566333001085325, 0.0970016157961683, -0.3380617018914065, -0.2085144140570746, -0.2132007163556104, 1.6462867435913509, 0.8920062635166146, -0.649519052838329, 0.2847473987257496, -0.5727902328114448, -0.385256843427376, 0.123403510468459, -0.2085144140570747, -0.2085144140570747, 0.7206954054604126, -0.0169294393471337, -0.1547646465068273, 0.3900382256192578, -0.91200685504817, -0.7643838011372592, -0.8553913029328061, -0.2085144140570746, -0.2132007163556104, 1.999517273479448, 0.2135313581345105, 0.3577708763999664, 0.2085144140570741, -0.5245759407883583, -0.3972170332271401, 0.1363988678940945, -0.2085144140570746, -0.2085144140570747, 2.180043023382912, 0.6949201395674811, -0.0345238339879863, 0.3872983346207417, -1.054383845470446, -0.7524909974608698, -0.79555728417573, -0.2085144140570747, -0.2085144140570747, 2.597515932302782, -0.0173575308522844, -0.7839294959021852, 0.5496481403962044, 0.3346732026206391, -0.1729151200242987, 0.8108848540793832, -0.2085144140570747, -0.2085144140570747, -0.1975075078549267, -0.1333012766349092, -0.7300956427599692, 0.3495310368212778, -0.9383516638143292, 0.3757624051611033, -0.9198662110078, -0.2085144140570747, -0.2085144140570747, 0.1077379509580834, -0.0391099277150297, -0.8006407690254357, 0.5226257719601375, 0.2650955994479978, -0.3323178678594628, 1.348187695720845, -0.2085144140570746, -0.2085144140570748, 0.6009413558916348, 0.455353435995126, -0.5933908290969269, 0.0, 0.1226864783178058, -0.0252747129054563, 0.8212299340934688, -0.2085144140570746, -0.2132007163556105, -0.8954835101738379, -1.1134420487718968], 'Type': ['Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 
'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI']} df = pd.DataFrame(data) I want to plot the data; rows should be based on the column Type, cols should be based on the column Index, the x-axis should be Z-Score CEI, and the y-axis should be based on Stage column. Currently, I am using relplot to do this: df = df.groupby('Index').filter(lambda x: not x['Z-Score CEI'].isna().all()) df["Type"] = df["Type"].astype("category") df["Index"] = df["Index"].astype("category") df["Type"] = df["Type"].cat.remove_unused_categories() df["Index"] = df["Index"].cat.remove_unused_categories() g = sns.relplot( data=df, x='Z-Score CEI', y='Stage', col='Index', row='Type', facet_kws={'sharey': True, 'sharex': True}, kind='line', legend=False, ) for (i,j,k), data in g.facet_data(): if data.empty: ax = g.facet_axis(i, j) ax.set_axis_off() However, this leads to a plot where the empty plots are distorting the placement of the subplots with data. I want there to be no empty areas. Current output looks like so: In the graphic above, I want to remove all the subplots which have no data. This will result in different rows having different number of subplots e.g. 1st row might have 5 subplots and 2nd row will have only 4 subplots etc. I want each row to only have the same Type, not mix multiple Types.
Here is another solution that is based on @mwaskom's suggestion in the comments. The basic idea is to create an auxiliary column where for each Type, existing Index values are labeled 0,1,2,... which will act as the column index in the FacetGrid. Then after plotting the relplot, remove all Axes without data and fix the title of the ones with data by replacing the column index by the "real" Index value. # label existing Type-Index pairs col_idx = df.value_counts(['Type', 'Index']).groupby(level=0, observed=False).cumcount().astype(str) # map the labels back to the dataframe df1 = df.merge(col_idx.reset_index(name='column_loc'), on=['Type', 'Index'], how='left') # plot replot g = sns.relplot( data=df1, # <--- new dataframe x='Z-Score CEI', y='Stage', col='column_loc', # <--- column is by the newly created column row='Type', facet_kws={'sharey': True, 'sharex': True}, kind='line', legend=False, ) for ax in g.axes.flat: if not ax.lines: g.fig.delaxes(ax) # remove empty subplots else: # fix the title typ, loc = (x.split(' = ')[1] for x in ax.get_title().split(' | ')) idx, = col_idx[col_idx==loc].loc[typ].index ax.set_title(f"Type = {typ} | Index = {idx}") I think for this particular task, matplotlib is very easy to use IMO. It's because both Type and Index columns are dtype Categorical, so by passing observed=True to pandas groupby, we can simply drop Index values that don't exist for each Type. Basically, we can use a nested groupby to create a sub-dataframe which can be fed into lineplot. However, because we need to manually plot each lineplot, it may be slow (maybe not since relplot is slow anyway). import matplotlib.pyplot as plt gby_obj = df.groupby('Type', observed=True) nrows = gby_obj.ngroups ncols = gby_obj['Index'].nunique().max() fig, axs = plt.subplots(nrows, ncols, figsize=(20,20), sharey=True, sharex=True) for i, (typ, g1) in enumerate(gby_obj): for j, (idx, g2) in enumerate(g1.groupby('Index', observed=True)): sns.lineplot(data=g2, x='Z-Score CEI', y='Stage', ax=axs[i,j]) axs[i,j].set_title(f'Type = {typ} | Index = {idx}') for a in axs[i,j+1:]: fig.delaxes(a) sns.despine(fig, top=True, right=True) fig.tight_layout()
3
3
77,483,981
2023-11-14
https://stackoverflow.com/questions/77483981/80-second-delay-using-google-cloud-speechrecognition-with-python-3-9-on-rpi3b
I'm using the PyPI code ( https://pypi.org/project/SpeechRecognition/ ) cleaned up to use only Google Cloud SpeechRecognition. Google JSON credentials are in the shell's environment, and working. I've enabled the Cloud Speech-to-Text API, got the JSON credentials, and the service calls ARE hitting the API. The microphone is fine, and the recording bit happens quickly. However, it's taking fully 80 seconds to perform the API call! I've monitored the network traffic, and I can see that the API connection kinda sits idle for 78 seconds, and then TX/RXs really fast in the final 2 seconds. How can I speed this up? Could it be a slow authentication handshake that I might mend? MORE INFORMATION: My application performs 3 API calls: Google Speech-to-Text | Google Translate text-to-text | Google Text-to-Speech. Those API calls ALWAYS take 80 seconds, 20s & 80s respectively. Thanks a mil! The delay happens on the 2nd last line here: print("0 seconds") try: print("Google Cloud Speech thinks you said " + r.recognize_google_cloud(audio)) print("80th second")
Problem solved! It was the SSL certs; bouquets to @Dean Van Greune & @VonC My fibre router (Sagemcom) was blocking the Pi's SSL certs, or forcing it to a different port, creating massive delays. I remember solving the same problem for JavaMail TLS a while back, and wanting to take the bat to the router ("Office Space"-style). Swapping to the hotspot on my phone, all this is working faster than the lightiest lightning now. Thanks for your help & suggestions guys!!!
4
3
77,459,012
2023-11-10
https://stackoverflow.com/questions/77459012/when-mp4-files-encoded-with-h264-are-set-to-slices-n-where-can-i-find-out-how-m
I am doing an experiment on generating thumbnails for web videos. I plan to extract I-frames from the binary stream by simulating the working principle of the decoder, and add the PPS and SPS information of the original video to form the H264 raw information, which is then handed over to ffmpeg to generate images. I have almost solved many problems, and even wrote a demo to implement my function, but I can't find any information about where there is an identifier when multiple NALUs form one frame (strictly speaking, there is a little, but it can't solve my problem, I will talk about it later). You can use the following command to generate the type of video I mentioned: ffmpeg -i input.mp4 -c:v libx264 -x264-params slices=8 output.mp4 This will generate a video with 8 slices per frame. Since I will use this file later, I will also generate the H264 raw information file with the following command: ffmpeg -i output.mp4 -vcodec copy -an output.h264 When I put it into the analysis program, I can see multiple IDR NALUs connected together, where the first_mb_in_slice in the Slice Header of the non-first IDR NALU is not 0: But when I go back to the mdat in MP4 and look at the NALU, all the first_mb_in_slice become 0: 0x9a= 1001 1010, according to the exponential Golomb coding, first_mb_in_slice == 0( ueg(1B) == 0 ), slice_type == P frame (ueg(00110B) == 5), but using the same algorithm in the H264 raw file, the result is the same as the program gives. Is there any other place where there is an identifier for this information? Assuming I randomly get a NALU, can I know if this video is sliced or not, or is my operation wrong? PS: Putting only one NALU into the decoder is feasible, but only 1/8 of the image can be parsed If you need a reference, the address of the demo program I wrote is: https://github.com/gaowanliang/web-video-thumbnailer
" I plan to extract I-frames" Make sure you go for IDR keyframes (not I-frame keyframes) since IDR bytes can decode into a complete image. Some I-frames can actually need other P/B frames to make a complete image. "I can't find any information about where there is an identifier when multiple NALUs form one frame" Handling at MP4 level: (1) Using SEI: (NALU type 6, usually as byte 0x06) solution: Find text slices= in the SEI text. The MP4 might contain SEI bytes, which will be in front of the bytes of the first video frame. SEI is text data, and if using libx264 as encoder then, it includes a "slices=" entry. An example of SEI text in a libx264 encode: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex ... (other texts) sliced_threads=0 slices=8 nr=0 decimate=1 ... (other texts) constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 ... (other texts) Usually IDR is one big slice (ie: Really libx264? We slice IDR frames now? For what benefit though?). Using -x264-params slices=8 will overwrite the 1-slice default. As you can see there is now a "slices=8" text entry which tells us to find: 8 NAL units per frame (including even if IDR). (2) Using STSS and STSZ: solution: A size (or bytes-length) listed in STSZ will include all NALU slices per frame. "stss" == bytes (hex) 73 74 73 73 == integer 0x73747373. "stsz" == bytes (hex) 73 74 73 7A == integer 0x7374737A. The MP4's video track will have a Sample Table. It has an STSS section for listing all IDR keyframes. Then it has an STSZ section for listing the bytes-length of all frames. Using these two sections of the MP4 header, you can find out which frame numbers that are representing IDR, then check the size by matching the STSZ entry's number to the related frame number. Extract a keyframe by the shown size in STSZ to get a full frame (with all NALU slices). Handling at H.264 level: (1) Using SEI: (NALU type 6, usually as byte 0x06) solution: Find text slices= in the SEI text. The same process as with MP4 (as explained above). (2) Using first_mb_in_slice: solution: The value of first_mb_in_slice is 1 for first slice and 0 for all other slices of the frame. You get first_mb_in_slice by checking the first bit of the next byte after NALU header 0x65. In the case of 8 slices (per frame), when you find an IDR frame, the first_mb_in_slice will be a 1 for the first slice, then following is another 7 IDR units each with a first_mb_in_slice of 0. You will know that you have enough NALUs for one frame when: The first_mb_in_slice of the next IDR becomes 1 again (it means this is now a different IDR). Or when you get the NALU header type of a P/B frame.
2
2
77,481,878
2023-11-14
https://stackoverflow.com/questions/77481878/is-there-any-relation-between-classes-which-use-the-same-type-variable
The typing.TypeVar class allows one to specify reusable type variables. With Python 3.12 / PEP 695, one can define a class A/B with type variable T like this: class A[T]: ... class B[T]: ... Beforehand, with Python 3.11, you would do it like this: from typing import TypeVar, Generic T = TypeVar("T") class A(Generic[T]): ... class B(Generic[T]): ... For the first example, the T is defined in the class scope, so they do not relate to each other. Is there any difference to the second 'old' example? Or: Is there any connection between the two classes A and B?
There is no such connection from the typechecker's POV. TypeVar is declared outside of class scope just because it's convenient to do so, it does not imply any relationships between its users. Type variable is bound in the following scopes: Class scope - if a class inherits from Generic (or parametrized Protocol, or other generic class), type variable that parametrizes its parents is bound to class scope. All occurrences of this type variable are resolved to the same type in same context. Function/method scope - if a function includes a type variable as part of its parameter(s) or return type, this type variable resolves in function body to the same type as in signature. No other binding is taking place. This is explained in detail in PEP 484 (link to relevant section). However, there may be clean semantic relationship (and it was one of my arguments against PEP695 - a battle I, unfortunately, lost). Consider django-stubs (permalink to the declaration): # __set__ value type _ST = TypeVar("_ST", contravariant=True) # __get__ return type _GT = TypeVar("_GT", covariant=True) class Field(RegisterLookupMixin, Generic[_ST, _GT]): ... # omitted (there are 150+ lines here in fact) class IntegerField(Field[_ST, _GT]): ... class PositiveIntegerRelDbTypeMixin: ... class SmallIntegerField(IntegerField[_ST, _GT]): ... class BigIntegerField(IntegerField[_ST, _GT]): ... class PositiveIntegerField(PositiveIntegerRelDbTypeMixin, IntegerField[_ST, _GT]): ... class PositiveSmallIntegerField(PositiveIntegerRelDbTypeMixin, SmallIntegerField[_ST, _GT]): ... class PositiveBigIntegerField(PositiveIntegerRelDbTypeMixin, BigIntegerField[_ST, _GT]): ... These fields do not have to resolve to the same type variables, obviously. However, _ST and _GT have a semantic meaning, which is same for all of them. This does not influence type checking, but helps those who read this code. If _GT and _ST were defined for each class separately, then, in addition to unreadable and unpleasantly looking syntax, we'd have to either repeat comments near every class or document that once in module docstring and add one extra lookup step to demystify their meaning. Using longer forms like _SetterType and _GetterType would remove need for explanation, but also make the already long-ish signatures longer. Additionally, explanation of how Field typing works is still necessary (there are other caveats). Now it's explained in a docstring that comes right after typevar declaration, but with PEP695 style it'd be written far away.
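A small illustration of the "no connection" point: reusing one module-level TypeVar in two classes does not tie their parameters together; binding only happens within a single generic scope such as a function signature.

from typing import Generic, TypeVar

T = TypeVar("T")


class A(Generic[T]):
    def __init__(self, value: T) -> None:
        self.value = value


class B(Generic[T]):
    def __init__(self, value: T) -> None:
        self.value = value


a: A[int] = A(1)    # fine
b: B[str] = B("x")  # also fine -- sharing T above creates no link between A and B


def same(x: A[T], y: B[T]) -> T:
    # Here T is bound to the function scope, so both arguments must agree.
    return x.value


same(A(1), B(2))      # OK: T = int
# same(A(1), B("x"))  # a type checker rejects this: T cannot be int and str at once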
4
2
77,484,060
2023-11-14
https://stackoverflow.com/questions/77484060/efficient-iteration-application-of-a-function-in-pandas-polars-or-torch-is-l
Goal: Find an efficient/fastest way to iterate over a table by column and run a function on each column, in python or with a python library. Background: I have been exploring methods to improve the speed of my functions. This is because I have two models/algorithms that I want to run one small, one large (uses torch) and the large is slow. I have been using the small one for testing. The small model is seasonal decomposition of each column. Setup: Testing environment: ec2, t2 large. X86_64 Python version: 3.11.5 Polars: 0.19.13 pandas: 2.1.1 numpy: 1.26.0 demo data in pandas/polars: rows = 11020 columns = 1578 data = np.random.rand(rows, columns) df = pd.DataFrame(data) # df_p = pl.from_pandas(df) # convert if needed. Pandas pandas and dict: from statsmodels.tsa.seasonal import seasonal_decompose import pandas as pd class pdDictTrendExtractor: def __init__(self, period: int = 365) -> None: self._period = period self._model = 'Additive' def process_col(self, column_data: pd.Series = None) -> torch.Tensor: self.data = column_data result = seasonal_decompose(self.data, model=self._model, period=self._period) trend = result.trend.fillna(0).values return trend @classmethod def process_df(cls, dataframe: pd.DataFrame) -> pd.DataFrame: trend_data_dict = {} for column in dataframe.columns: trend_data_dict[column] = cls().process_col(dataframe[column]) trend_dataframes = pd.DataFrame(trend_data_dict, index=dataframe.index) return trend_dataframes import timeit start = timeit.default_timer() trend_tensor = pdDictTrendExtractor.process_df(df) stop = timeit.default_timer() execution_time = stop - start print("Program Executed in "+str(execution_time)) Program Executed in 14.349091062998923 with list comprehension instead of for loop: class pdDict2TrendExtractor: def __init__(self, period: int = 365) -> None: self._period = period self._model = 'Additive' def process_col(self, column_data: pd.Series = None) -> pd.Series: self.data = column_data result = seasonal_decompose(self.data, model=self._model, period=self._period) trend = result.trend.fillna(0).values return trend @classmethod def process_df(cls, dataframe: pd.DataFrame) -> pd.DataFrame: trend_data_dict = {column: cls().process_col(dataframe[column]) for column in dataframe.columns} trend_dataframes = pd.DataFrame(trend_data_dict, index=dataframe.index) return trend_dataframes Program Executed in 14.343959668000025 Class using pandas and torch: from statsmodels.tsa.seasonal import seasonal_decompose import torch import pandas as pd class pdTrendExtractor: def __init__(self, period: int = 365) -> None: self._period = period self._model = 'Additive' # Store data as an instance variable def process_col(self, column_data: pd.Series = None) -> torch.Tensor: self.data = column_data result = seasonal_decompose(self.data, model=self._model, period=self._period) trend = result.trend.fillna(0).values return torch.tensor(trend, dtype=torch.float32).view(-1, 1) @classmethod def process_df(cls, dataframe: pd.DataFrame) -> torch.Tensor: trend_dataframes = torch.Tensor() for column in dataframe.columns: trend_data = cls().process_col(dataframe[column]) trend_dataframes = torch.cat((trend_dataframes, trend_data), dim=1) return trend_dataframes start = timeit.default_timer() trend_tensor = pdTrendExtractor.process_df(df_p) stop = timeit.default_timer() execution_time = stop - start print("Program Executed in "+str(execution_time)) Program Executed in 23.14214362200073 with dict, multiprocessing & list comprehension: As suggested by @roganjosh & @jqurious below. 
from multiprocessing import Pool class pdMTrendExtractor: def __init__(self, period: int = 365) -> None: self._period = period self._model = 'Additive' def process_col(self, column_data: pd.Series = None) -> pd.Series: result = seasonal_decompose(column_data, model=self._model, period=self._period) trend = result.trend.fillna(0).values return trend @classmethod def process_df(cls, dataframe: pd.DataFrame) -> pd.DataFrame: with Pool() as pool: trend_data_dict = dict(zip(dataframe.columns, pool.map(cls().process_col, [dataframe[column] for column in dataframe.columns]))) return pd.DataFrame(trend_data_dict, index=dataframe.index) Program Executed in 4.582350738997775, Nice and fast. Polars Polars & torch: class plTorTrendExtractor: def __init__(self, period: int = 365) -> None: self._period = period self._model = 'Additive' # Store data as an instance variable def process_col(self, column_data: pl.Series = None) -> torch.Tensor: self.data = column_data result = seasonal_decompose(self.data, model=self._model, period=self._period) trend = result.trend[np.isnan(result.trend)] = 0 return torch.tensor(trend, dtype=torch.float32).view(-1, 1) @classmethod def process_df(cls, dataframe: pl.DataFrame) -> torch.Tensor: trend_dataframes = torch.Tensor() for column in dataframe.columns: trend_data = cls().process_col(dataframe[column]) trend_dataframes = torch.cat((trend_dataframes, trend_data), dim=1) return trend_dataframes Program Executed in 13.813817326999924 polars & lamdba: start = timeit.default_timer() df_p = df_p.select([ pl.all().map_batches(lambda x: pl.Series(seasonal_decompose(x, model="Additive", period=365).trend)).fill_nan(0) ] ) stop = timeit.default_timer() execution_time = stop - start print("Program Executed in "+str(execution_time)) Program Executed in 82.5596211330012 I suspect this is written poorly & the reason it is so slow. I have yet find a better method. So far I have tried, apply_many, apply, map, map_batches or map_elements.. with_columns vs select and a few other combinations. polars only, for loop: class plTrendExtractor: def __init__(self, period: int = 365) -> None: self._period = period self._model = 'Additive' # Store data as an instance variable def process_col(self, column_data: pl.Series = None) -> pl.DataFrame: self.data = column_data result = seasonal_decompose(self.data, model=self._model, period=self._period) # Handle missing values by replacing NaN with 0 result.trend[np.isnan(result.trend)] = 0 return pl.DataFrame({column_data.name: result.trend}) @classmethod def process_df(cls, dataframe: pl.DataFrame) -> pl.DataFrame: trend_dataframes = pl.DataFrame() for column in dataframe.columns: trend_data = cls().process_col(dataframe[column]) trend_dataframes = trend_dataframes.hstack(trend_data) return trend_dataframes Program Executed in 13.34212675299932 with list comprehensions: I tried with polars and list comprehension. But having difficulty with polars syntax. 
with a dict & for loop: Program Executed in 13.743039597999996 with dict & list comprehension: class plDict2TrendExtractor: def __init__(self, period: int = 365) -> None: self._period = period self._model = 'Additive' def process_col(self, column_data: pl.Series = None) -> pl.Series: self.data = column_data result = seasonal_decompose(self.data, model=self._model, period=self._period) result.trend[np.isnan(result.trend)] = 0 return pl.Series(result.trend) @classmethod def process_df(cls, dataframe: pl.DataFrame) -> pl.DataFrame: trend_data_dict = {column: cls().process_col(dataframe[column]) for column in dataframe.columns} trend_dataframes = pl.DataFrame(trend_data_dict) return trend_dataframes Program Executed in 13.008102383002552 with dict, multiprocessing & list comprehension: As suggested by @roganjosh & @jqurious below. from multiprocessing import Pool class plMTrendExtractor: def __init__(self, period: int = 365) -> None: self._period = period self._model = 'Additive' def process_col(self, column_data: pl.Series = None) -> pl.Series: result = seasonal_decompose(column_data, model=self._model, period=self._period) result.trend[np.isnan(result.trend)] = 0 return pl.Series(result.trend) @classmethod def process_df(cls, dataframe: pl.DataFrame) -> pl.DataFrame: with Pool() as pool: trend_data_dict = dict(zip(dataframe.columns, pool.map(cls().process_col, [dataframe[column] for column in dataframe.columns]))) return pl.DataFrame(trend_data_dict) Program Executed in 4.997288776001369, Nice!. With lazyFrame? I can add lazy & collect to the df_p.select() method above but doing this does not improve the time. One of the key issues seems to be that the function that is passed to lazy operations needs to be lazy too. I was hoping that it might run each column in parallel. current conclusions & notes I am getting a second to half a second of variation for some of the runs. Pandas and dict, seems to be reasonable. If you care about the index, then this can be a good option. Polars with dict and list comprehension are the "fastest". But not by much. Considering the variation even smaller diff. both options also have the benefit of not needing additional packages. There seems to be room for improvement in polars. In terms of better code, but not sure if this would improve time much. As the main, compute time is seasonal_decompose. Which takes ~0.012 seconds per column, if run alone. open to any feedback on improvements warning: i haven't done full output validation yet on the functions above. how the variable is returned from process_col does have minor impacts on speed. As expected, and part of what I was tuning here. For example, with polars if I returned numpy array I got slower time. If I returned a numpy array, but declare -> pl.series this seems about the same speed, with one or two trials being faster (then above). after feedback/added multiprocessing surprise surprise, multiprocessing for the win. This seems to be regardless of pandas or polars.
With regards to Polars, using .select() and .map_batches() in this type of situation is kind of an "anti-pattern". You are putting all of the data through Polars expression engine, to pass it back out to Python to run your external function, to pass it back into Polars again. You can bypass that and simply pass each Series directly to seasonal_decompose() (similar to how you loop through each column in the Pandas approach): pl.DataFrame({ col.name: seasonal_decompose(col, model="Additive", period=365).trend for col in df_p }) One thing I did notice though is that if you create a LazyFrame from each column and use pl.collect_all() it speeds up the .map_batches() approach by ~50%. (Perhaps this could be investigated.) (Although still slightly slower than the comprehension.) lf = df_p.lazy() lazy_columns = [ lf.select(pl.col(col).map_batches( lambda x: pl.Series(seasonal_decompose(x, model="Additive", period=365).trend)) ) for col in lf.columns ] out = pl.concat(pl.collect_all(lazy_columns), how="horizontal") Essentially the question becomes "How can I parallelize a Python for loop?" Which as @roganjosh pointed out is done with multiprocessing. from multiprocessing import get_context ... if __name__ == "__main__": df_p = ... with get_context("spawn").Pool() as pool: columns = pool.map(process_column, (col for col in df_p)) Out of interest, the example runs ~50% faster for me with multiprocessing versus the regular comprehension. But it's very task/data/platform-specific so you would have benchmark it locally.
2
2
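A minimal sketch of the multiprocessing route from the accepted answer above. The process_column worker is an assumption (the answer calls it but never defines it); it simply wraps seasonal_decompose for one Series and returns the trend with the leading/trailing NaNs replaced by 0, and the toy data exists only to make the sketch runnable.
import numpy as np
import polars as pl
from multiprocessing import get_context
from statsmodels.tsa.seasonal import seasonal_decompose

def process_column(column: pl.Series) -> pl.Series:
    # Decompose a single column and return its trend component, NaN-free.
    result = seasonal_decompose(column.to_numpy(), model="Additive", period=365)
    return pl.Series(column.name, np.nan_to_num(result.trend))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    df_p = pl.DataFrame({f"col{i}": rng.normal(size=3 * 365) for i in range(4)})
    # Spawned workers each decompose one column; results come back in column order.
    with get_context("spawn").Pool() as pool:
        columns = pool.map(process_column, [df_p[c] for c in df_p.columns])
    out = pl.DataFrame(columns)
    print(out.shape)  # (1095, 4)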
77,470,588
2023-11-12
https://stackoverflow.com/questions/77470588/backtesting-py-backtest-statistics-only-shows-nans-and-0
I am trying to backtest a momentum strategy using Backtesting.py. I've gathered the data and computed indicator values using pandas_ta. I've defined short and long trading conditions. Now I just need Backtesting.py to run a backtest so that I can determine the performance of my strategy on historical data. I am expecting stats from bt.run() to show me values instead of nan and 0. I am working using this notebook: https://github.com/kbs-code/algo_trader/blob/master/backtests/backtestingdotpy_SPY_weekly_vortex.ipynb EDIT: The name of this notebook is now https://github.com/kbs-code/algo_trader/blob/master/backtests/debugging_example_backtestingdotpy.ipynb Thanks
The backtester must use "self.data.foo" and not "self.foo" in order to access the values of any column. Please see the updated notebook for the troubleshooting proof.
3
1
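A minimal sketch of the fix described in the answer above, using the GOOG sample data bundled with Backtesting.py. The VTXP column and the 1.0 threshold are hypothetical stand-ins for the vortex indicator in the notebook; the point is only that custom columns are read through self.data.<column>, never self.<column>.
from backtesting import Backtest, Strategy
from backtesting.test import GOOG  # sample OHLC data shipped with the library

df = GOOG.copy()
# Hypothetical precomputed indicator column (stands in for the pandas_ta vortex values).
df["VTXP"] = (df["Close"].pct_change().rolling(14).mean().fillna(0) * 100) + 1

class Momentum(Strategy):
    def init(self):
        pass

    def next(self):
        # Custom columns are read via self.data.<column>, never self.<column>.
        if self.data.VTXP[-1] > 1.0 and not self.position:
            self.buy()
        elif self.data.VTXP[-1] < 1.0 and self.position:
            self.position.close()

bt = Backtest(df, Momentum, cash=10_000)
print(bt.run())  # stats now show real numbers instead of NaN and 0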
77,484,086
2023-11-14
https://stackoverflow.com/questions/77484086/why-is-my-nbody-simulator-not-printing-out-orbital-times-past-3-bodies
Here is my code for simulating planetary orbits. When I have the bodies list set up with only the Earth, Sun and Jupiter, my code works well and prints out a reasonably accurate time for Jupiter and Earth's orbit. However, when I add Saturn into the bodies list, I get a value of 43200 seconds for both Jupiter and Saturn's orbit. What's strange though is that Earth's orbit is printing out accurately even with Saturn in the list of bodies. In addition to this, the plot I'm getting is correct, as if I run the simulation for, say, 28 years, Saturn shows as not having a completed orbit. If I run the simulation for 29.5 years, Saturn just about completes its orbit, which is accurate. So the gravitational calculation is still working. Perhaps the index is failing somehow when another body is added? But I'm not sure why. It seems the line simulation.run(30*u.year,6*u.hr) is causing this. Whenever I change the time-step to around 24*u.hr, it prints out accurate adjacent times for only Jupiter, but not accurate enough. When I change the time-step to 10 days, it prints out a value of 10600 for Saturn's orbital period. This isn't accurate enough however. import numpy as np import matplotlib.pyplot as plt import astropy.units as u import astropy.constants as c import sys import time from mpl_toolkits.mplot3d import Axes3D #making a class for Celestial Objects class CelestialObjects(): def __init__(self,mass,pos_vec,vel_vec,name=None, has_units=True): self.name=name self.has_units=has_units if self.has_units: self.mass=mass.cgs self.pos=pos_vec.cgs.value self.vel=vel_vec.cgs.value else: self.mass=mass #3d array for position of body in 3d space in AU self.pos=pos_vec #3d array for velocity of body in 3d space in km/s self.vel=vel_vec def return_vec(self): return np.concatenate((self.pos,self.vel)) def return_name(self): return self.name def return_mass(self): if self.has_units: return self.mass.cgs.value else: return self.mass v_earth=(((c.G*1.98892E30)/1.495978707E11)**0.5)/1000 v_jupiter=(((c.G*1.98892E30)/7.779089276E11)**0.5)/1000 v_saturn=(((c.G*1.98892E30)/1.421179772E12)**0.5)/1000 #set up first instance of a celestial object, Earth Earth=CelestialObjects(name='Earth', pos_vec=np.array([0,1,0])*u.AU, vel_vec=np.array([v_earth.value,0,0])*u.km/u.s, mass=1.0*c.M_earth) #set up second instance of a celestial object, the Sun Sun=CelestialObjects(name='Sun', pos_vec=np.array([0,0,0])*u.AU, vel_vec=np.array([0,0,0])*u.km/u.s, mass=1*u.Msun) Jupiter=CelestialObjects(name='Jupiter', pos_vec=np.array([0,5.2,0])*u.AU, vel_vec=np.array([v_jupiter.value,0,0])*u.km/u.s, mass=317.8*c.M_earth) Saturn=CelestialObjects(name='Saturn', pos_vec=np.array([0,9.5,0])*u.AU, vel_vec=np.array([v_saturn.value,0,0])*u.km/u.s, mass=95.16*c.M_earth) bodies=[Earth,Sun,Jupiter,Saturn] #making a class for system class Simulation(): def __init__(self,bodies,has_units=True): self.has_units=has_units self.bodies=bodies self.Nbodies=len(self.bodies) self.Ndim=6 self.quant_vec=np.concatenate(np.array([i.return_vec() for i in self.bodies])) self.mass_vec=np.array([i.return_mass() for i in self.bodies]) self.name_vec=[i.return_name() for i in self.bodies] def set_diff_eqs(self,calc_diff_eqs,**kwargs): self.diff_eqs_kwargs=kwargs self.calc_diff_eqs=calc_diff_eqs def rk4(self,t,dt): k1= dt* self.calc_diff_eqs(t,self.quant_vec,self.mass_vec,**self.diff_eqs_kwargs) k2=dt*self.calc_diff_eqs(t+dt*0.5,self.quant_vec+0.5*k1,self.mass_vec,**self.diff_eqs_kwargs) 
k3=dt*self.calc_diff_eqs(t+dt*0.5,self.quant_vec+0.5*k2,self.mass_vec,**self.diff_eqs_kwargs) k4=dt*self.calc_diff_eqs(t+dt,self.quant_vec+k3,self.mass_vec,**self.diff_eqs_kwargs) y_new=self.quant_vec+((k1+2*k2+2*k3+k4)/6) return y_new def run(self,T,dt,t0=0): if not hasattr(self,'calc_diff_eqs'): raise AttributeError('You must set a diff eq solver first.') if self.has_units: try: _=t0.unit except: t0=(t0*T.unit).cgs.value T=T.cgs.value dt=dt.cgs.value self.history=[self.quant_vec] clock_time=t0 nsteps=int((T-t0)/dt) start_time=time.time() orbit_completed=False orbit_start_time=clock_time initial_position=self.quant_vec[:3] min_distance=float('inf') min_distance_time=0 count=0 for step in range(nsteps): sys.stdout.flush() sys.stdout.write('Integrating: step = {}/{}| Simulation Time = {}'.format(step,nsteps,round(clock_time,3))+'\r') y_new=self.rk4(0,dt) self.history.append(y_new) self.quant_vec=y_new clock_time+=dt #checking if planet has completed an orbit current_position=self.quant_vec[:3] #**THIS IS WHERE ORBIT TIME CALCULATED** To explain, this is earths position, the first three elements of the vector, the next three elements are its velocity, and then the next three are the suns position vectors and so on. Saturns index would be self.quant_vec[18:21]. distance_to_initial=np.linalg.norm(current_position-initial_position) if distance_to_initial<min_distance and orbit_completed is False: min_distance=distance_to_initial min_distance_time=clock_time if count==1: orbit_completed=True count+=1 runtime=time.time()-start_time print(clock_time) print('\n') print('Simulation completed in {} seconds'.format(runtime)) print(f'Minimum distance reached at time: {min_distance_time:.3f} seconds. Minimum distance: {min_distance:.3e}') self.history=np.array(self.history) def nbody_solver(t,y,masses): N_bodies=int(len(y)/6) solved_vector=np.zeros(y.size) distance=[] for i in range(N_bodies): ioffset=i * 6 for j in range(N_bodies): joffset=j * 6 solved_vector[ioffset]=y[ioffset+3] solved_vector[ioffset+1]=y[ioffset+4] solved_vector[ioffset+2]=y[ioffset+5] if i != j: dx= y[ioffset]-y[joffset] dy=y[ioffset+1]-y[joffset+1] dz=y[ioffset+2]-y[joffset+2] r=(dx**2+dy**2+dz**2)**0.5 ax=(-c.G.cgs*masses[j]/r**3)*dx ay=(-c.G.cgs*masses[j]/r**3)*dy az=(-c.G.cgs*masses[j]/r**3)*dz ax=ax.value ay=ay.value az=az.value solved_vector[ioffset+3]+=ax solved_vector[ioffset+4]+=ay solved_vector[ioffset+5]+=az return solved_vector simulation=Simulation(bodies) simulation.set_diff_eqs(nbody_solver) simulation.run(30*u.year,12*u.hr) earth_position = simulation.history[:, :3] # Extracting position for Earth sun_position = simulation.history[:, 6:9] # Extracting position for Sun jupiter_position=simulation.history[:, 12:15] #Jupiter position saturn_position=simulation.history[:, 18:21] #Saturn position fig = plt.figure() ax = fig.add_subplot(111, projection='3d') # Plot the trajectories ax.plot(earth_position[:, 0], earth_position[:, 1], earth_position[:, 2], label='Earth') ax.plot(sun_position[:, 0], sun_position[:, 1], sun_position[:, 2], label='Sun') ax.plot(jupiter_position[:, 0], jupiter_position[:, 1], jupiter_position[:, 2], label='Jupiter') ax.plot(saturn_position[:, 0], saturn_position[:, 1], saturn_position[:, 2], label='Saturn') # Add labels and title ax.set_xlabel('X (AU)') ax.set_ylabel('Y (AU)') ax.set_zlabel('Z (AU)') ax.set_title('Trajectories of Earth and Jupiter Around the Sun') ax.scatter([0], [0], [0], marker='o', color='yellow', s=200, label='Sun') # Marking the Sun at the origin 
ax.scatter(earth_position[0, 0], earth_position[0, 1], earth_position[0, 2], marker='o', color='blue', s=50, label='Earth') # Marking the initial position of Earth ax.scatter(jupiter_position[0, 0], jupiter_position[0, 1], jupiter_position[0, 2], marker='o', color='green', s=100, label='Jupiter') # Marking the initial position of Earth ax.scatter(saturn_position[0, 0], saturn_position[0, 1], saturn_position[0, 2], marker='o', color='red', s=80, label='Saturn') # Marking the initial position of Earth # Add a legend ax.legend() # Show the plot plt.show()
The reason for this behaviour is that the if distance_to_initial<min_distance condition for the time logging is not triggered correctly in this case. The specific cause of this is the "min_distance", which might get "hopped over" in certain circumstances (e.g. due to increasing step distance), and then only the first min_distance_time value is used, without ever "completing the orbit". To fix this, instead of using a min_distance, i would instead check if the initial position is in between the previous and the current position. Here's a run method which implements this approach: def run(self, T, dt, t0=0): if not hasattr(self, "calc_diff_eqs"): raise AttributeError("You must set a diff eq solver first.") if self.has_units: try: _ = t0.unit except: t0 = (t0 * T.unit).cgs.value T = T.cgs.value dt = dt.cgs.value self.history = [self.quant_vec] clock_time = t0 nsteps = int((T - t0) / dt) start_time = time.time() initial_position = self.quant_vec[18:21] min_distance = None min_distance_time = None for step in range(nsteps): sys.stdout.flush() sys.stdout.write( "Integrating: step = {}/{}| Simulation Time = {}".format( step, nsteps, round(clock_time, 3) ) + "\r" ) y_new = self.rk4(0, dt) y_old = self.quant_vec self.history.append(y_new) self.quant_vec = y_new clock_time += dt current_position = self.quant_vec[18:21] last_position = y_old[18:21] if step == 0 or min_distance_time is not None: # do not check orbit criteria for initial step # or if orbit already completed continue # checking if planet has completed an orbit, i.e. if origin is between current and last position last_to_initial = np.linalg.norm(last_position - initial_position) current_to_initial = np.linalg.norm(current_position - initial_position) current_to_last = np.linalg.norm(current_position - last_position) has_passed_initial_position = ( last_to_initial <= current_to_last and current_to_initial <= current_to_last ) if has_passed_initial_position: min_distance = current_to_initial min_distance_time = clock_time runtime = time.time() - start_time print("\n") print("Simulation completed in {} seconds".format(runtime)) print( f"Minimum distance reached at time: {min_distance_time:.3f} seconds. Minimum distance: {min_distance:.3e}" ) self.history = np.array(self.history)
2
1
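A small follow-up check, assuming min_distance_time is the seconds value printed by the fixed run method (the simulation works in cgs units): converting it to years makes it easy to compare against the known periods (about 11.9 yr for Jupiter, about 29.5 yr for Saturn).
import astropy.units as u

min_distance_time = 9.3e8          # placeholder: the value printed by run(), in seconds
print((min_distance_time * u.s).to(u.year))  # ~29.5 yr expected for Saturn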
77,483,917
2023-11-14
https://stackoverflow.com/questions/77483917/scopes-confusion-using-smtp-to-send-email-using-my-gmail-account-with-xoauth2
My application has an existing module I use for sending emails that accesses the SMTP server and authorizes using a user (email address) and password. Now I am trying to use Gmail to do the same using my Gmail account, which, for the sake of argument, we say is [email protected] (it's actually something different). First, I created a Gmail application. On the consent screen, which was a bit confusing, I started to add scopes that were either "sensitive" or "restricted". If I wanted to make the application "production" I was told that it had to go through a verification process and I had to produce certain documentation. This was not for me as I, the owner of this account, am only trying to connect to it for the sake of sending emails programmatically. I them created credentials for a desktop application and downloaded it to file credentials.json. Next I acquired an access token with the following code: from google_auth_oauthlib.flow import InstalledAppFlow SCOPES = ['https://mail.google.com/'] def get_initial_credentials(*, token_path, credentials_path): flow = InstalledAppFlow.from_client_secrets_file(credentials_path, SCOPES) creds = flow.run_local_server(port=0) with open(token_path, 'w') as f: f.write(creds.to_json()) if __name__ == '__main__': get_initial_credentials(token_path='token.json', credentials_path='credentials.json') A browser window opens up saying that this is not a verified application and I am given a chance to go "back to safety" but I click on the Advanced link and eventually get my token. I then try to send an email with the following code: import smtplib from email.mime.text import MIMEText import base64 import json from google.auth.transport.requests import Request from google.oauth2.credentials import Credentials from google_auth_oauthlib.flow import InstalledAppFlow SCOPES = ['https://www.googleapis.com/auth/gmail.send'] def get_credentials(token_path): with open(token_path) as f: creds = Credentials.from_authorized_user_info(json.load(f), SCOPES) if not creds.valid: creds.refresh(Request()) with open(token_path, 'w') as f: f.write(creds.to_json()) return creds def generate_OAuth2_string(access_token): auth_string = f'user=booboo\1auth=Bearer {access_token}\1\1' return base64.b64encode(auth_string.encode('utf-8')).decode('ascii') message = MIMEText('I need lots of help!', "plain") message["From"] = '[email protected]' message["To"] = '[email protected]' message["Subject"] = 'Help needed with Gmail' creds = get_credentials('token.json') xoauth_string = generate_OAuth2_string(creds.token) with smtplib.SMTP('smtp.gmail.com', 587) as conn: conn.starttls() conn.docmd('AUTH', 'XOAUTH2 ' + xoauth_string) conn.sendmail('booboo', ['[email protected]'], message.as_string()) This works but note that I used a different scope https://www.googleapis.com/auth/gmail.send instead of the https://mail.google.com/ I used to obtain the initial access token. I then edited the application to add the scope https://www.googleapis.com/auth/gmail.send. That required me to put the application in testing mode. I did not understand the section to add "test users", that is I had no idea what I could have/should have entered here. I then generated new credentials and a new token as above. Then when I go to send my email, I see (debugging turned on): ... reply: b'535-5.7.8 Username and Password not accepted. 
Learn more at\r\n' reply: b'535 5.7.8 https://support.google.com/mail/?p=BadCredentials l19-20020ac84a93000000b0041b016faf7esm2950068qtq.58 - gsmtp\r\n' reply: retcode (535); Msg: b'5.7.8 Username and Password not accepted. Learn more at\n5.7.8 https://support.google.com/mail/?p=BadCredentials l19-20020ac84a93000000b0041b016faf7esm2950068qtq.58 - gsmtp' ... But I never sent up a password, but rather the XOAUTH2 authorization string. I don't know whether this occurred because I hadn't added test users. For what it's worth, I do not believe that this new token had expired yet and therefore it was not refreshed. I didn't try it, but had I made the application "production", would it have worked? Again, I don't want to have to go through a whole verification process with Gmail. Unfortunately, I don't have a specific question other than I would like to define an application with the more restricted scope and use that, but it seems impossible without going through this verification. Any suggestions?
Okay first off as this is going to be a single user app. You the developer will be the only one using it, and your just sending emails programticlly lets clear a few things up to begin with. You do not need to verify this app. Yes you will need to just by pass that not a verified application screen as you have done. No worries. Setting the application in Production by clicking the send to production button. Will enable you to request refresh tokens that will not expire. You Want this. Again ignore the screen that says you will need to verify your app you don't need to. Test users, as long as you only login with the user you created the project with you dont need test users. ignore it. Its just you using the app use https://mail.google.com/ scope Don't worry about adding scopes to the google cloud project its just for verification. what matters is what is in your code. Okay all clear on that. You now have two options. XOauth2 which is what you are doing now. Apps password. Just create an apps password on your google account and use that in place of your actual google password and your existing code will work. How I send emails with Python. In 2.2 minutes flat! Xoauth2 If you want to use XOauth2 then you can use the Google api python client library to help you grab the authorization token you need # To install the Google client library for Python, run the following command: # pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib from __future__ import print_function import base64 import os.path import smtplib from email.mime.text import MIMEText import google.auth.exceptions from google.auth.transport.requests import Request from google.oauth2.credentials import Credentials from google_auth_oauthlib.flow import InstalledAppFlow from googleapiclient.errors import HttpError # If modifying these scopes, delete the file token.json. SCOPES = ['https://mail.google.com/'] # usr token storage USER_TOKENS = 'token.json' # application credentials CREDENTIALS = 'C:\YouTube\dev\credentials.json' def getToken() -> str: """ Gets a valid Google access token with the mail scope permissions. """ creds = None # The file token.json stores the user's access and refresh tokens, and is # created automatically when the authorization flow completes for the first # time. if os.path.exists(USER_TOKENS): try: creds = Credentials.from_authorized_user_file(USER_TOKENS, SCOPES) creds.refresh(Request()) except google.auth.exceptions.RefreshError as error: # if refresh token fails, reset creds to none. creds = None print(f'An error occurred: {error}') # If there are no (valid) credentials available, let the user log in. if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( CREDENTIALS, SCOPES) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open(USER_TOKENS, 'w') as token: token.write(creds.to_json()) try: return creds.token except HttpError as error: # TODO(developer) - Handle errors from authorization request. print(f'An error occurred: {error}') def generate_oauth2_string(username, access_token, as_base64=False) -> str: # creating the authorization string needed by the auth server. 
#auth_string = 'user=%s\1auth=Bearer %s\1\1' % (username, access_token) auth_string = 'user=' + username + '\1auth=Bearer ' + access_token + '\1\1' if as_base64: auth_string = base64.b64encode(auth_string.encode('ascii')).decode('ascii') return auth_string def send_email(host, port, subject, msg, sender, recipients): access_token = getToken() auth_string = generate_oauth2_string(sender, access_token, as_base64=True) msg = MIMEText(msg) msg['Subject'] = subject msg['From'] = sender msg['To'] = ', '.join(recipients) server = smtplib.SMTP(host, port) server.starttls() server.docmd('AUTH', 'XOAUTH2 ' + auth_string) server.sendmail(sender, recipients, msg.as_string()) server.quit() def main(): host = "smtp.gmail.com" port = 587 user = "[email protected]" subject = "Test email Oauth2" msg = "Hello world" sender = user recipients = [user] send_email(host, port, subject, msg, sender, recipients) if __name__ == '__main__': main()
2
3
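For completeness, a minimal sketch of the app-password route mentioned in the answer above; the address and the 16-character app password are placeholders you would generate under Google Account > Security > App passwords.
import smtplib
from email.mime.text import MIMEText

SENDER = "[email protected]"                 # placeholder Gmail address
APP_PASSWORD = "abcdabcdabcdabcd"         # placeholder 16-character app password

msg = MIMEText("Hello from smtplib")
msg["Subject"] = "Test email"
msg["From"] = SENDER
msg["To"] = SENDER

with smtplib.SMTP("smtp.gmail.com", 587) as server:
    server.starttls()
    server.login(SENDER, APP_PASSWORD)    # the app password replaces the account password
    server.sendmail(SENDER, [SENDER], msg.as_string())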
77,484,264
2023-11-14
https://stackoverflow.com/questions/77484264/fields-not-initialized-when-post-init-called-using-ruamel-yaml
I have two dataclasses: Msg and Field. Msg has a field fields of type list[Field]. I want to assign something to a field of each Field after they have all been initialized which is more or less their relative index in the fields list. However, when I add a __post_init__(self) method to the Msg dataclass, the fields list is empty, so I can't update the indices. from dataclasses import dataclass from ruamel.yaml import YAML @dataclass class Msg: id: int desc: str fields: list[Field] def __post_init__(self) -> None: idx: int = 0 for field in self.fields: # why is this empty?? field.index = idx idx += field.size @dataclass class Field: id: int name: str units: str size: int index: int = -1 y = YAML() y.register_class(Msg) y.register_class(Field) msg: Msg = y.load("""\ !Msg id: 1 desc: status fields: - !Field id: 1 name: Temp units: degC size: 2 """) assert(msg.fields[0].index != -1) # fails :( Why is this? How is the Msg being initialized without fields being initialized? Is there any way to do what I am trying to do using the class system? I am using Python 3.11 with ruamel.yaml 0.18.5 on MacOS.
By default, object serializers such as YAML and pickle have no idea what to do with the attribute mapping for a user-defined object other than to assign the mapping directly to the object's attribute dictionary as-is. This is why you can define a __setstate__ method for your class, so that ruamel.yaml's object constructor knows in this case to call the __init__ method with the mapping unpacked as arguments, which in turn calls __post_init__ for post-initialization: @dataclass class Msg: id: int desc: str fields: list[Field] def __post_init__(self) -> None: idx: int = 0 for field in self.fields: field.index = idx idx += field.size def __setstate__(self, state): self.__init__(**state) Demo: https://replit.com/@blhsing1/PowerfulThoseJavadocs
5
2
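A standalone illustration of the mechanism described above, using a simpler dataclass so it runs on its own: with __setstate__ routing the loaded mapping through __init__, __post_init__ fires after the fields are populated.
from dataclasses import dataclass
from ruamel.yaml import YAML

@dataclass
class Point:
    x: int
    y: int
    norm1: int = -1

    def __post_init__(self) -> None:
        self.norm1 = abs(self.x) + abs(self.y)

    def __setstate__(self, state):
        # Route ruamel.yaml's attribute mapping through __init__ so __post_init__ runs.
        self.__init__(**state)

y = YAML()
y.register_class(Point)
p = y.load("!Point\nx: 3\ny: -4\n")
print(p)             # Point(x=3, y=-4, norm1=7)
assert p.norm1 == 7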
77,479,851
2023-11-14
https://stackoverflow.com/questions/77479851/torchaudio-cant-find-ffmpeg
Windows, vscode, Python 3.11.4-64bit import torch import torchaudio print(torch.__version__) print(torchaudio.__version__) print(torchaudio._extension._FFMPEG_INITIALIZED) 2.0.1+cu117 2.0.2+cu117 False and i try torchaudio._extension._init_ffmpeg() Traceback (most recent call last): File "C:\Users\USER\AppData\Local\Programs\Python\Python311\Lib\site-packages\torchaudio\_extension\utils.py", line 85, in _init_ffmpeg _load_lib("libtorchaudio_ffmpeg") File "C:\Users\USER\AppData\Local\Programs\Python\Python311\Lib\site-packages\torchaudio\_extension\utils.py", line 61, in _load_lib torch.ops.load_library(path) File "C:\Users\USER\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\_ops.py", line 643, in load_library ctypes.CDLL(path) File "C:\Users\USER\AppData\Local\Programs\Python\Python311\Lib\ctypes\__init__.py", line 376, in __init__ self._handle = _dlopen(self._name, mode) ^^^^^^^^^^^^^^^^^^^^^^^^^ FileNotFoundError: Could not find module 'C:\Users\USER\AppData\Local\Programs\Python\Python311\Lib\site-packages\torchaudio\lib\libtorchaudio_ffmpeg.pyd' (or one of its dependencies). Try using the full path with constructor syntax. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "c:\Users\USER\Desktop\MyChatBot\0.py", line 6, in <module> torchaudio._extension._init_ffmpeg() File "C:\Users\USER\AppData\Local\Programs\Python\Python311\Lib\site-packages\torchaudio\_extension\utils.py", line 87, in _init_ffmpeg raise ImportError("FFmpeg libraries are not found. Please install FFmpeg.") from err ImportError: FFmpeg libraries are not found. Please install FFmpeg. my libtorchaudio_ffmpeg.pyd exists and it's path is correct, but ctypes can't seem to find it. how should i do? I have also downloaded ffmpeg. C:\Users\USER>ffmpeg ffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers built with gcc 10.2.1 (GCC) 20200726 but still not working. how should i do?
You need to install the FFmpeg libraries, not the CLI. What the error message means is that the dependencies of libtorchaudio_ffmpeg.pyd are not found. The dependencies here are the libraries that make up FFmpeg, such as libavcodec and libavformat. Usually installing the ffmpeg CLI also installs the libraries, but I often see people use a statically built ffmpeg CLI on Windows due to its ease of installation. I cannot tell which ffmpeg CLI you have, but if your ffmpeg CLI is statically built, it will not work. As for how to install a dynamically built FFmpeg, I cannot advise. There seems to be no straightforward way to install a dynamically built FFmpeg on Windows unless you use Anaconda.
2
2
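If you are working in a conda environment, one commonly suggested route (an assumption on my part, not something verified in the answer above) is to install the shared FFmpeg build from conda-forge and then re-run the same check from the question:
# Assumed prerequisite, run in the active conda environment
# (the exact version pin depends on the torchaudio build):
#   conda install -c conda-forge 'ffmpeg<4.5'
import torchaudio

torchaudio._extension._init_ffmpeg()               # raises ImportError if the libs are still missing
print(torchaudio._extension._FFMPEG_INITIALIZED)   # expect True once the shared libs are found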
77,483,278
2023-11-14
https://stackoverflow.com/questions/77483278/adding-more-than-one-empty-row-between-pandas-groups
I want to add several empty rows between each groupby in my pandas dataframe. I know similar questions have been asked in the past but all of the answers I could find rely on the recently discontinued append function. I think I am close but I cannot get it to work. From what I've read, the idea is for the concat function to replace append so I have been trying to 1) Make my groups, 2) Make a blank dataframe with the correct columns and number of rows, then 3) Loop through the groups and concatenate them individually with the blank dataframe. This looked something like: Current df: column1 column2 column3 0 a 1 blue 1 b 2 blue 2 a 1 green 3 b 2 green 4 a 1 black 5 b 2 black What I expect: column1 column2 column3 0 a 1 blue 1 b 2 blue 0 1 2 3 4 2 a 1 green 3 b 2 green 0 1 2 3 4 4 a 1 black 5 b 2 black What I tried: # Create my groups by the desired column dfg = df.groupby("column3") # Create my blank df with the same columns as my main df and with the desired number of blank rows blank_df5 = pd.DataFrame(columns=['column1','column2','column3'],index=['0','1','2','3','4']) # Loop through and concatenate groups and the blank df for colors in dfg: pd.concat([colors, blank_df5], ignore_index=True) print(dfg) This returned: TypeError: cannot concatenate object of type '<class 'tuple'>'; only Series and DataFrame objs are valid I then tried making the groups into their own dfs and then looping through that like: dfg = df.groupby('column1') [dfg.get_group(x) for x in dfg.groups] blank_df5 = pd.DataFrame(columns=['column1','column2','column3'],index=['0','1','2','3','4']) for colors in dfg: pd.concat([colors, blank_df5], ignore_index=True) # I also tried [pd.concat([colors, blank_df5], ignore_index=True) for column3 in dfw] with the same result result was still: TypeError: cannot concatenate object of type '<class 'tuple'>'; only Series and DataFrame objs are valid Other things I've tried:** mask = df['column3'].ne(df['column3'].shift(-1)) df1 = pd.DataFrame('', index=mask.index[mask] + .5, columns=df.columns) dfg = pd.concat([df,df1]).sort_index().reset_index(drop=True).iloc[:-1] print(dfg) This works to add one empty row in-between the groups, but I can't get it to add more than that. dfg = (pd.concat([df, df.groupby('column3').apply(lambda x: x.shift(-1).iloc[-1]).reset_index()]) .sort_values('column3') .reset_index(drop=True)) print(dfg) This returns: ValueError: cannot insert column3, already exists dfg = df.groupby('column1') for colors in dfg: new_rows = 5 new_index = pd.RangeIndex(len(colors)*(new_rows+1)) dfg = pd.DataFrame(np.nan, index=new_index, columns=df.columns) ids = np.arange(len(colors))*(new_rows+1) dfg.loc[ids] = df.values print(dfg) This returns: ValueError: could not broadcast input array from shape (710,) into shape (2,) If I remove the loop and just run what is in the loop it adds the empty rows in-between each row of data. Hopefully this makes sense, thank you in advance for any help. If anyone is curious, the reason I need to do this is to dump it into excel, it's a company decision, not mine.
Following your 2nd approach : N = 5 grps = df.groupby("column3", sort=False) out = pd.concat( [ pd.concat([g, pd.DataFrame("", index=range(N), columns=df.columns)]) if i < len(grps)-1 else g for i, (_, g) in enumerate(grps) ] ) Output : print(out) column1 column2 column3 0 a 1 blue 1 b 2 blue 0 1 2 3 4 2 a 1 green 3 b 2 green 0 1 2 3 4 4 a 1 black 5 b 2 black [16 rows x 3 columns]
2
1
77,475,314
2023-11-13
https://stackoverflow.com/questions/77475314/overlaying-images-on-python
I have these three pictures from a SEM Microscope. One is the actual picture whilst the other two just indicated the presence of specific elements (Aluminium and Silicon) on the sample. I'd like to overlay them using Numpy and matplotlib so that I can then see where exactly the elements are, however not sure how to approach this on python, so far I've only gone as far as reading the picture files as np arrays: image_SEM = np.asarray(Image.open('Project_SEI.JPG')) image_Al = np.asarray(Image.open('Project_Al K.JPG')) image_Si = np.asarray(Image.open('Project_Si K.JPG')) Thank you!
I would be inclined to paste Si and Al images using a mask so that they only affect the SEM image where they are coloured and not where they are black/grey - else you will tend to reduce the contrast of your base image: from PIL import Image # Load images sei = Image.open('sei.jpg') si = Image.open('si.jpg') al = Image.open('al.jpg') # Make mask which only allows coloured areas to show siMask = si.convert('L') siMask.save('DEBUG-siMask.jpg') # Paste Si image over SEM image with transparency mask sei.paste(si, siMask) # Make mask which only allows coloured areas to show alMask = al.convert('L') alMask.save('DEBUG-alMask.jpg') # Paste Al image over SEM image with transparency mask sei.paste(al, alMask) sei.save('result.png') DEBUG-siMask.jpg DEBUG-alMask.jpg result.jpg Note that you could enhance the masks before use - for example you could median filter to remove small speckles, or you could contrast stretch to make the magenta/yellow shading come out more or less solid. For example, you can see the yellow is more solid than the magenta, which is because the yellow mask is brighter, so you could threshold the magenta mask to make it pure black and white which would make the magenta come out solid. So, I median-filtered out the speckles and changed the masking so that coloured areas are 50% transparent like this: #!/usr/bin/env python3 from PIL import Image, ImageFilter # Load images sei = Image.open('sei.jpg') si = Image.open('si.jpg') al = Image.open('al.jpg') # Make mask which only allows coloured areas to show siMask = si.convert('L') # Median filter mask to remove small speckles siMask = siMask.filter(ImageFilter.MedianFilter(5)) # Threshold mask and set opacity to 50% for coloured areas siMask = siMask.point(lambda p: 128 if p > 50 else 0) siMask.save('DEBUG-siMask.jpg') # Paste Si image over SEM image with transparency mask sei.paste(si, siMask) # Make mask which only allows coloured areas to show alMask = al.convert('L') # Median filter mask to remove small speckles alMask = alMask.filter(ImageFilter.MedianFilter(5)) # Threshold mask and set opacity to 50% for coloured areas alMask = alMask.point(lambda p: 128 if p > 50 else 0) alMask.save('DEBUG-alMask.jpg') # Paste Al image over SEM image with transparency mask sei.paste(al, alMask) sei.save('result.jpg') That gives these masks and results:
5
2
77,479,584
2023-11-14
https://stackoverflow.com/questions/77479584/local-azure-function-customer-packages-not-in-sys-path-this-should-never-happe
I'm encountering a weird warning with azure functions locally. Whenever I func start my function, I get these error messages: Found Python version 3.10.12 (python3). Azure Functions Core Tools Core Tools Version: 4.0.5455 Commit hash: N/A (64-bit) Function Runtime Version: 4.27.5.21554 [2023-11-14T10:02:39.795Z] Customer packages not in sys path. This should never happen! [2023-11-14T10:02:42.194Z] Worker process started and initialized. In the host.json, extensionBundle version is [3.*, 4.0.0) In the local.settings.json, "FUNCTIONS_WORKER_RUNTIME": "python" The function app is based on the new model of python azure function (func init MyProjFolder --worker-runtime python --model V2 https://learn.microsoft.com/en-us/azure/azure-functions/functions-run-local?tabs=linux%2Cisolated-process%2Cnode-v4%2Cpython-v2%2Chttp-trigger%2Ccontainer-apps&pivots=programming-language-python) My first interrogation is the first warning: Customer packages not in sys path. This should never happen!. I'm using a virtual environment. The function is starting correctly, but what is this warning? local.settings.json: { "IsEncrypted": false, "Values": { "FUNCTIONS_WORKER_RUNTIME": "python", "AzureWebJobsStorage": "UseDevelopmentStorage=true", "AzureWebJobsFeatureFlags": "EnableWorkerIndexing" } }
This seems to be an issue with the latest version of Azure Functions Core Tools (4.0.5455), which was published recently (6 days ago) as mentioned in the official doc. I created a Python Azure function to check the same with these versions:
Python Version: 3.11.5
Azure Functions Core Tools
Core Tools Version: 4.0.5348 Commit hash: N/A (64-bit)
Function Runtime Version: 4.24.5.21262
I did not get any such warning.
Then I updated Azure Functions Core Tools to 4.0.5455 and ran the same Azure function again with the versions below:
Python version 3.11.6 (py).
Azure Functions Core Tools
Core Tools Version: 4.0.5455 Commit hash: N/A (64-bit)
Function Runtime Version: 4.27.5.21554
7
5
77,479,119
2023-11-14
https://stackoverflow.com/questions/77479119/calculating-groupby-sum-of-values-on-column-based-on-string-in-pandas
data = {'SYMBOL': ['AAAA','AAAA','AAAA','AAAA','AAAA','AAAA','AAAA'] , 'EXPIRYDT': ['26-Oct-23','26-Oct-23','26-Oct-23','26-Oct-23','26-Oct-23','26-Oct-23','26-Oct-23'], 'STRIKE': [480, 500, 525, 425, 450, 480, 500], 'TYPE': ['CE', 'CE', 'CE', 'PE', 'PE', 'PE', 'PE'], 'CONTRACTS': [1, 31, 1, 0, 12, 2, 6], 'OPENINT': [4000, 25000, 1000, 1000, 64000, 2000, 5000], 'TIMESTAMP': ['4-Sep-23','4-Sep-23','4-Sep-23','4-Sep-23','4-Sep-23','4-Sep-23','4-Sep-23']} df=pd.DataFrame(data) result = df.groupby(['EXPIRYDT', 'TYPE']) df['CE_CONT'] = result['CONTRACTS'].transform('sum') df['PE_CONT'] = result['CONTRACTS'].transform('sum') df['CE_OI'] = result['OPENINT'].transform('sum') df['PE_OI'] = result['OPENINT'].transform('sum') print(df) but i am not getting desired output i need output as SYMBOL EXPIRYDT STRIKE TYPE CONTRACTS OPENINT TIMESTAMP CE_CONT PE_CONT CE_OI PE_OI AAAA 26-Oct-23 480 CE 1 40000 4-Sep-23 33 20 30000 72000 AAAA 26-Oct-23 500 CE 31 25000 4-Sep-23 33 20 30000 72000 AAAA 26-Oct-23 525 CE 1 1000 4-Sep-23 33 20 30000 72000 AAAA 26-Oct-23 425 PE 0 1000 4-Sep-23 33 20 30000 72000 AAAA 26-Oct-23 450 PE 12 64000 4-Sep-23 33 20 30000 72000 AAAA 26-Oct-23 480 PE 2 2000 4-Sep-23 33 20 30000 72000 AAAA 26-Oct-23 500 PE 6 5000 4-Sep-23 33 20 30000 72000 after groupby i want sum of OPENINT of TYPE CE TO CE_OI sum of OPENINT of TYPE PE TO PE_OI sum of CONTRACTS of TYPE CE to CE_CONT sum of CONTRACTS of TYPE PE to PE_CONT
Code groupby & merge I chose to merge in too many ways because your original dataset may have multiple values in the EXPIRYDT column, and it is possible to assign different values depending on the EXPIRYDT. step1. aggregate by groupby tmp = df.groupby(['EXPIRYDT', 'TYPE']).agg(CONT=('CONTRACTS', 'sum'), OI=('OPENINT', 'sum'))\ .unstack().swaplevel(axis=1) tmp: TYPE CE PE CE PE CONT CONT OI OI EXPIRYDT 26-Oct-23 33 20 30000 72000 Step2. make tmp to have single colmns and reset index tmp2 = tmp.set_axis(tmp.columns.map('_'.join), axis=1).reset_index() tmp2: EXPIRYDT CE_CONT PE_CONT CE_OI PE_OI 0 26-Oct-23 33 20 30000 72000 step3. merge df & tmp2 out = df.merge(tmp2, how='left') out:
2
2
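An alternative, more compact sketch that reaches the same result with a masked groupby transform rather than the merge used above; column names and data follow the question, and the broadcast works because transform returns one value per original row.
import pandas as pd

data = {'SYMBOL': ['AAAA'] * 7,
        'EXPIRYDT': ['26-Oct-23'] * 7,
        'STRIKE': [480, 500, 525, 425, 450, 480, 500],
        'TYPE': ['CE', 'CE', 'CE', 'PE', 'PE', 'PE', 'PE'],
        'CONTRACTS': [1, 31, 1, 0, 12, 2, 6],
        'OPENINT': [4000, 25000, 1000, 1000, 64000, 2000, 5000],
        'TIMESTAMP': ['4-Sep-23'] * 7}
df = pd.DataFrame(data)

for typ in ('CE', 'PE'):
    mask = df['TYPE'].eq(typ)
    # Non-matching rows become NaN, which sum() skips, so every row in the
    # EXPIRYDT group receives the total for this TYPE only.
    df[f'{typ}_CONT'] = df['CONTRACTS'].where(mask).groupby(df['EXPIRYDT']).transform('sum')
    df[f'{typ}_OI'] = df['OPENINT'].where(mask).groupby(df['EXPIRYDT']).transform('sum')

print(df)  # CE_CONT=33, PE_CONT=20, CE_OI=30000, PE_OI=72000 on every row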
77,477,931
2023-11-14
https://stackoverflow.com/questions/77477931/compute-the-order-of-non-unique-array-elements
I'm looking for an efficient method to compute the "order" of each item in a numpy array, with "order" defined as the number of preceding elements equal to the element. Example: order([4, 2, 3, 2, 6, 4, 4, 6, 2, 4]) [0 0 0 1 0 1 2 1 2 3] Current solution loops in pure Python and is not fast enough: def order(A): cnt = defaultdict(int) O = np.zeros_like(A) for i, r in enumerate(A): O[i] = cnt[r] cnt[r] += 1 return O I'm using order to implement scatter: def scatter(A, c): R = A % c I = c * order(R) + R B = np.full(np.max(I) + 1, -1) B[I] = A return B This is useful for multi-threading. For example, if the scattered array contains addresses to write to then no two threads processing the array in parallel will see the same address. Question is are there numpy built-ins that I'm missing that I can use to make order faster and to remove the explicit looping?
Since what you're doing is essentially a Pandas cumcount, and Pandas uses NumPy internally, one idea would be to look at how they implemented cumcount, and do the same thing. If you read the Pandas code for cumcount, it is internally implemented in this way: Sort the array, keeping track of where each element came from. Compare each element of the sorted array to the next element. If it is different, it is the start of a new run. (run) Work out the length of each group. (rep) Do a cumulative sum, incrementing by 1 for each element which is not part of a new run. (out) Track how much each group is affected by groups before it which should not count. (out[run]) Repeat the value to subtract by rep. Undo the initial sort to put elements back in their original place. Here's how to do the same thing without relying on any of Pandas. def order(array): # https://github.com/pandas-dev/pandas/blob/v1.3.5/pandas/core/groupby/groupby.py#L1493 if len(array) == 0: return np.array([]) count = len(array) # Can remove 'stable' here to increase speed if you # don't care what order the order is assigned in ind = np.argsort(array, kind='stable') array = array[ind] run = np.r_[True, array[:-1] != array[1:]] rep = np.diff(np.r_[np.nonzero(run)[0], count]) out = (~run).cumsum() out -= np.repeat(out[run], rep) rev = np.empty(count, dtype=np.intp) rev[ind] = np.arange(count, dtype=np.intp) out = out[rev] return out I find that this is approx 10x faster for arrays 1000 elements and larger.
2
3
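To make the cumcount analogy in the answer above concrete, the target behaviour is exactly pandas' groupby.cumcount, which also serves as a handy reference implementation when testing the NumPy version:
import numpy as np
import pandas as pd

A = np.array([4, 2, 3, 2, 6, 4, 4, 6, 2, 4])

# cumcount reports, for each row, how many earlier rows belong to the same group.
reference = pd.Series(A).groupby(A).cumcount().to_numpy()
print(reference)  # [0 0 0 1 0 1 2 1 2 3], matching the expected output in the question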
77,455,969
2023-11-9
https://stackoverflow.com/questions/77455969/finding-distinct-sublists-with-target-sums
I am currently working on a task that involves identifying distinct sublists from a given list such that each sublist adds up to one of the specified target numbers. Below is the Python code I've written to address this problem. The primary approach in this recursive function involves iteratively removing elements from the list and assigning them to a sublist corresponding to one of the target numbers or discarding them. The function terminates either when a solution is found and outputs the result or when it's unable to complete due to an empty list or overshooting in one of the sublists. def subset_sums(numbers, target_array, total_partial=None, partial_sum=None): # If it's the first function call, initialize partial_sum and enumerate numbers to detect duplicates if partial_sum is None: numbers = [(v, i) for i, v in enumerate(numbers)] total_partial = [[] for _ in range(len(target_array))] partial_sum = np.zeros(len(target_array)) # If all sublists have the correct sum, yield the resulting sublists if (target_array == partial_sum).all(): yield total_partial return # If we don't reach a correct result and have no numbers left, stop the function elif not numbers: return # Get the next number and the remaining list n_with_index = numbers[0] n = n_with_index[0] remaining = numbers[1:] # Case 1: Skip the new number and continue with the rest of the numbers yield from subset_sums(remaining, target_array, total_partial, partial_sum) # Case 2: Use the new number for each possible list for j in range(len(target_array)): # If using the new number would overshoot in that list, stop if (partial_sum[j] + n) > target_array[j]: return # Otherwise, use the new number and continue with the rest of the numbers else: next_total_partial = total_partial next_total_partial[j] = next_total_partial[j] + [n_with_index] next_partial_sum = partial_sum next_partial_sum[j] = next_partial_sum[j] + n yield from subset_sums(remaining, target_array, next_total_partial, next_partial_sum) However, I've encountered a persistent flaw in the code that I can't seem to resolve. The problem lies in the fact that the same list element gets appended to different sublists, and the algorithm fails to exclude list elements as expected. I've thoroughly reviewed the code, but I can't pinpoint why this issue persists. The following snippet shows an example instance: In [1]: list(subset_sums2([1,3,1,3], np.array([3,5]))) Out[1]: [] However, I expect the output: Out[1]: [ [[(3, 1)], [(1, 0), (1, 2), (3, 3)]], # target 3 is the 3 at index 1; target 5 is the sum of all other numbers [[(3, 3)], [(1, 0), (1, 2), (3, 1)]]] # target 3 is the 3 at index 3; target 5 is the sum of all other numbers Note that the output is (value, index) pairs. Here we have two ways of getting the target numbers of 3 & 5: They're identical except for which given 3 is used to achieve the 3 target vs the 5 target. I would greatly appreciate any assistance that could help me identify and rectify the problem in my implementation. Thank you in advance for your help :)
In this section: for j in range(len(target_array)): # If using the new number would overshoot in that list, stop if (partial_sum[j] + n) > target_array[j]: return # Otherwise, use the new number and continue with the rest of the numbers else: next_total_partial = total_partial next_total_partial[j] = next_total_partial[j] + [n_with_index] next_partial_sum = partial_sum next_partial_sum[j] = next_partial_sum[j] + n yield from subset_sums(remaining, target_array, next_total_partial, next_partial_sum) When your function is called with the example you provided: list(subset_sums([1,3,1,3], np.array([3,5]))) Even on the second iteration there's already a problem. On the first iteration, next_total_partial is set to total_partial, and then next_total_partial is updated on the next line. But this modifies both total_partial and next_total_partial, as they have the same value. So, on the second iteration, you think you reset next_total_partial with: next_total_partial = total_partial But in fact, nothing changes - it still has the same object as a value and you now add the same value ((3, 3) in this case) to both next_total_partial and total_partial. Perhaps you wanted next_total_partial = total_partial.copy()? The same applies to next_partial_sum = partial_sum.copy() of course. It would be possible to fix your logic to get it to work, but it's not a very efficient approach and it has a few other problems: the main issue is what I pointed out: you pass around the same objects where you need copies you have a recursive function, but you also use it for initialisation - this would be better with a secondary outer function, or an inner function you're brute-forcing a solution when there's more efficient ways to achieve this An example of a working solution that fixes some of the issues, but has the same approach you chose: from copy import deepcopy def find_sums(target_sums, xs): def _all_sums(enumerated_xs, sums, grouping): if not enumerated_xs: yield grouping return i, v = enumerated_xs[0] for j in range(len(sums)): if sums[j] + v <= target_sums[j]: new_grouping = deepcopy(grouping) new_grouping[j] += [(i, v)] yield from _all_sums(enumerated_xs[1:], sums[:j] + [sums[j] + v] + sums[j+1:], new_grouping) if sum(target_sums) != sum(xs): return yield from _all_sums(list(enumerate(xs)), [0]*len(target_sums), [[] for _ in range(len(target_sums))]) print(list(find_sums([3, 5], [1, 3, 1, 3]))) Even closer, and perhaps preferable to you: from copy import deepcopy def find_sums(target_sums, xs): def _all_sums(enumerated_xs, sums, grouping): if not enumerated_xs: yield grouping return i, v = enumerated_xs[0] for j in range(len(sums)): if sums[j] + v <= target_sums[j]: new_grouping = deepcopy(grouping) new_grouping[j] += [(i, v)] new_sums = sums.copy() new_sums[j] += v yield from _all_sums(enumerated_xs[1:], new_sums, new_grouping) if sum(target_sums) != sum(xs): return yield from _all_sums(list(enumerate(xs)), [0]*len(target_sums), [[] for _ in range(len(target_sums))]) print(list(find_sums([3, 5], [1, 3, 1, 3])))
3
1
77,473,922
2023-11-13
https://stackoverflow.com/questions/77473922/polars-cast-pl-object-to-pl-string-polars-exceptions-computeerror-cannot-cast
Update: numpy.random.choice is no longer parsed as an Object type. The example produces a String column as expected without any casting needed. I got a pl.LazyFrame with a column of type Object that contains date representations, it also includes missing values (None). In a first step I would like to convert the column from Object to String however this results in a ComputeError. I can not seem to figure out why. I suppose this is due to the None values, sadly I can not drop those at the current point in time. import numpy as np import polars as pl rng = np.random.default_rng(12345) df = pl.LazyFrame( data={ "date": rng.choice( [None, "03.04.1998", "03.05.1834", "05.06.2025"], 100 ), } ) df.with_columns(pl.col("date").cast(pl.String)).collect()
When Polars assigns the pl.Object type it essentially means: "I do not understand what this is." By the time you end up with this type, it is generally too late to do anything useful with it. In this particular case, numpy.random.choice is creating a numpy array of dtype=object >>> rng.choice([None, "foo"], 3) array([None, None, 'foo'], dtype=object) Polars has native .sample() functionality which you could use to create your data instead. df = pl.select(date = pl.Series([None, "03.04.1998", "03.05.1834", "05.06.2025"]) .sample(100, with_replacement=True) ) # shape: (100, 1) # β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” # β”‚ date β”‚ # β”‚ --- β”‚ # β”‚ str β”‚ # β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•‘ # β”‚ null β”‚ # β”‚ 05.06.2025 β”‚ # β”‚ 03.05.1834 β”‚ # β”‚ 03.04.1998 β”‚ # β”‚ … β”‚ # β”‚ null β”‚ # β”‚ 03.04.1998 β”‚ # β”‚ 03.05.1834 β”‚ # β”‚ 03.04.1998 β”‚ # β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
3
2
77,475,285
2023-11-13
https://stackoverflow.com/questions/77475285/pytorch-crossentropy-loss-getting-error-runtimeerror-boolean-value-of-tensor
I have a classification model, producing predictions for 4 classes in a tensor of shape (256, 1, 4)...256 is the batch size, while the "1" for the second dimension is due to some model internal logic and can be removed: preds.shape torch.Size([256, 1, 4]) The corresponding annotations are one-hot encoded, in a tensor of the same shape: targets.shape torch.Size([256, 1, 4]) so, in every row there is only one non-zero element: targets[0][0] = [1, 0, 0, 0] I need to calculate the CrossEntropy loss of the model. I know that The CrossEntropyLoss expects class indices as the target, so I tried using argmax to determine the position of the 1 for each sample, and squeezed the extra dimension: vpredictions_squeezed = cls_scores.squeeze(1) targets = torch.argmax(targets.squeeze(1), dim=1) losses = torch.nn.CrossEntropyLoss(predictions_squeezed, targets) But I'm getting the error: RuntimeError: Boolean value of Tensor with more than one value is ambiguous What am I doing wrong here?
You can call cross_entropy directly from torch.nn.functional:
import torch.nn.functional as F
F.cross_entropy(predictions_squeezed, targets)
Or you can rewrite your code, because nn.CrossEntropyLoss is a class, not a function: instantiate it first and then call the instance:
loss = nn.CrossEntropyLoss()
output = loss(input, target)
3
1
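A minimal end-to-end sketch with the shapes from the question (256, 1, 4), showing both the instantiate-then-call form and the functional form; the random tensors are stand-ins for real model outputs and one-hot targets.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-ins for the model predictions and one-hot annotations from the question.
preds = torch.randn(256, 1, 4)
targets = F.one_hot(torch.randint(0, 4, (256, 1)), num_classes=4).float()

predictions_squeezed = preds.squeeze(1)            # (256, 4) logits
class_indices = targets.squeeze(1).argmax(dim=1)   # (256,) integer class labels

criterion = nn.CrossEntropyLoss()                        # instantiate the class first...
loss = criterion(predictions_squeezed, class_indices)    # ...then call the instance

loss_f = F.cross_entropy(predictions_squeezed, class_indices)  # functional equivalent
print(loss.item(), loss_f.item())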
77,475,604
2023-11-13
https://stackoverflow.com/questions/77475604/how-to-separately-normalize-each-distribution-group
Lets say I have a dataframe such as: CATEGORY Value a v1 a v2 a v3 a v4 a v5 b v6 b v7 b v8 Now, if i want to plot this distributions by category, i could use something like: sns.histplot(data,"Value",hue="CATEGORY",stat="percent"). The problem with this is that category "a" represents 5/8 of the sample and "b" is 3/8. The histograms will reflect this. I want to plot in a way that each histogram will have an area of 1, instead of 5/8 and 3/8. Below is an example of how it looks like now But each of those areas should be one. I thought of maybe iterating by category and plotting one by one
As per this answer of the duplicate, use common_norm=False. Also see seaborn histplot and displot output doesn't match. This is not specific to stat='percent'. Other options are 'frequency', 'probability', and 'density'. import seaborn as sns import matplotlib.pyplot as plt tips = sns.load_dataset('tips') fig, axes = plt.subplots(nrows=2, figsize=(20, 10), tight_layout=True) sns.histplot(data=tips, x='total_bill', hue='day', stat='percent', multiple='dodge', bins=30, common_norm=True, ax=axes[0]) sns.histplot(data=tips, x='total_bill', hue='day', stat='percent', multiple='dodge', bins=30, common_norm=False, ax=axes[1]) axes[0].set_title('common_norm=True', fontweight='bold') axes[1].set_title('common_norm=False', fontweight='bold') handles = axes[1].get_legend().legend_handles for ax in axes: for c in ax.containers: ax.bar_label(c, fmt=lambda x: f'{x:0.2f}%' if x > 0 else '', rotation=90, padding=3, fontsize=8, fontweight='bold') ax.margins(y=0.15) ax.spines[['top', 'right']].set_visible(False) ax.get_legend().remove() _ = fig.legend(title='Day', handles=handles, labels=tips.day.cat.categories.tolist(), bbox_to_anchor=(1, 0.5), loc='center left', frameon=False) sns.displot g = sns.displot(data=tips, kind='hist', x='total_bill', hue='day', stat='percent', multiple='dodge', bins=30, common_norm=False, height=5, aspect=4) ax = g.axes.flat[0] # ax = g.axes[0][0] also works for c in ax.containers: ax.bar_label(c, fmt=lambda x: f'{x:0.2f}%' if x > 0 else '', rotation=90, padding=3, fontsize=8, fontweight='bold')
2
2
77,471,197
2023-11-13
https://stackoverflow.com/questions/77471197/is-there-a-way-to-add-a-column-of-numpy-random-values-to-a-polars-dataframe-whil
Let's say I have a dataframe that has a column named mean that I want to use as an input to a random number generator. Coming from R, this is relatively easy to do in a pipeline: library(dplyr) tibble(alpha = rnorm(1000), beta = rnorm(1000)) %>% mutate(mean = alpha + beta) %>% bind_cols(random_output = rnorm(n = nrow(.), mean = .$mean, sd = 1)) #> # A tibble: 1,000 Γ— 4 #> alpha beta mean random_output #> <dbl> <dbl> <dbl> <dbl> #> 1 0.231 -0.243 -0.0125 0.551 #> 2 0.213 0.647 0.861 0.668 #> 3 0.824 -0.353 0.471 0.852 #> 4 0.665 -0.916 -0.252 -1.81 #> 5 -0.850 0.384 -0.465 -3.90 #> 6 0.721 0.679 1.40 2.54 #> 7 1.46 0.857 2.32 2.14 #> 8 -0.242 -0.431 -0.673 -0.820 #> 9 0.234 0.188 0.422 -0.662 #> 10 -0.494 -2.15 -2.65 -3.01 #> # β„Ή 990 more rows Created on 2023-11-12 with reprex v2.0.2 In python, I can create an intermediate dataframe and use it as input to np.random.normal(), then bind that to the dataframe, but this feels clunky. Is there a way to add the random_output col as a part of the pipeline/chain? import polars as pl import numpy as np # create a df df = ( pl.DataFrame( { "alpha": np.random.standard_normal(1000), "beta": np.random.standard_normal(1000) } ) .with_columns( (pl.col("alpha") + pl.col("beta")).alias("mean") ) ) # create an intermediate object sim_vals = np.random.normal(df.get_column("mean")) # bind the simulated values to the original df ( df.with_columns(random_output = pl.lit(sim_vals)) ) #> shape: (1_000, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ alpha ┆ beta ┆ mean ┆ random_output β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ f64 ┆ f64 ┆ f64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════β•ͺ═══════════β•ͺ═══════════════║ β”‚ -1.380249 ┆ 1.531959 ┆ 0.15171 ┆ 0.938207 β”‚ β”‚ -0.332023 ┆ -0.108255 ┆ -0.440277 ┆ 0.081628 β”‚ β”‚ -0.718319 ┆ -0.612187 ┆ -1.330506 ┆ -1.286229 β”‚ β”‚ 0.22067 ┆ -0.497258 ┆ -0.276588 ┆ 0.908147 β”‚ β”‚ … ┆ … ┆ … ┆ … β”‚ β”‚ 0.299117 ┆ -0.371846 ┆ -0.072729 ┆ 0.592632 β”‚ β”‚ 0.789633 ┆ 0.95712 ┆ 1.746753 ┆ 2.954801 β”‚ β”‚ -0.264415 ┆ -0.761634 ┆ -1.026049 ┆ -1.369753 β”‚ β”‚ 1.893911 ┆ 1.554736 ┆ 3.448647 ┆ 5.192537 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
There are four approaches (that I can think of), 2 of which were mentioned in comments, one that I use, and the last I know it exists but don't personally use it. First (get_column(col) or ['col']) reference Use df.get_column as a parameter of np.random.normal which you can do in a chain if you use pipe so for example df.with_columns( mean=pl.col('alpha') + pl.col('beta') ).pipe(lambda df: ( df.with_columns( rando=pl.lit(np.random.normal(df['mean'])) ) )) Second (map_batches) Use map_batches as an expression df.with_columns( mean=pl.col('alpha') + pl.col('beta') ).with_columns( rando=pl.col('mean').map_batches(lambda col: pl.Series(np.random.normal(col))) ) Third (numba) This approach is the faster than the previous two if you're going to do many randomizations but takes more setup (hence the caveat about many randomizations) numba lets you create ufuncs which are compiled functions which you can use directly inside an expression. You can create this function which just uses the default standard deviation import numba as nb @nb.guvectorize([(nb.float64[:], nb.float64[:])], '(n)->(n)', nopython=True) def rando(means, res): for i in range(len(means)): res[i]=np.random.normal(means[i]) then you can do df.with_columns( mean=pl.col('alpha') + pl.col('beta') ).with_columns(rand_nb=rando(pl.col('mean'))) More reading: guvectorize another numba example limitation Fourth (rust extension) Unfortunately for this answer (and I suppose myself in general) I haven't dabbled in rust programming but there's an extension interface whereby you can create functions in rust and deploy them as expressions. Here is documentation on doing that Performance Using a 1M row df I get... First method: 71.1 ms Β± 8.06 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) Second method: 70.7 ms Β± 7.88 ms per loop (mean Β± std. dev. of 7 runs, 10 loops each) Third method: 45.7 ms Β± 2.86 ms per loop (mean Β± std. dev. of 7 runs, 10 loops each) One thing to note is that it's not faster unless you want a different mean for each row, for instance... df.with_columns(z=rando(pl.repeat(5,pl.count()))): 43.8 ms Β± 2.12 ms per loop (mean Β± std. dev. of 7 runs, 10 loops each) df.with_columns(z=pl.Series(np.random.normal(5,1,df.shape[0]))): 39.6 ms Β± 3.64 ms per loop (mean Β± std. dev. of 7 runs, 10 loops each)
3
2
77,475,372
2023-11-13
https://stackoverflow.com/questions/77475372/pandas-subtraction-for-multiindex-pivot-table
I have a following data frame which I converted to pandas pivot table having two indexes "Date" and "Rating. The values are sorted in columns A, B and C. I would like to find a solution which will subtract the values for each column and rating for consecutive days. Say, the change in A from 03/01/2007 to 02/01/2007 for rating M would be 0.4179 - 0.4256 = -0.0077. The subtraction won't always be performed on a one day difference. But it will always be the (new date - the old date). The results I'm looking for can be found in the table below:
If your dataframe is correctly sorted (or use df.sort_values('Date')), you can use groupby + diff:
# Replace ['A'] with ['A', 'B', 'C']
df['A_diff'] = df.groupby('Rating')['A'].diff().fillna(0)
Output:
>>> df
         Date Rating       A  A_diff
0  02/01/2007      M  0.4256  0.0000
1  02/01/2007     MM  0.4358  0.0000
2  02/01/2007    MMM  0.4471  0.0000
3  03/01/2007      M  0.4179 -0.0077
4  03/01/2007     MM  0.4325 -0.0033
5  03/01/2007    MMM  0.4476  0.0005
6  04/01/2007      M  0.4173 -0.0006
7  04/01/2007     MM  0.4316 -0.0009
8  04/01/2007    MMM  0.4469 -0.0007
If you don't know how many columns you have, you can try:
cols = df.columns[2:]
df[cols] = df.groupby('Rating')[cols].diff().fillna(0)
2
2
77,474,980
2023-11-13
https://stackoverflow.com/questions/77474980/discord-py-command-execution-truncates-the-json-file-then-applies-the-edits-l
I'm working on a more advanced leveling system. I want a user on any server as long as they are an administrator to be able to change the XP amount. I'm using a value in my server database to achieve that. Every time it makes an edit for that command, I've noticed that it truncates the entire file and applies the edits. I've used the same 5 lines to do this, and it just started not to work. The other command I used for toggling the level system worked perfectly fine, even with the same 5 lines. I even tried setting the value that was getting changed to another one, and it still had the same issue, so it can't be the value's problem. Does anyone know how to fix this issue? Please and thanks :) its really annoying to stand Code @levelmds.command(name="set_xp_addition", description="Choose how many XP you want to give to your members. Set 0 or blank to go to the default. (4)") async def setxpaddition(interaction: discord.Interaction, xp: int = 0): with open("database\\wae.json", encoding="utf-8") as f: data = json.load(f) if str(interaction.guild.id) in data: if not interaction.user.guild_permissions.administrator: await interaction.response.send_message(f":x: Only administrators of {interaction.guild.name} can execute this command.") else: if xp: if xp == 0: with open("database\\wae.json", encoding="utf-8") as f: data = json.load(f) data[str(interaction.guild.id)]['xpaddition'] = 0 with open("database\\wae.json", "w+") as fa: json.dump(data, fa) await interaction.response.send_message("βœ… Successfully set XP addition to the default (4)") else: with open("database\\wae.json", encoding="utf-8") as f: data = json.load(f) data[str(interaction.guild.id)]['xpaddition'] = xp with open("database\\wae.json", "w+") as fa: json.dump(data, fa) await interaction.response.send_message(f"βœ… Successfully changed XP addition to **{xp}.**") else: with open("database\\wae.json", encoding="utf-8") as f: data = json.load(f) data[str(interaction.guild.id)]['xpaddition'] = 0 with open("database\\wae.json", "w+") as fa: json.dump(data, fa) await interaction.response.send_message("βœ… Successfully set XP addition to the default (4)") (btw this command is in a GROUP, so for testing consider changing levelmds to bot.tree.command.)
First of all, the code in the if statement on the line 10 is unreachable cause 0 is evaluated as false, so the if statement on the line 9 is only fulfilled when xp is not 0. Here is the code simplified: @levelmds.command(name="set_xp_addition", description="Choose how many XP you want to give to your members. Set 0 or blank to go to the default. (4)") async def setxpaddition(interaction: discord.Interaction, xp: int = 0): with open("database\\wae.json", encoding="utf-8") as f: data = json.load(f) if str(interaction.guild.id) in data: if not interaction.user.guild_permissions.administrator: await interaction.response.send_message(f":x: Only administrators of {interaction.guild.name} can execute this command.") else: if xp != 0: with open("database\\wae.json", encoding="utf-8") as f: data = json.load(f) data[str(interaction.guild.id)]['xpaddition'] = xp with open("database\\wae.json", "w+") as fa: json.dump(data, fa) await interaction.response.send_message(f"βœ… Successfully changed XP addition to **{xp}.**") else: with open("database\\wae.json", encoding="utf-8") as f: data = json.load(f) data[str(interaction.guild.id)]['xpaddition'] = 0 with open("database\\wae.json", "w+") as fa: json.dump(data, fa) await interaction.response.send_message("βœ… Successfully set XP addition to the default (4)") And about the problem of the file being truncated and wrote with the modifications, when using w+ the existing content in the file is cleared so that's why it gets erased. This is necessary to dump the new data to the file, but if many users in many servers are using the command, maybe you are going to lose data.
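As a side note (my addition, not something the question asked about): because "w+" truncates the file before json.dump() runs, a crash in the middle of the dump can wipe the whole database. A common way to reduce that risk is to write to a temporary file and atomically replace the original; save_json_atomic below is a hypothetical helper name I picked for illustration:
import json, os, tempfile

def save_json_atomic(path, data):
    # write to a temp file in the same folder, then atomically swap it in,
    # so a crash mid-write cannot leave wae.json truncated or half-written
    folder = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=folder, suffix=".tmp")
    with os.fdopen(fd, "w", encoding="utf-8") as tmp:
        json.dump(data, tmp)
    os.replace(tmp_path, path)

# inside the command, instead of open(..., "w+") followed by json.dump:
# save_json_atomic("database\\wae.json", data)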
2
1
77,469,040
2023-11-12
https://stackoverflow.com/questions/77469040/python-dataclass-automate-id-incrementation-in-abstract-class
I want to create a unique ID incrementation for my Python subclasses using the abstract method, but I don't know how to separate each subclass's set of ID values. Here is my code: from dataclasses import dataclass, field from itertools import count @dataclass class Basic: identifier: int = field(default_factory=count().__next__, init=False) def __init_subclass__(cls, **__): super().__init_subclass__(**__) # cls.identifier: int = field(default_factory=count().__next__, init=False) class A(Basic): pass class B(Basic): pass The id values are shared between subclasses when I want to separate them: >>> import stack >>> stack.A() A(identifier=0) >>> stack.B() B(identifier=1) Do you know how to solve the problem? I think the __init_subclass__ is a solution, but I'm unsure how to make it work.
The dataclasses library is not a good fit for this use case. Think of a data class as a mutable namedtuple with defaults. I recommend implementing concrete classes. However, if you insist, you can do something like the following by defining another decorator which augments an input class before calling dataclasses.dataclass(). (This assumes Python 3.10 or later.) from dataclasses import dataclass, field from itertools import count def dataclass_with_id(cls): setattr(cls, 'identifier', field(default_factory=count().__next__, init=False)) cls.__annotations__['identifier'] = int return dataclass(cls) @dataclass_with_id class A: pass @dataclass_with_id class B: pass print(A(), A()) print(B(), B()) If you want to do this work in the __init_subclass__() of a common base class, you need to hack the internal implementation of the dataclasses module. But don't do that, because it may break in future Python versions.
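For completeness, here is a different workaround (my own sketch, not part of the recommendation above): keep a lazily created per-class counter and assign the id in __post_init__, which avoids touching the dataclasses internals. The _id_counter attribute name is mine; the class names come from the question.
from dataclasses import dataclass, field
from itertools import count

@dataclass
class Basic:
    identifier: int = field(init=False, default=0)

    def __post_init__(self):
        cls = type(self)
        # give every (sub)class its own counter the first time it is instantiated
        if '_id_counter' not in cls.__dict__:
            cls._id_counter = count()
        self.identifier = next(cls._id_counter)

class A(Basic):
    pass

class B(Basic):
    pass

print(A(), A())   # A(identifier=0) A(identifier=1)
print(B(), B())   # B(identifier=0) B(identifier=1)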
2
1
77,472,952
2023-11-13
https://stackoverflow.com/questions/77472952/python-code-for-calculation-of-very-large-adjacency-matrix-crashes-using-network
I need to calculate the adjacency matrix (in flat format) of a very large graph. Number nodes is 54327 and number edges is 46 million. The input are 46 million edges, so input looks like 1 6 2 7 1 6 3 8 ... meaning node 1 connects to 6, node 2 to 7, with possible repeats. The adjacency matrix in this case would look like 1 2 3 6 7 8 1 0 0 0 2 0 0 2 0 0 0 0 1 0 3 0 0 0 0 0 1 6 1 0 0 0 0 0 7 0 2 0 0 0 0 8 0 0 1 0 0 0 And in fact i just need the flat version for non-zero entries so: node_x node_y count_edges 1 6 2 2 7 1 3 8 1 6 1 2 7 2 1 8 3 1 With networkx (or with a crosstab) the python code crashes with out of memory (on both a 8GB RAM personal machine or a 64GB RAM linux server). Doing it the slow way, simply with for loops works, but takes 24+ hours to run. Here is the full test code below, any ideas on how to get networkx (or crosstab or any clever alternative to run)? import random import pandas as pd import networkx as nx import sys number_nodes = 54327 # real one is 54,327 number_edges_for_all_nodes = 46000000 # real one is around 46 million print("number_nodes=",number_nodes) print("number_edges_for_all_nodes=",number_edges_for_all_nodes) # Create a list of possible list of nodes list_nodes = [f'{i}' for i in range(1, number_nodes)] # Create random combinations of possible values. nodeid_x = random.choices(list_nodes, k=number_edges_for_all_nodes) nodeid_y = random.choices(list_nodes, k=number_edges_for_all_nodes) # This would look like this in a dataframe. df = pd.DataFrame({'nodeid_x': nodeid_x, 'nodeid_y': nodeid_y}) # remove self nodes df = df.query('nodeid_x != nodeid_y') # df print("Graph in df ->") print(df.head(10)) print("Create edges for MultiDiGraph") edges = df[['nodeid_x', 'nodeid_y']].to_numpy(dtype=int) # print("Create nodes for MultiDiGraph") nodes = sorted(set(node for edge in edges for node in edge)) # create graph that allows parallel connections, so that one node may be connected twice to another # Multi means parallel connections, Di means directional, since we want both directions G = nx.MultiDiGraph() print("Create add_nodes_from for MultiDiGraph") G.add_nodes_from(nodes) print("Create add_edges_from for MultiDiGraph") G.add_edges_from(edges) # memory usage edge = sum([sys.getsizeof(e) for e in G.edges]) node = sum([sys.getsizeof(n) for n in G.nodes]) print("Total memory graph GB:", (edge + node)/1024/1024/1024) # count connections, store to int to save space print("Create to_pandas_adjacency for MultiDiGraph") df = nx.to_pandas_adjacency(G, dtype=int) print("Total memory adjacency GB:", df.memory_usage(deep=True).sum()/1024/1024/1024) print(df) # Reset the index to flatten the DataFrame from a matrix to a long table print("Reset the index to flatten the DataFrame from a matrix to a long table") df = df.stack().reset_index() df.index = df.index.astype('int32', copy = False) # rename the column/row edges print("rename the column/row edges") df.rename(columns={'level_0': 'nodeid_x'}, inplace=True) df.rename(columns={'level_1': 'nodeid_y'}, inplace=True) # rename crosstab intersection column print("rename crosstab intersection column") df.rename(columns={0: 'count_intersections'}, inplace=True) # change count_intersections to int to save memory print("change count_intersections to int to save memory") df['count_intersections'] = df['count_intersections'].astype('int32', copy = False) # now drop rows for which the nodeid_x = nodeid_y print("now drop rows for which the nodeid_x = nodeid_y") df = df.query('nodeid_x != nodeid_y') # now drop zero intersections rows 
print("now drop zero intersections rows") df = df.query('count_intersections != 0') # sort df.sort_values(by=['count_intersections', 'nodeid_x', 'nodeid_y'], ascending=False, inplace = True) print("len(df)=",len(df)) print("pandas_adjacency matrix flat->") print(df.head(9)) Networkx, Crosstab, slow for loops
Maybe I am missing something, but you can use .groupby + .sum here: out = ( df.assign(count_intersections=1) .groupby(["nodeid_x", "nodeid_y"], as_index=False)["count_intersections"] .sum() ) out.sort_values( by=["count_intersections", "nodeid_x", "nodeid_y"], ascending=False, inplace=True ) print(out.head(10)) Prints (running with random.seed(42), with 46 million edges, in under one minute): nodeid_x nodeid_y count_intersections 41523794 5587 29315 4 40851660 53761 14855 4 40587824 53478 42683 4 40429793 53307 75 4 36763731 4938 42230 4 30719061 4290 32316 4 6800705 17279 42566 4 45616995 9972 39801 3 45613051 9968 5797 3 45597210 9951 10498 3
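An equivalent shortcut, in case it helps (my addition, assuming pandas >= 1.1): DataFrame.value_counts counts the duplicated (nodeid_x, nodeid_y) pairs directly and already sorts by count descending:
out = (
    df.value_counts(['nodeid_x', 'nodeid_y'])   # counts duplicate edge pairs
      .reset_index(name='count_intersections')
)
print(out.head(10))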
2
1
77,473,952
2023-11-13
https://stackoverflow.com/questions/77473952/using-lambda-function-how-to-iterate-over-the-columns-having-list-values-in-pan
import pandas as pd mydata = {"Key" : [567, 568, 569, 570, 571, 572] , "Sprint" : ["Max1;Max2", "Max2", "DI001 2", "DI001 25", "DAS 100" , "DI001 101"]} df = pd.DataFrame(mydata) df ["sprintlist"]= df["Sprint"].str.split(";") print (df) From this dataframe, I want to extract only the numbers that appears in the last part of the string from column "Sprintlist" for each value in the list to the new list "Sprintnumb" as show below Expected output: In one of my previous query, I got clarity on how to extract the number when only one value present in "Sprint" column. I tried using lambda function to achieved the desired output but getting errors "str' object has no attribute 'str'" df["Sprint Number"] = df.Sprint.str.extract(r"(\d+)$").astype(int)
Use Series.explode with Series.str.extractall, converting to numeric and aggregate lists: df["Sprint Number"] = (df["sprintlist"].explode() .str.extractall(r"(\d+)$")[0] .astype(int) .groupby(level=0) .agg(list)) print (df) Key Sprint sprintlist Sprint Number 0 567 Max1;Max2 [Max1, Max2] [1, 2] 1 568 Max2 [Max2] [2] 2 569 DI001 2 [DI001 2] [2] 3 570 DI001 25 [DI001 25] [25] 4 571 DAS 100 [DAS 100] [100] 5 572 DI001 101 [DI001 101] [101] Or use list comprhension with regex: df["Sprint Number"] = [[int(re.search('(\d+)$', y).group(0)) for y in x] for x in df["sprintlist"]] print (df) Key Sprint sprintlist Sprint Number 0 567 Max1;Max2 [Max1, Max2] [1, 2] 1 568 Max2 [Max2] [2] 2 569 DI001 2 [DI001 2] [2] 3 570 DI001 25 [DI001 25] [25] 4 571 DAS 100 [DAS 100] [100] 5 572 DI001 101 [DI001 101] [101] If possible some string not ends with number add assign operator := with testing None: import re mydata = {"Key" : [567, 568, 569, 570, 571, 572] , "Sprint" : ["Max1;Max", "Max2", "DI001 2", "DI001 25", "DAS 100" , "DI001 101"]} df = pd.DataFrame(mydata) df ["sprintlist"]= df["Sprint"].str.split(";") df["Sprint Number"] = [[int(m.group(0)) for y in x if( m:=re.search('(\d+)$', y)) is not None] for x in df["sprintlist"]] print (df) Key Sprint sprintlist Sprint Number 0 567 Max1;Max [Max1, Max] [1] 1 568 Max2 [Max2] [2] 2 569 DI001 2 [DI001 2] [2] 3 570 DI001 25 [DI001 25] [25] 4 571 DAS 100 [DAS 100] [100] 5 572 DI001 101 [DI001 101] [101]
2
1
77,471,991
2023-11-13
https://stackoverflow.com/questions/77471991/how-to-decide-if-streamingresponse-was-closed-in-fastapi-starlette
When looping a generator in StreamingResponse() using FastAPI/starlette https://www.starlette.io/responses/#streamingresponse how can we tell if the connection was somehow disconnected, so a event could be fired and handled somewhere else? Scenario: writing an API with text/event-stream, need to know when client closed the connection.
Consider using request.is_disconnected(). From Starlette's docs: In some cases such as long-polling, or streaming responses you might need to determine if the client has dropped the connection. You can determine this state with disconnected = await request.is_disconnected(). Unfortunately, there seems to be no other documentation regarding this API. Have a look at the MR that introduced this feature: https://github.com/encode/starlette/pull/320
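A minimal sketch of how that can look in a FastAPI/Starlette text/event-stream endpoint (the route path and the one-second interval are placeholders I picked for illustration):
import asyncio
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse

app = FastAPI()

@app.get("/events")
async def events(request: Request):
    async def event_source():
        while True:
            if await request.is_disconnected():
                # the client closed the connection -- fire your cleanup/event here
                print("client disconnected")
                break
            yield "data: ping\n\n"
            await asyncio.sleep(1)
    return StreamingResponse(event_source(), media_type="text/event-stream")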
2
4
77,472,748
2023-11-13
https://stackoverflow.com/questions/77472748/how-to-add-text-at-barchart-when-y-is-a-list-using-plotly-express
I have the following pandas dataframe import pandas as pd foo = pd.DataFrame({'country': {0: 'a', 1: 'b', 2: 'c', 3: 'd', 4: 'e'}, 'unweighted': {0: 18.0, 1: 16.9, 2: 13.3, 3: 11.3, 4: 13.1}, 'weighted_1': {0: 17.7, 1: 15.8, 2: 14.0, 3: 11.2, 4: 12.8}, 'weighted_2': {0: 17.8, 1: 15.8, 2: 14.0, 3: 11.2, 4: 12.8}}) country unweighted weighted_1 weighted_2 0 a 18.0 17.7 17.8 1 b 16.9 15.8 15.8 2 c 13.3 14.0 14.0 3 d 11.3 11.2 11.2 4 e 13.1 12.8 12.8 And I am using the following code to produce the bar chart using plotly_express import plotly.express as px px.bar( foo, x='country', y=['unweighted', 'weighted_1', 'weighted_2'], barmode='group', ) I would also like to display the values as text on each bar. I tried px.bar( foo, x='country', y=['unweighted', 'weighted_1', 'weighted_2'], text=['unweighted', 'weighted_1', 'weighted_2'], barmode='group', ) but it doesn't work. How could I do that?
Plotly Express can add the value labels automatically via the text_auto argument: import plotly.express as px px.bar( foo, x='country', y=['unweighted', 'weighted_1', 'weighted_2'], text_auto=True, barmode='group', )
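If you also want to control the formatting or placement of the labels, something along these lines should work (a sketch assuming a reasonably recent Plotly, where text_auto also accepts a d3 format string):
fig = px.bar(
    foo,
    x='country',
    y=['unweighted', 'weighted_1', 'weighted_2'],
    text_auto='.1f',      # one decimal place
    barmode='group',
)
fig.update_traces(textposition='outside')
fig.show()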
2
1
77,468,274
2023-11-12
https://stackoverflow.com/questions/77468274/how-to-make-mutation-which-is-inversion-of-one-of-the-solutions-genes-when-solu
That's what I have so far. As I see from the output, my parameters are not enough to constraint mutation to my needs. Sometimes no one gene is changed, sometimes more than one. import pygad import numpy as np def divider(ga_instance): return np.max(np.sum(ga_instance.population, axis=1)) def on_start(ga_instance): print("on_start()") print(f'ΠΠ°Ρ‡Π°Π»ΡŒΠ½Π°Ρ популяция:\n {ga_instance.population}') def fitness_function(ga_instance, solution, _): return np.sum(solution) / divider(ga_instance) def on_fitness(ga_instance, population_fitness): print(f'\non_fitness()') print(f'Π”Π΅Π»ΠΈΡ‚Π΅Π»ΡŒ: {divider(ga_instance)}') for idx, (instance, fitness) in enumerate(zip(ga_instance.population, ga_instance.last_generation_fitness)): print(f'{idx}. {instance}: {fitness}') def on_parents(ga_instance, selected_parents): print("\non_parents()") print(f'Π’Ρ‹Π±Ρ€Π°Π½Π½Ρ‹Π΅ индСксы Ρ€ΠΎΠ΄ΠΈΡ‚Π΅Π»Π΅ΠΉ: {ga_instance.last_generation_parents_indices}') print(f'Π’Ρ‹Π±Ρ€Π°Π½Π½Ρ‹Π΅ Ρ€ΠΎΠ΄ΠΈΡ‚Π΅Π»ΠΈ:\n {ga_instance.last_generation_parents}') def on_crossover(ga_instance, offspring_crossover): print("\non_crossover()") print(f'Π Π΅Π·ΡƒΠ»ΡŒΡ‚Π°Ρ‚ кроссинговСра:\n {ga_instance.last_generation_offspring_crossover}') def on_mutation(ga_instance, offspring_mutation): print("\non_mutation()") print(f'Π Π΅Π·ΡƒΠ»ΡŒΡ‚Π°Ρ‚ ΠΌΡƒΡ‚Π°Ρ†ΠΈΠΈ:\n {ga_instance.last_generation_offspring_mutation}') def on_generation(ga_instance): print(f"\non_generation()") print("Π’Ρ‹Π²Π΅Π΄Π΅Π½Π½ΠΎΠ΅ ΠΏΠΎΠΊΠΎΠ»Π΅Π½ΠΈΠ΅:\n ", ga_instance.population) sol = ga_instance.best_solution() print(f"Π›ΡƒΡ‡ΡˆΠ΅Π΅ Ρ€Π΅ΡˆΠ΅Π½ΠΈΠ΅: {sol[0]} : {sol[1]}") ga_instance = pygad.GA( num_generations=1, num_parents_mating=2, fitness_func=fitness_function, gene_type=int, init_range_low=0, init_range_high=2, sol_per_pop=10, num_genes=8, crossover_type='single_point', parent_selection_type="rws", mutation_type="inversion", mutation_num_genes=1, on_start=on_start, on_fitness=on_fitness, on_parents=on_parents, on_crossover=on_crossover, on_mutation=on_mutation, on_generation=on_generation, ) ga_instance.run() More details for stackoverflow algorithm: sdfgdsfdddddddddddfffffffffadsgadfgdafgdasgadsgdsagdsagdsagadsgdsagdsagdsag.
Inversion mutation inverts the order of subset of the genes. It does not invert the value of the genes from 0 to 1. That is if you have a chromosome like abcd, then inversion mutation inverts the genes to be dcba. To apply a mutation operator that flips the genes from 0 to 1 and from 1 to 0, use this code. It creates a new function flip_mutation() to flip the bits. import pygad import numpy as np def divider(ga_instance): return np.max(np.sum(ga_instance.population, axis=1)) def on_start(ga_instance): print("on_start()") print(f'ΠΠ°Ρ‡Π°Π»ΡŒΠ½Π°Ρ популяция:\n {ga_instance.population}') def fitness_function(ga_instance, solution, _): return np.sum(solution) / divider(ga_instance) def on_fitness(ga_instance, population_fitness): print(f'\non_fitness()') print(f'Π”Π΅Π»ΠΈΡ‚Π΅Π»ΡŒ: {divider(ga_instance)}') for idx, (instance, fitness) in enumerate(zip(ga_instance.population, ga_instance.last_generation_fitness)): print(f'{idx}. {instance}: {fitness}') def on_parents(ga_instance, selected_parents): print("\non_parents()") print(f'Π’Ρ‹Π±Ρ€Π°Π½Π½Ρ‹Π΅ индСксы Ρ€ΠΎΠ΄ΠΈΡ‚Π΅Π»Π΅ΠΉ: {ga_instance.last_generation_parents_indices}') print(f'Π’Ρ‹Π±Ρ€Π°Π½Π½Ρ‹Π΅ Ρ€ΠΎΠ΄ΠΈΡ‚Π΅Π»ΠΈ:\n {ga_instance.last_generation_parents}') def on_crossover(ga_instance, offspring_crossover): print("\non_crossover()") print(f'Π Π΅Π·ΡƒΠ»ΡŒΡ‚Π°Ρ‚ кроссинговСра:\n {ga_instance.last_generation_offspring_crossover}') def on_mutation(ga_instance, offspring_mutation): print("\non_mutation()") print(f'Π Π΅Π·ΡƒΠ»ΡŒΡ‚Π°Ρ‚ ΠΌΡƒΡ‚Π°Ρ†ΠΈΠΈ:\n {ga_instance.last_generation_offspring_mutation}') def on_generation(ga_instance): print(f"\non_generation()") print("Π’Ρ‹Π²Π΅Π΄Π΅Π½Π½ΠΎΠ΅ ΠΏΠΎΠΊΠΎΠ»Π΅Π½ΠΈΠ΅:\n ", ga_instance.population) sol = ga_instance.best_solution() print(f"Π›ΡƒΡ‡ΡˆΠ΅Π΅ Ρ€Π΅ΡˆΠ΅Π½ΠΈΠ΅: {sol[0]} : {sol[1]}") def flip_mutation(offspring, ga_instance): for idx in range(offspring.shape[0]): mutation_gene1 = np.random.randint(low=0, high=np.ceil(offspring.shape[1]/2 + 1), size=1)[0] if offspring[idx, mutation_gene1] == 1: offspring[idx, mutation_gene1] = 0 else: offspring[idx, mutation_gene1] = 1 return offspring ga_instance = pygad.GA( num_generations=1, num_parents_mating=2, fitness_func=fitness_function, gene_type=int, init_range_low=0, init_range_high=2, sol_per_pop=10, num_genes=8, crossover_type='single_point', parent_selection_type="rws", mutation_type=flip_mutation, mutation_num_genes=1, on_start=on_start, on_fitness=on_fitness, on_parents=on_parents, on_crossover=on_crossover, on_mutation=on_mutation, on_generation=on_generation, ) ga_instance.run()
2
1
77,470,205
2023-11-12
https://stackoverflow.com/questions/77470205/unexpected-behaviour-when-passing-none-as-a-parameter-value-to-sql-server
Given the following test3 table /****** Object: Table [dbo].[test3] Script Date: 11/12/2023 9:30:17 AM ******/ IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[test3]') AND type in (N'U')) DROP TABLE [dbo].[test3] GO /****** Object: Table [dbo].[test3] Script Date: 11/12/2023 9:30:17 AM ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[test3]( [id] [int] IDENTITY(1,1) NOT NULL, [column1] [varchar](10) NOT NULL ) ON [PRIMARY] GO SET IDENTITY_INSERT [dbo].[test3] ON GO INSERT [dbo].[test3] ([id], [column1]) VALUES (1, N'aaa') GO INSERT [dbo].[test3] ([id], [column1]) VALUES (2, N'bbb') GO SET IDENTITY_INSERT [dbo].[test3] OFF GO Problem: The sqlstatement1 returns all two rows of the table. The sqlstatement2 returns zero rows of the table. import pyodbc connectionString = 'DRIVER={ODBC Driver 17 for SQL Server};SERVER=7D3QJR3;DATABASE=mint2;Trusted_Connection=yes' currentConnection = pyodbc.connect(connectionString) sqlStatement1 = ''' SELECT id, column1 FROM test3 WHERE ISNULL(?, id) = id ORDER BY ID ''' sqlStatement2 = ''' SELECT id, column1 FROM test3 WHERE ISNULL(?, column1) = column1 ORDER BY ID ''' #Process sqlStatement1 sqlArgs = [] sqlArgs.append(None) cursor = currentConnection.cursor() cursor.execute(sqlStatement1,sqlArgs) rows = cursor.fetchall() print('ROWS WITH ID=NULL:' + str(len(rows))) cursor.close() #Process sqlStatement2 sqlArgs = [] sqlArgs.append(None) cursor = currentConnection.cursor() cursor.execute(sqlStatement2,sqlArgs) rows = cursor.fetchall() print('ROWS WITH COLUMN1=NULL:' + str(len(rows))) cursor.close() So why does it work with an int data type but not a string data type? My gut is that the sp_prepexec statement is creating the positional parameter P1 as varchar(1) for some reason when the statement compares ? to a varchar column and sets P1 to and int when the statement comapres ? to a int column: declare @p1 int set @p1=1 exec sp_prepexec @p1 output,N'@P1 int',N' SELECT id, column1 FROM test3 WHERE ISNULL(@P1, id) = id ORDER BY ID ',NULL select @p1 vs declare @p1 int set @p1=2 exec sp_prepexec @p1 output,N'@P1 varchar(1)',N' SELECT id, column1 FROM test3 WHERE ISNULL(@P1, column1) = column1 ORDER BY ID ',NULL select @p1
My gut is that the sp_prepexec statement is creating the positional parameter P1 as varchar(1) for some reason when the statement compares ? to a varchar column and sets P1 to and int when the statement comapres ? to a int column. Yes that is exactly what it does. It has no knowledge of how large to make the parameter, because you haven't told it. This has been noted on GitHub. Because you are using it on the left side of an ISNULL, the right side is casted to the left side, hence the query is not giving the correct results. You have a number of solutions: You can use setinputsizes to set the type and size of a string parameter: cursor.setinputsizes([(pyodbc.SQL_VARCHAR, 10, 0)]) Do an explicit cast. This is a much better option, as it can never fail and you can set each value individually. WHERE ISNULL(CAST(? AS varchar(10)), column1) = column1 You can also do this by setting it to a variable of the correct size first. Rewrite your query to not use ISNULL, which you should do anyway, because it prevents the use of indexes. Either use an OR sqlStatement2 = ''' SELECT id, column1 FROM test3 WHERE (? = column1 OR ? IS NULL) ORDER BY ID; ''' Note that you need to pass the parameter twice, or assign it to a variable within the SQL. Or the best option in my opinion, make the query dynamic, so you decide up front whether to filter by that column. sqlStatement2 = ''' SELECT id, column1 FROM test3 ''' if someValue is not None: sqlStatement2 = sqlStatement2 + '''WHERE ? = column1 ''' sqlStatement2 = sqlStatement2 + '''ORDER BY ID; ''' Note that in these last two options, it's probably still ideal to set the variable size.
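For reference, option 2 dropped into the Python snippet from the question looks like this (reusing the same currentConnection object):
sqlStatement2 = '''
    SELECT id, column1
    FROM test3
    WHERE ISNULL(CAST(? AS varchar(10)), column1) = column1
    ORDER BY ID;
'''
cursor = currentConnection.cursor()
cursor.execute(sqlStatement2, [None])
rows = cursor.fetchall()
print('ROWS WITH COLUMN1=NULL:', len(rows))   # now returns both rows
cursor.close()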
2
3
77,469,168
2023-11-12
https://stackoverflow.com/questions/77469168/how-to-update-hoover-annotations-when-using-a-slider
My goal. I am using matplotlib slider to plot several series. I want to have hovering labels for each point. Each point corresponds to measurement. So I want to display measurement name to be on these hovering labels. Question. How do I update labels for new series (for slider positions)? If I create new cursor in update function I get several labels for each point. So I need somehow to delete old label first. How do I do this? Description. On first slide I get points with A and B labels. On second slide I should get C and D labels, but I am getting A and B again. My code. import numpy as np import pandas as pd import matplotlib.pyplot as plt from matplotlib.widgets import Slider import mplcursors as mplc %matplotlib widget data = {'Name': ['A', 'B', 'C', 'D'], 'x': [1,3,3,2],'y': [4,7,5,1],'color': [1,2,3,2],'size' :[50,40,10,30],'slide': [1,1,2,2]} df=pd.DataFrame.from_dict(data) dff=df.loc[df['slide']==1] z=list(set(df['slide'].to_list())) x = dff['x'].to_list() y = dff['y'].to_list() lbl=dff['Name'].to_list() fig, ax = plt.subplots() ax.clear() points = ax.scatter(x,y,s=100, alpha=0.5) mplc.cursor(ax, hover=True).connect( "add", lambda sel: sel.annotation.set_text(lbl[sel.index])) xmin=min(x) xmax=max(x) ymin=min(y) ymax=max(y) ax.set_ylim([xmin-10,xmax+10]) ax.set_xlim([ymin-10,xmax+10]) axfreq = fig.add_axes([0.15, 0.1, 0.65, 0.03]) plot_slider = Slider( ax=axfreq, label='', valmin=0, valmax=len(z), valinit=1, valstep=1,) def update(val): dff=df.loc[df['slide']==plot_slider.val] x = dff['x'].to_list() y = dff['y'].to_list() lbl=dff['Name'].to_list() points.set_offsets(np.c_[x,y]) plot_slider.on_changed(update) plt.show()
I had to (!pip install mplcursors ipympl) before I could run your code. Here is my workaround to get the correct annotations/labels after the slider being updated : sliders = df["slide"].unique() fig, ax = plt.subplots() d = {} for sl in sliders: lbl, x, y = df.loc[ df["slide"].eq(sl), ["Name", "x", "y"]].T.to_numpy() pts = ax.scatter(x, y, s=100) curs = mplc.cursor(pts, hover=True) curs.connect("add", lambda sel, lbl=lbl: sel.annotation.set_text(lbl[sel.index])) d[sl] = {"cursor": curs, "scatter": pts} axfreq = fig.add_axes([0.15, 0.01, 0.73, 0.03]) # << I adjusted this plot_slider = Slider( ax=axfreq, label="", valinit=1, valstep=1, valmin=min(sliders), valmax=max(sliders)) def curscatter(sl, op=0.5): """Display the cur/pts for the requested slider only""" for _sl, inf in d.items(): curs = inf["cursor"]; pts = inf["scatter"] pts.set_alpha(op if _sl == sl else 0) curs.visible = (_sl == sl) def update(curr_sl): curscatter(curr_sl) curscatter(plot_slider.val) # or curscatter(1) plot_slider.on_changed(update) plt.show() Name x y color size slide A 1 4 1 50 1 B 3 7 2 40 1 C 3 5 3 10 2 D 2 1 2 30 2
2
1
77,466,563
2023-11-11
https://stackoverflow.com/questions/77466563/how-to-implement-multi-band-pass-filter-with-scipy-signal-butter
Based on the band-pass filter here, I am trying to make a multi-band filter using the code bellow. However, the filtered signal is close to zero which affects the result when the spectrum is plotted. Should the coefficients of the filter of each band be normalized? Can you please someone suggest how I can fix the filter? from scipy.signal import butter, sosfreqz, sosfilt from scipy.signal import spectrogram import matplotlib import matplotlib.pyplot as plt from scipy.fft import fft import numpy as np def butter_bandpass(lowcut, highcut, fs, order=5): nyq = 0.5 * fs low = lowcut / nyq high = highcut / nyq sos = butter(order, [low, high], analog=False, btype='band', output='sos') return sos def multiband_filter(data, bands, fs, order=10): sos_list = [] for lowcut, highcut in bands: sos = butter_bandpass(lowcut, highcut, fs, order=order) scalar = max(abs(fft(sos, 2000))) # sos = sos / scalar sos_list += [sos] # sos_list = [butter_bandpass(lowcut, highcut, fs, order=order) for lowcut, highcut in bands] # Combine filters into a single filter sos = np.vstack(sos_list) # Apply the multiband filter to the data y = sosfilt(sos, data) return y, sos_list def get_toy_signal(): t = np.arange(0, 0.3, 1 / fs) fq = [-np.inf] + [x / 12 for x in range(-9, 3, 1)] mel = [5, 3, 1, 3, 5, 5, 5, 0, 3, 3, 3, 0, 5, 8, 8, 0, 5, 3, 1, 3, 5, 5, 5, 5, 3, 3, 5, 3, 1] acc = [5, 0, 8, 0, 5, 0, 5, 5, 3, 0, 3, 3, 5, 0, 8, 8, 5, 0, 8, 0, 5, 5, 5, 0, 3, 3, 5, 0, 1] toy_signal = np.array([]) for kj in range(len(mel)): note_signal = np.sum([np.sin(2 * np.pi * 440 * 2 ** ff * t) for ff in [fq[acc[kj]] - 1, fq[acc[kj]], fq[mel[kj]] + 1]], axis=0) zeros = np.zeros(int(0.01 * fs)) toy_signal = np.concatenate((toy_signal, note_signal, zeros)) toy_signal += np.random.normal(0, 1, len(toy_signal)) toy_signal = toy_signal / (np.max(np.abs(toy_signal)) + 0.1) t_toy_signal = np.arange(len(toy_signal)) / fs return t_toy_signal, toy_signal if __name__ == "__main__": fontsize = 12 # Sample rate and desired cut_off frequencies (in Hz). fs = 3000 f1, f2 = 100, 200 f3, f4 = 470, 750 f5, f6 = 800, 850 f7, f8 = 1000, 1000.1 cut_off = [(f1, f2), (f3, f4), (f5, f6), (f7, f8)] # cut_off = [(f1, f2), (f3, f4)] # cut_off = [(f1, f2)] # cut_off = [f1] t_toy_signal, toy_signal = get_toy_signal() # toy_signal -= np.mean(toy_signal) # t_toy_signal = wiener(t_toy_signal) fig, ax = plt.subplots(6, 1, figsize=(8, 12)) fig.tight_layout() ax[0].plot(t_toy_signal, toy_signal) ax[0].set_title('Original toy_signal', fontsize=fontsize) ax[0].set_xlabel('Time (s)', fontsize=fontsize) ax[0].set_ylabel('Magnitude', fontsize=fontsize) ax[0].set_xlim(left=0, right=max(t_toy_signal)) sos_list = [butter_bandpass(lowcut, highcut, fs, order=10) for lowcut, highcut in cut_off] # Combine filters into a single filter sos = np.vstack(sos_list) # w *= 0.5 * fs / np.pi # Convert w to Hz. ##################################################################### # First plot the desired ideal response as a green(ish) rectangle. 
##################################################################### # Plot the frequency response for i in range(len(cut_off)): w, h = sosfreqz(sos_list[i], worN=2000) ax[1].plot(0.5 * fs * w / np.pi, np.abs(h), label=f'Band {i + 1}: {cut_off[i]} Hz') ax[1].set_title('Multiband Filter Frequency Response') ax[1].set_xlabel('Frequency [Hz]') ax[1].set_ylabel('Gain') ax[1].legend() # ax[1].set_xlim(0, max(*cut_off) + 100) ##################################################################### # Spectrogram of original signal ##################################################################### f, t, Sxx = spectrogram(toy_signal, fs, nperseg=930, noverlap=0) ax[2].pcolormesh(t, f, np.abs(Sxx), norm=matplotlib.colors.LogNorm(vmin=np.min(Sxx), vmax=np.max(Sxx)), ) ax[2].set_title('Spectrogram of original toy_signal', fontsize=fontsize) ax[2].set_xlabel('Time (s)', fontsize=fontsize) ax[2].set_ylabel('Frequency (Hz)', fontsize=fontsize) ##################################################################### # Compute filtered signal ##################################################################### # Apply the multiband filter to the data # toy_signal_filtered = sosfilt(sos, toy_signal) toy_signal_filtered = np.sum([sosfilt(sos, toy_signal) for sos in sos_list], axis=0) ##################################################################### # Spectrogram of filtered signal ##################################################################### f, t, Sxx = spectrogram(toy_signal_filtered, fs, nperseg=930, noverlap=0) ax[3].pcolormesh(t, f, np.abs(Sxx), norm=matplotlib.colors.LogNorm(vmin=np.min(Sxx), vmax=np.max(Sxx)) ) ax[3].set_title('Spectrogram of filtered toy_signal', fontsize=fontsize) ax[3].set_xlabel('Time (s)', fontsize=fontsize) ax[3].set_ylabel('Frequency (Hz)', fontsize=fontsize) ax[4].plot(t_toy_signal, toy_signal_filtered) ax[4].set_title('Filtered toy_signal', fontsize=fontsize) ax[4].set_xlim(left=0, right=max(t_toy_signal)) ax[4].set_xlabel('Time (s)', fontsize=fontsize) ax[4].set_ylabel('Magnitude', fontsize=fontsize) N = 1512 X = fft(toy_signal, n=N) Y = fft(toy_signal_filtered, n=N) # fig.set_size_inches((10, 4)) ax[5].plot(np.arange(N) / N * fs, 20 * np.log10(abs(X)), 'r-', label='FFT original signal') ax[5].plot(np.arange(N) / N * fs, 20 * np.log10(abs(Y)), 'g-', label='FFT filtered signal') ax[5].set_xlim(xmax=fs / 2) ax[5].set_ylim(ymin=-20) ax[5].set_ylabel(r'Power Spectrum (dB)', fontsize=fontsize) ax[5].set_xlabel("frequency (Hz)", fontsize=fontsize) ax[5].grid() ax[5].legend(loc='upper right') plt.tight_layout() plt.show() plt.figure() # fig.set_size_inches((10, 4)) plt.plot(np.arange(N) / N * fs, 20 * np.log10(abs(X)), 'r-', label='FFT original signal') plt.plot(np.arange(N) / N * fs, 20 * np.log10(abs(Y)), 'g-', label='FFT filtered signal') plt.xlim(xmax=fs / 2) plt.ylim(ymin=-20) plt.ylabel(r'Power Spectrum (dB)', fontsize=fontsize) plt.xlabel("frequency (Hz)", fontsize=fontsize) plt.grid() plt.legend(loc='upper right') plt.tight_layout() plt.show() The following is after using @Warren Weckesser comment: toy_signal_filtered = np.mean([sosfilt(sos, toy_signal) for sos in sos_list], axis=0) The following is after using @Warren Weckesser comment: toy_signal_filtered = np.sum([sosfilt(sos, toy_signal) for sos in sos_list], axis=0) Here is an example where a narrow band is used:
Easier and recommended method is what Warren wrote in comments. Just calculate sum of separately band-pass filtered signals. That being said, for someone who wants to create and apply single multi-band filter, he can try to achieve this by combining filters: lowpass (to cut everything above last pass-filter), highpass (to cut everything below first pass-filter), N-1 band-stop filters, where N being number of pass-bands (to cut parts in-between filters). It may be difficult though to make it stable (be careful with filter orders) and harder to make it steep. Found it interesting and tried myself: from scipy.signal import butter, lfilter import matplotlib.pyplot as plt from scipy.fft import fft import numpy as np def multi_band_filter(bands, subfilter_order=5): # high-pass filter nyq = 0.5 * fs normal_cutoff = bands[0][0] / nyq b, a = butter(subfilter_order, normal_cutoff, btype='highpass', analog=False) all_b = [b] all_a = [a] # band-stop filters for idx in range(len(bands) - 1): normal_cutoff1 = bands[idx][1] / nyq normal_cutoff2 = bands[idx+1][0] / nyq b, a = butter(subfilter_order, [normal_cutoff1, normal_cutoff2], btype='bandstop', analog=False) all_b.append(b) all_a.append(a) # low-pass filter normal_cutoff = bands[-1][1] / nyq b, a = butter(subfilter_order, normal_cutoff, btype='lowpass', analog=False) all_b.append(b) all_a.append(a) # combine filters: combined_a = all_a[0] for a in all_a[1:]: combined_a = np.convolve(a, combined_a) combined_b = all_b[0] for b in all_b[1:]: combined_b = np.convolve(b, combined_b) return combined_b, combined_a bands = [[400, 700], [1000, 1500]] fs = 8000 time = np.arange(0, 1 - 0.5/fs, 1/fs) signal_to_filter = np.sum([np.sin(2 * np.pi * (freq + 0.01 * np.random.random()) * time + np.pi*np.random.random()) for freq in range(10, 3800)], axis=0) b, a = multi_band_filter(bands) filtered_signal = lfilter(b, a, signal_to_filter) original_spectrum = fft(signal_to_filter) filtered_signal_spectrum = fft(filtered_signal) plt.figure(figsize=(16, 10)) plt.plot(np.linspace(0, fs, len(original_spectrum)), np.abs(original_spectrum), color='b') plt.plot(np.linspace(0, fs, len(filtered_signal_spectrum)), np.abs(filtered_signal_spectrum), color='orange') plt.xlim([0, 4000]) plt.show() SOS version from scipy.signal import butter, sosfilt, freqz import matplotlib.pyplot as plt from scipy.fft import fft import numpy as np def multi_band_filter(bands, subfilter_order=5): # high-pass filter nyq = 0.5 * fs normal_cutoff = bands[0][0] / nyq sos = butter(subfilter_order, normal_cutoff, btype='highpass', analog=False, output='sos') all_sos = [sos] # band-stop filters for idx in range(len(bands) - 1): normal_cutoff1 = bands[idx][1] / nyq normal_cutoff2 = bands[idx+1][0] / nyq sos = butter(subfilter_order, [normal_cutoff1, normal_cutoff2], btype='bandstop', analog=False, output='sos') all_sos.append(sos) # low-pass filter normal_cutoff = bands[-1][1] / nyq sos = butter(subfilter_order, normal_cutoff, btype='lowpass', analog=False, output='sos') all_sos.append(sos) # combine filters: combined_sos = np.vstack(all_sos) return combined_sos bands = [[400, 700], [1000, 1500]] fs = 8000 time = np.arange(0, 1 - 0.5/fs, 1/fs) signal_to_filter = np.sum([np.sin(2 * np.pi * (freq + 0.01 * np.random.random()) * time + np.pi*np.random.random()) for freq in range(10, 3800)], axis=0) sos = multi_band_filter(bands) filtered_signal = sosfilt(sos, signal_to_filter) original_spectrum = fft(signal_to_filter) filtered_signal_spectrum = fft(filtered_signal) plt.figure(figsize=(16, 10)) 
plt.plot(np.linspace(0, fs, len(original_spectrum)), np.abs(original_spectrum), color='b') plt.plot(np.linspace(0, fs, len(filtered_signal_spectrum)), np.abs(filtered_signal_spectrum), color='orange') plt.xlim([0, 4000]) plt.show() w, h = freqz(b, a) freq_domain = np.linspace(0, fs/2, len(w)) plt.figure(figsize=(16, 10)) plt.plot(freq_domain, 20 * np.log10(abs(h)), 'b') plt.show() As you can see, the slope of the filter is not very steep.
2
4
77,460,705
2023-11-10
https://stackoverflow.com/questions/77460705/how-to-detect-any-key-pressed-without-blocking-execution-in-python
I have a script that checks the position of the mouse every 60 seconds. If the mouse has not moved, it moves it, makes a right click, presses Esc, and sleeps. It is pretty handy to avoid the computer going to sleep. If the mouse has moved, it does nothing, goes to sleep and checks again in 60 seconds. Now I want to extend it to detect any key press and treat it the same way as the mouse: if a key has been pressed, just go to sleep. I found the keyboard module, which allows detecting pressed keys. However, it blocks execution while waiting for a pressed key. Therefore, I would like to find a way to stop "listening" with keyboard after a specified time; that way, I would use that instead of sleep. If either the mouse is moved or the keyboard is pressed, don't do anything. I have not been able to find anything that works for Windows. I hope you can help.
You can use method #3 of https://stackoverflow.com/a/57644349/9997212: Method #3: Using the function on_press_key: import keyboard keyboard.on_press_key("p", lambda _: print("You pressed p")) It needs a callback function. I used _ because the keyboard function returns the keyboard event to that function. Once executed, it will run the function when the key is pressed. You can stop all hooks by running this line: keyboard.unhook_all() This will allow your program to be event-oriented and will not block your main thread. To listen to all keys instead of a specific one, use keyboard.hook. It works the same way: import keyboard keyboard.hook(lambda event: print(f'You pressed {event.name}')) The key is to set up a global variable as a boolean. The hook works as an interrumption, therefore, everytime that it is triggered, you activate as True the boolean. After the check that you need you turn it back to False. Something like this: import keyboard try: #the try, finally is used to activate and relase the hooks to the keyboard def handle_key(event): global KeyPressed #the almighty global variable that monitors whether the keyboard was pressed or not. KeyPressed = True # print("KeyPressed is now:", event.name) #in case you want to know what did you pressed. return global KeyPressed #the almighty global variable that monitors whether the keyboard was pressed or not. KeyPressed = False #initialize as False, no key was touched keyboard.hook(lambda event: handle_key(event) ) #this activates the detection of pressed keys. Must end with "keyboard.unhook_all()" if not KeyPressed: #means you have not pressed a key do_something() else: #Then a key was pressed KeyPressed = False #reset the pressed state to False sleep(PAUSE) #wait the specified pause finally: keyboard.unhook_all() #this resets the "keyboard.hook(lambda event: handle_key(event) )" so that no more keys are registered
2
3
77,460,094
2023-11-10
https://stackoverflow.com/questions/77460094/python-pyqt5-how-to-show-statustip-for-qmenu-and-submenu-actions
# -*- coding: utf-8 -*- # Form implementation generated from reading ui file 'menu_example_statustip.ui' # # Created by: PyQt5 UI code generator 5.15.9 # # WARNING: Any manual changes made to this file will be lost when pyuic5 is # run again. Do not edit this file unless you know what you are doing. from PyQt5 import QtCore, QtGui, QtWidgets class Ui_MainWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName("MainWindow") MainWindow.resize(800, 600) self.centralwidget = QtWidgets.QWidget(MainWindow) self.centralwidget.setObjectName("centralwidget") MainWindow.setCentralWidget(self.centralwidget) self.menubar = QtWidgets.QMenuBar(MainWindow) self.menubar.setGeometry(QtCore.QRect(0, 0, 800, 22)) self.menubar.setObjectName("menubar") self.menu1 = QtWidgets.QMenu(self.menubar) self.menu1.setObjectName("menu1") self.menu1_1 = QtWidgets.QMenu(self.menu1) self.menu1_1.setObjectName("menu1_1") MainWindow.setMenuBar(self.menubar) self.statusbar = QtWidgets.QStatusBar(MainWindow) self.statusbar.setObjectName("statusbar") MainWindow.setStatusBar(self.statusbar) self.action1_1_1 = QtWidgets.QAction(MainWindow) self.action1_1_1.setObjectName("action1_1_1") self.action1_1_2 = QtWidgets.QAction(MainWindow) self.action1_1_2.setObjectName("action1_1_2") self.menu1_1.addAction(self.action1_1_1) self.menu1_1.addAction(self.action1_1_2) self.menu1.addAction(self.menu1_1.menuAction()) self.menubar.addAction(self.menu1.menuAction()) self.retranslateUi(MainWindow) QtCore.QMetaObject.connectSlotsByName(MainWindow) def retranslateUi(self, MainWindow): _translate = QtCore.QCoreApplication.translate MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow")) self.menu1.setStatusTip(_translate("MainWindow", "1")) self.menu1.setTitle(_translate("MainWindow", "1")) self.menu1_1.setStatusTip(_translate("MainWindow", "1.1")) self.menu1_1.setTitle(_translate("MainWindow", "1.1")) self.action1_1_1.setText(_translate("MainWindow", "1.1.1")) self.action1_1_1.setStatusTip(_translate("MainWindow", "1.1.1")) self.action1_1_2.setText(_translate("MainWindow", "1.1.2")) self.action1_1_2.setStatusTip(_translate("MainWindow", "1.1.2")) if __name__ == "__main__": import sys app = QtWidgets.QApplication(sys.argv) MainWindow = QtWidgets.QMainWindow() ui = Ui_MainWindow() ui.setupUi(MainWindow) MainWindow.show() sys.exit(app.exec_()) Maybe a duplicate question. How can I show the "1" status tip message and "1.1" status tip message for menu and submenu actions. For "1.1.1" and "1.1.2" there is no problem. Is this possible without code in QtDesigner? Edit: There is a related bug here.
The issue here is that you're setting the status-tip on the wrong object. You need to set it on the item that represents the menu, rather than the menu itself. This can be done via the menu's associated action, like this: menu.menuAction().setStatusTip('Hello World') It's a little surprising that this doesn't happen automatically. The menu-action does inherit the title and icon of the menu - so why not also its status-tip, tool-tip, etc? This would obviously be quite convenient for menus created in Qt Designer. I suppose a work-around might be something like this: for menu in MainWindow.findChildren(QtWidgets.QMenu): action = menu.menuAction() action.setStatusTip(menu.statusTip()) action.setToolTip(menu.toolTip())
2
3
77,446,607
2023-11-8
https://stackoverflow.com/questions/77446607/why-does-cubic-spline-create-not-logical-shape
I am trying to draw an arch-like Cubic Spline using SciPy's Cubic Spline function but at some point is creating a non logical shape between two of the control points. The line in black is what the function is evaluating and in green is what I expect to happen (just as it does between points 4 and 8) This is how I create the image (You can check the code and run it here) from scipy.interpolate import CubicSpline import numpy as np import matplotlib.pyplot as plt x = [-0.0243890844, -0.0188174509, -0.00021640210000000056, 0.0202699043, 0.0239562802] # X values of the coordinates for points 2, 4, 8, 13 and 15 y = [-0.0117638968, 0.00469300617, 0.0177650191, 0.00215831073, -0.0154924048] # Y values of the coordinates for points 2, 4, 8, 13 and 15 cs = CubicSpline(x, y) dsX = np.linspace(x[0], x[len(x)-1], num=1000) plt.plot(dsX, cs(dsX), 'k') plt.plot(x, y, 'mo') plt.show() Do you know how could I fix this? Or what could be causing this? Is there any kind of option/configuration parameter I am missing?
Cubic splines are prone to overshooting like this due to the constraint of matching 2nd derivatives. Thus small variations in data may cause large variations in the curve itself, including what you seem to have here. There is no way to "fix" this with CubicSpline. What you could do is to clarify your requirements and select an appropriate interpolant. If you can forgo the C2 requirement and C1 interpolant is OK, you can use pchip or Akima1D, as suggested in comments. If you want smoothing not interpolation, there's make_smoothing_spline (as also suggested in the comments).
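A minimal sketch of the C1 alternatives mentioned above, reusing x, y and dsX from the question's snippet:
from scipy.interpolate import PchipInterpolator, Akima1DInterpolator

pchip = PchipInterpolator(x, y)    # shape-preserving cubic, avoids overshoot
akima = Akima1DInterpolator(x, y)  # oscillates less than a C2 cubic spline

plt.plot(dsX, pchip(dsX), 'g', label='PCHIP')
plt.plot(dsX, akima(dsX), 'b--', label='Akima')
plt.plot(x, y, 'mo')
plt.legend()
plt.show()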
3
1
77,459,386
2023-11-10
https://stackoverflow.com/questions/77459386/how-to-implement-nested-for-loops-with-branches-efficiently-in-jax
I am wanting to reimplement a function in jax that loops over a 2d array and modifies the output array at an index that is not necessarily the same as the current iterating index based on conditions. Currently I am implementing this via repeated use of jnp.where for the conditions separately, but the function is ~4x slower than the numba implementation on cpu, on gpu it is ~10x faster - which I suspect is due to the fact that I am iterating over the whole array again for every condition. The numba implementation is as follows: from jax.config import config config.update("jax_enable_x64", True) import jax import jax.numpy as jnp import numpy as np import numba as nb rng = np.random.default_rng() @nb.njit def raytrace_np(ir, dx, dy): assert ir.ndim == 2 n, m = ir.shape assert ir.shape == dx.shape == dy.shape output = np.zeros_like(ir) for i in range(ir.shape[0]): for j in range(ir.shape[1]): dx_ij = dx[i, j] dy_ij = dy[i, j] dxf_ij = np.floor(dx_ij) dyf_ij = np.floor(dy_ij) ir_ij = ir[i, j] index0 = i + int(dyf_ij) index1 = j + int(dxf_ij) if 0 <= index0 <= n - 1 and 0 <= index1 <= m - 1: output[index0, index1] += ( ir_ij * (1 - (dx_ij - dxf_ij)) * (1 - (dy_ij - dyf_ij)) ) if 0 <= index0 <= n - 1 and 0 <= index1 + 1 <= m - 1: output[index0, index1 + 1] += ( ir_ij * (dx_ij - dxf_ij) * (1 - (dy_ij - dyf_ij)) ) if 0 <= index0 + 1 <= n - 1 and 0 <= index1 <= m - 1: output[index0 + 1, index1] += ( ir_ij * (1 - (dx_ij - dxf_ij)) * (dy_ij - dyf_ij) ) if 0 <= index0 + 1 <= n - 1 and 0 <= index1 + 1 <= m - 1: output[index0 + 1, index1 + 1] += ( ir_ij * (dx_ij - dxf_ij) * (dy_ij - dyf_ij) ) return output and my current jax reimplementation is: @jax.jit def raytrace_jax(ir, dx, dy): assert ir.ndim == 2 n, m = ir.shape assert ir.shape == dx.shape == dy.shape output = jnp.zeros_like(ir) dxfloor = jnp.floor(dx) dyfloor = jnp.floor(dy) dxfloor_int = dxfloor.astype(jnp.int64) dyfloor_int = dyfloor.astype(jnp.int64) meshyfloor = dyfloor_int + jnp.arange(n)[:, None] meshxfloor = dxfloor_int + jnp.arange(m)[None] validx = (meshxfloor >= 0) & (meshxfloor <= m - 1) validy = (meshyfloor >= 0) & (meshyfloor <= n - 1) validx2 = (meshxfloor + 1 >= 0) & (meshxfloor + 1 <= m - 1) validy2 = (meshyfloor + 1 >= 0) & (meshyfloor + 1 <= n - 1) validxy = validx & validy validx2y = validx2 & validy validxy2 = validx & validy2 validx2y2 = validx2 & validy2 dx_dxfloor = dx - dxfloor dy_dyfloor = dy - dyfloor output = output.at[ jnp.where(validxy, meshyfloor, 0), jnp.where(validxy, meshxfloor, 0) ].add( jnp.where(validxy, ir * (1 - dx_dxfloor) * (1 - dy_dyfloor), 0) ) output = output.at[ jnp.where(validx2y, meshyfloor, 0), jnp.where(validx2y, meshxfloor + 1, 0), ].add(jnp.where(validx2y, ir * dx_dxfloor * (1 - dy_dyfloor), 0)) output = output.at[ jnp.where(validxy2, meshyfloor + 1, 0), jnp.where(validxy2, meshxfloor, 0), ].add(jnp.where(validxy2, ir * (1 - dx_dxfloor) * dy_dyfloor, 0)) output = output.at[ jnp.where(validx2y2, meshyfloor + 1, 0), jnp.where(validx2y2, meshxfloor + 1, 0), ].add(jnp.where(validx2y2, ir * dx_dxfloor * dy_dyfloor, 0)) return output Test and timings: shape = 2000, 2000 ir = rng.random(shape) dx = (rng.random(shape) - 0.5) * 5 dy = (rng.random(shape) - 0.5) * 5 _raytrace_np = raytrace_np(ir, dx, dy) _raytrace_jax = raytrace_jax(ir, dx, dy).block_until_ready() assert np.allclose(_raytrace_np, _raytrace_jax) %timeit raytrace_np(ir, dx, dy) %timeit raytrace_jax(ir, dx, dy).block_until_ready() Output: 14.3 ms Β± 84.7 Β΅s per loop (mean Β± std. dev. 
of 7 runs, 100 loops each) 62.9 ms Β± 187 Β΅s per loop (mean Β± std. dev. of 7 runs, 10 loops each) So is there a way to implement this algorithm in jax with performance more comparable to the numba implementation?
The way you implemented it in JAX is pretty close to what I'd recommend. Yes, it's 3x slower than a custom Numba implementation on CPU, but I think for an operation like this, that is to be expected. The operation you defined applies specific logic to each individual entry of the array – that is precisely the computational regime that Numba is designed for, and precisely the kind of computation that CPUs were designed for: it's not surprising that with Numba on CPU your computation is very fast. I suspect the reason you used Numba rather than NumPy here is that NumPy is not designed for this sort of algorithm: it is an array-oriented language, not an array-element-oriented language. JAX/XLA is more similar to NumPy than to Numba: it is an array-oriented language; it encodes operations across whole arrays at once, rather than choosing a different computation per-element. The benefit of this array-oriented computing model becomes really apparent when you move away from CPU and run the code on an accelerator like a GPU or TPU: this hardware is specifically designed for vectorized array operations, which is why you found that the same, array-oriented code was 10x faster on GPU.
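One micro-optimization that might be worth trying on top of this (my own sketch, not benchmarked here, and the function name is mine): fold the four masked scatter-adds into a single .at[].add() by stacking the four corner contributions, so XLA only has to emit one scatter. It assumes jax_enable_x64 as in the question; otherwise use int32 indices.
import jax
import jax.numpy as jnp

@jax.jit
def raytrace_jax_single_scatter(ir, dx, dy):
    n, m = ir.shape
    dxf, dyf = jnp.floor(dx), jnp.floor(dy)
    fx, fy = dx - dxf, dy - dyf
    i0 = jnp.arange(n)[:, None] + dyf.astype(jnp.int64)
    j0 = jnp.arange(m)[None, :] + dxf.astype(jnp.int64)
    # four corner targets and their bilinear weights, stacked on a new axis
    rows = jnp.stack([i0, i0, i0 + 1, i0 + 1])
    cols = jnp.stack([j0, j0 + 1, j0, j0 + 1])
    vals = jnp.stack([ir * (1 - fx) * (1 - fy), ir * fx * (1 - fy),
                      ir * (1 - fx) * fy, ir * fx * fy])
    valid = (rows >= 0) & (rows < n) & (cols >= 0) & (cols < m)
    rows = jnp.where(valid, rows, 0)
    cols = jnp.where(valid, cols, 0)
    vals = jnp.where(valid, vals, 0.0)
    # one scatter-add instead of four
    return jnp.zeros_like(ir).at[rows, cols].add(vals)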
3
1
77,459,155
2023-11-10
https://stackoverflow.com/questions/77459155/is-there-a-way-to-find-a-carmichael-number-having-n-prime-factors-in-a-given-ran
I'm trying to solve the problem. I need to find a Carmichael number which is a product of seven prime numbers, each between 10^7 and 10^9. Is there any way to do it? I tried to solve this task using Chernick's formula: M(m) = (6m+1)(12m+1)(18m+1)(36m+1)(72m+1)(144m+1)(288*m+1), on condition that all factors are prime and m is divisible by 8. Also I used Korselt's criterion: a positive composite integer n is a Carmichael number if and only if n is square-free, and for all prime divisors p of n, it is true that (n-1) is divisible by (p-1). Here is the code: from gmpy2 import is_prime from math import prod m = 1666672 while True: x = [i * m + 1 for i in [6, 12, 18, 36, 72, 144, 288]] n = prod(x) if all(is_prime(i) and (n - 1) % (i - 1) == 0 and i in range(10 ** 7, 10 ** 9 + 1) for i in x): print(m, n) break elif x[6] > 10 ** 9: print('Nothing found') break m += 8 Finally, I got m = 18726360 and n = 112504155402956714375844566364658214310419726067438239203656961. Prime factors of n are: 112358161, 224716321, 337074481, 674148961, 1348297921, 2696595841, 5393191681. But only the first four of them are in the required range [10^7, 10^9], so my solution didn't pass :(
Lets first approach this problem by exploring Korselt's Criterion a bit. As you state, n is a Carmichael Number if and only if: n is square-free for all prime divisors p of n, it is true that p βˆ’ 1 divides n βˆ’ 1 We achieve (1) by making each of the 7 prime factors distinct. That leaves (2). This means n - 1 is a multiple of each p - 1, the smallest such value being the Least Common Denominator (LCM) of the p - 1 values. The probability that a truly random n - 1 is divisible by some other integer K is 1/K. This gives us the intuition that we should try to minimize our LCM, to increase our chances that it divides n - 1. Of course n - 1 will not be truly random, but the heuristic still works. To minimize the LCM of p - 1 values, we'll generate values that contain intentionally contain many of the same prime factors. In fact, we'll generate 5-smooth numbers, meaning their largest prime factor is at most 5. This way, at worst we a LCM that dominated by three prime powers: 2^x, 3^y, 5^z. In practice, we'll avoid this worst case; most of our p - 1 values will be composed of some "balanced" prime powers, resulting in a smaller LCM. This alone would be enough to brute force a solution, but we have one more trick we can use to speed up the search further. Let each prime factor p of n be one greater than a multiple of some constant k. Symbolically: p_i = a_i * k + 1. We notice that k divides n - 1: n = p_0 * p_1 * ... * p_m n = (a_0 * k + 1) * (a_1 * k + 1) * ... * (a_m * k + 1) n = k * [...] + 1 n - 1 = k * [...] With this information, we can actually pick a handful of small, distinct 5-smooth numbers, and some larger common factor k, and multiply them for our p - 1 values. We're guaranteed that k divides n - 1, and the 5-smooth numbers (and their LCM) are much smaller, greatly increasing our chances of their LCM evenly dividing n - 1. Putting this all into code: from queue import PriorityQueue from itertools import combinations from math import prod from gmpy2 import is_prime def gen_5_smooth(): primes = [2, 3, 5] queue = PriorityQueue() for i, p in enumerate(primes): queue.put((p, i)) yield 1 while True: x, i = queue.get() yield x for j, q in enumerate(primes[i:], i): queue.put((q*x, j)) def solve(): limit_lower = 10**7 limit_upper = 10**9 smooth_lower = 1000 smooth_upper = smooth_lower * 10 mult_lower = limit_lower // smooth_lower + 1 mult_upper = limit_upper // smooth_upper smooth_factors = [] for fact in gen_5_smooth(): if fact >= smooth_upper: break if fact >= smooth_lower: smooth_factors.append(fact) for mult in range(mult_lower, mult_upper): primes = [fact * mult + 1 for fact in smooth_factors if is_prime(fact * mult + 1)] for comb in combinations(primes, 7): product = prod(comb) if all((product - 1) % (x - 1) == 0 for x in comb): yield comb I chose the smooth bounds manually based on the number of 5-smooth numbers that fell into range and the size of the corresponding multipliers to push them into the [1e7, 1e9) range. You could adjust that value for potentially faster results. You could also generate combinations in order of increasing LCM with some extra effort, which would also produce solutions faster. However, the above code produces a couple solutions per second on average, so its probably fast enough. 
Iterating over the first handful of solutions yields: (13501351, 14581459, 16201621, 17281729, 58325833, 64806481, 77767777) (12070801, 17381953, 23175937, 25147501, 45265501, 50295001, 57939841) (13606651, 14513761, 21770641, 22677751, 36284401, 48983941, 87082561) (10991161, 14654881, 17585857, 35171713, 36637201, 54955801, 58619521) (12268801, 15336001, 16562881, 23556097, 26500609, 47112193, 70668289) (13269761, 17914177, 37321201, 59713921, 62202001, 82936001, 95542273)
2
2
77,459,646
2023-11-10
https://stackoverflow.com/questions/77459646/how-to-pivot-a-pandas-dataframe-and-calculate-product-of-combinations
I have a pandas dataframe that looks like this: import pandas as pd pd.DataFrame({ 'variable': ['gender','gender', 'age_group', 'age_group'], 'category': ['F','M', 'Young', 'Old'], 'value': [0.6, 0.4, 0.7, 0.3], }) variable category value 0 gender F 0.6 1 gender M 0.4 2 age_group Young 0.7 3 age_group Old 0.3 which represents my population. So in my population I have 60% Females, 70% Young etc. I want to calculate the combinations of gender-age_group and output it in a dataframe that looks like this: pd.DataFrame({ 'gender': ['F', 'F', 'M', 'M'], 'age_group': ['Young', 'Old', 'Young', 'Old'], 'percentage': [0.42, 0.18, 0.28, 0.12] }) gender age_group percentage 0 F Young 0.42 1 F Old 0.18 2 M Young 0.28 3 M Old 0.12 which will show that the Young Females in the population are 42% (which comes from 0.6*0.7), the Old Males are 12% (which comes from 0.4*0.3) etc How could I do that ?
You can split by variable using groupby and compute the combinations with a cross-merge: from functools import reduce group = df.groupby('variable', sort=False) out = reduce(lambda a,b: pd.merge(a, b, how='cross'), (g.rename(columns={'category': k}) .drop(columns='variable') for k, g in group) ) out['percentage'] = (x:=out.filter(like='value')).prod(axis=1) out = out.drop(columns=list(x)) Output: gender age_group percentage 0 F Young 0.42 1 F Old 0.18 2 M Young 0.28 3 M Old 0.12 Example with more combinations: # input variable category value 0 gender F 0.60 1 gender M 0.40 2 age_group Young 0.70 3 age_group Old 0.30 4 other a 0.45 5 other b 0.55 # output gender age_group other percentage 0 F Young a 0.189 1 F Young b 0.231 2 F Old a 0.081 3 F Old b 0.099 4 M Young a 0.126 5 M Young b 0.154 6 M Old a 0.054 7 M Old b 0.066
2
3
77,458,463
2023-11-10
https://stackoverflow.com/questions/77458463/canot-slice-index-unicode-strings-with-underscores
I have this Unicode string:
my_string = "₁ᴀaΜ²aΜ²Μ²"
How can I index and slice it to make other Unicode strings? If I run
print([x for x in my_string])
I get
['₁', 'α΄€', 'a', 'Μ²', 'a', 'Μ²', 'Μ²']
when I expected
['₁', 'α΄€', 'aΜ²', 'aΜ²Μ²']
Likewise, my_string[3] prints 'Μ²' when I expected 'aΜ²Μ²'.
I tried to define my_string = u"₁ᴀaΜ²aΜ²Μ²" but the u is automatically deleted by vscode.
I need my_string as a buffer to compose other strings according to its human-readable index.
You could use the \X regex and findall from the regex module: import regex out = regex.findall(r'\X', my_string) Output: ['₁', 'α΄€', 'aΜ²', 'aΜ²Μ²']
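With that list of grapheme clusters in hand, indexing and slicing work on the list and can be joined back into strings; a small usage sketch (my addition), reusing out from above:
print(out[2])             # 'aΜ²'
print(''.join(out[1:3]))  # 'α΄€aΜ²'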
2
3
77,455,738
2023-11-9
https://stackoverflow.com/questions/77455738/finding-the-nouns-in-a-sentence-given-the-context-in-python
How to find the nouns in a sentence regarding the context? I am using the nltk library as follows: text = 'I bought a vintage car.' text = nltk.word_tokenize(text) result = nltk.pos_tag(text) result = [i for i in result if i[1] == 'NN'] #result = [('vintage', 'NN'), ('car', 'NN')] The problem with this script is that it considers vintage as a noun, which can be true, but given the context, it is an adjective. How can we achieve this task? Appendix: Using textblob, we get "vintage car" as the noun: !python -m textblob.download_corpora from textblob import TextBlob txt = "I bought a vintage car." blob = TextBlob(txt) print(blob.noun_phrases) #['vintage car']
Using spacy might solve your task. Try this: import spacy nlp = spacy.load("en_core_web_lg") def analyze(text): doc = nlp(text) for token in doc: print(token.text, token.pos_) analyze("I bought a vintage car.") print() analyze("This old wine is a vintage.") Output I PRON bought VERB a DET vintage ADJ <- correctly identified as adjective car NOUN . PUNCT This DET old ADJ wine NOUN is AUX a DET vintage NOUN <- correctly identified as noun . PUNCT
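If you only need the nouns themselves rather than the full token dump, a small helper on top of the same nlp pipeline works; this is my own sketch, not part of the answer above:
def get_nouns(text):
    doc = nlp(text)
    return [token.text for token in doc if token.pos_ in ("NOUN", "PROPN")]

print(get_nouns("I bought a vintage car."))      # ['car']
print(get_nouns("This old wine is a vintage."))  # ['wine', 'vintage']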
3
2
77,455,334
2023-11-9
https://stackoverflow.com/questions/77455334/how-would-i-implement-idxmax-with-random-tiebreaking-on-a-dataframe
If I have a dataframe like this: id col1 col2 idxmax 1 3.0 4.0 col2 2 5.0 5.0 tiebreak 3 6.0 9.0 col 2 In the case of my example dataframe I'd like to return either col1 or col2 based on whichever name wins the tie. Not including the row ID. At the moment the df.idxmax(axis = 1) function just returns the column name of the column with first max value, as per the documentation. However, to ensure elimination of bias, I'd like to turn this into a random tie break but I genuinely have no idea how to do this. Could you please help?
I like @Timeless' approach with random sampling, the issue is that it will always use the same tie-breaker for different rows that have the same combination of equal maxes. An alternative would be to first stack the data: df['idxmax'] = (df .drop(columns=['id', 'idxmax'], errors='ignore') .stack() .sample(frac=1) .groupby(level=0).idxmax().str[1] ) Alternatively: cols = df.columns.difference(['id', 'idxmax']) m = df[cols].eq(df[cols].max(axis=1), axis=0) df['idxmax'] = (m[m].stack().reset_index(1) .groupby(level=0)['level_1'].sample(n=1) ) Example output: id col1 col2 idxmax 0 1 3.0 4.0 col2 1 2 5.0 5.0 col2 2 3 6.0 9.0 col2
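A NumPy-based variant of the same idea (my own addition, assuming the same id/col1/col2 layout as the example): draw uniform noise only at the positions that tie for the row maximum and take the argmax of the noise, so each row's tie is broken independently at random:
import numpy as np

cols = df.columns.difference(['id', 'idxmax'])
vals = df[cols].to_numpy()
is_max = vals == vals.max(axis=1, keepdims=True)
noise = np.random.rand(*vals.shape) * is_max   # zero wherever the value is not a row max
df['idxmax'] = cols[noise.argmax(axis=1)]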
3
1
77,455,158
2023-11-9
https://stackoverflow.com/questions/77455158/how-do-i-label-features-in-an-array-by-their-size
I have a 2D boolean numpy array, mask: array([[False, False, False, True, True, False, False, False], [ True, True, True, False, True, False, False, False], [False, False, True, False, False, True, False, True], [ True, False, False, False, True, True, False, False]]) mask was generated by: np.random.seed(43210) mask = (np.random.rand(4,8)>0.7) I visualize mask via: plt.pcolormesh(mask) plt.gca().invert_yaxis() plt.gca().set_aspect('equal') Result: I use scipy.ndimage.label to label the features, ie sections of neighbouring True elements in the array. label, num_features = scipy.ndimage.label(mask) label is then: array([[0, 0, 0, 1, 1, 0, 0, 0], [2, 2, 2, 0, 1, 0, 0, 0], [0, 0, 2, 0, 0, 3, 0, 4], [5, 0, 0, 0, 3, 3, 0, 0]], dtype=int32) visualization: However, I would like to have an array where the features are marked by an number showing the size of the feature. I achieve this by: newlabel = np.zeros(label.shape) for i in range(1,num_features+1): # works but very slow newlabel[label==i]=sum((label==i).flatten()) newlabel is then: array([[0., 0., 0., 3., 3., 0., 0., 0.], [4., 4., 4., 0., 3., 0., 0., 0.], [0., 0., 4., 0., 0., 3., 0., 1.], [1., 0., 0., 0., 3., 3., 0., 0.]]) visualization: This result above (the newlabel array) is correct, this is what I want. The features with only 1 pixel are marked by 1. (blue squares in the visualization). Features with 3 pixels are marked by 3. (green shapes on plot), while the feature with 4 pixels are marked by 4. in newlabel (yellow shape on plot). The problem with this approach is that the for loop takes a long time when mask is big. Testing with a 100 times larger mask: import time np.random.seed(43210) mask = (np.random.rand(40,80)>0.7) t0 = time.time() label, num_features = scipy.ndimage.label(mask) t1 = time.time() newlabel = np.zeros(label.shape) for i in range(1,num_features+1): newlabel[label==i]=sum((label==i).flatten()) t2 = time.time() print(f"Initial labelling takes: {t1-t0} seconds.") print(f"Relabelling by feature size takes: {t2-t1} seconds.") print(f"Relabelling takes {(t2-t1)/(t1-t0)} times as much time as original labelling.") Output: Initial labelling takes: 0.00052642822265625 seconds. Relabelling by feature size takes: 0.3239290714263916 seconds. Relabelling takes 615.333786231884 times as much time as original labelling. This makes my solution unviable on real world examples. How can I label the features by their size faster?
You could use numpy.unique: n, idx, cnt = np.unique(label, return_inverse=True, return_counts=True) n2, idx2 = np.unique(cnt, return_inverse=True) out = np.where(mask, n2[idx2][idx].reshape(mask.shape), 0) Output: array([[0, 0, 0, 3, 3, 0, 0, 0], [4, 4, 4, 0, 3, 0, 0, 0], [0, 0, 4, 0, 0, 3, 0, 1], [1, 0, 0, 0, 3, 3, 0, 0]])
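np.bincount offers an equivalent and arguably more direct route; this is my own alternative sketch, not part of the answer above:
sizes = np.bincount(label.ravel())          # sizes[i] == number of pixels carrying label i
newlabel = np.where(mask, sizes[label], 0)  # map each pixel to its feature size, keep background at 0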
2
4
77,454,771
2023-11-9
https://stackoverflow.com/questions/77454771/create-an-nxm-matrix-a-to-an-nxmxl-matrix-b-where-bi-j-kronecker-deltaai
Is there a way to convert a NxM matrix A where all values of A are positive integers to an NxMxL matrix B where L = 1 + max(A) B[i,j,k] = {1 if k==A[i,j] and 0 otherwise} using loops I have done the following: B = np.zeros((A.shape[0],A.shape[1],1+np.amax(A))) for i in range(A.shape[0]): for j in range(A.shape[1]): B[i,j,A[i,j]] = 1 the solution ideally avoids any use of for loops and uses only slicing or indexing or numpy functions
A sample A: In [231]: A = np.array([1,0,3,2,2,4]).reshape(2,3) In [232]: A Out[232]: array([[1, 0, 3], [2, 2, 4]]) Your code and B: In [233]: B = np.zeros((A.shape[0],A.shape[1],1+np.amax(A))) ...: for i in range(A.shape[0]): ...: for j in range(A.shape[1]): ...: B[i,j,A[i,j]] = 1 ...: In [234]: B Out[234]: array([[[0., 1., 0., 0., 0.], [1., 0., 0., 0., 0.], [0., 0., 0., 1., 0.]], [[0., 0., 1., 0., 0.], [0., 0., 1., 0., 0.], [0., 0., 0., 0., 1.]]]) Defining arrays to index B in way that broadcasts with A: In [235]: I,J=np.ix_(np.arange(2),np.arange(3)) In [236]: B[I,J,A] Out[236]: array([[1., 1., 1.], [1., 1., 1.]]) Use that indexing to change all the 1s of B to 20: In [237]: B[I,J,A]=20 In [238]: B Out[238]: array([[[ 0., 20., 0., 0., 0.], [20., 0., 0., 0., 0.], [ 0., 0., 0., 20., 0.]], [[ 0., 0., 20., 0., 0.], [ 0., 0., 20., 0., 0.], [ 0., 0., 0., 0., 20.]]]) the indexes (2,1) and (1,3) pair with (2,3): In [239]: I,J Out[239]: (array([[0], [1]]), array([[0, 1, 2]])) There's also newer pair of functions that do that same thing. I'm more familiar with the earlier method In [241]: np.take_along_axis(B,A[:,:,None],2) Out[241]: array([[[20.], [20.], [20.]], [[20.], [20.], [20.]]]) In [243]: np.put_along_axis(B,A[:,:,None],1,axis=2) In [244]: B Out[244]: array([[[0., 1., 0., 0., 0.], [1., 0., 0., 0., 0.], [0., 0., 0., 1., 0.]], [[0., 0., 1., 0., 0.], [0., 0., 1., 0., 0.], [0., 0., 0., 0., 1.]]])
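For the one-hot construction itself, broadcasting A against an arange of the possible values also avoids the Python loops; a short sketch I've added, using the same A as above:
B = (A[..., None] == np.arange(A.max() + 1)).astype(float)
# B[i, j, k] is 1.0 exactly when k == A[i, j], matching the loop version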
2
2
77,451,661
2023-11-9
https://stackoverflow.com/questions/77451661/how-can-i-add-the-x-or-y-value-from-a-line-above-to-the-line-that-is-missing
I have a .csv file that has this structure: X310.433,Y9.6 X310.54,Y10 X143.52 X144.77 When there is no "X" or "Y" value in a line, I want to take the value from the line above and copy it into the line that is missing it. For this example, copy the Y10 into the next line and separate it with a comma. How can I do this with Python?
Without any utility modules you could do this: Let's assume that the file content is: X310.433,Y9.6 Y999 X310.54,Y10 X143.52 X144.77 ...then... lines: list[tuple[str, str]] = [] with open("foo.csv") as foo: for line in map(str.strip, foo): if line: a, *b = line.split(",") if a[0] == "X": if b: lines.append((a, b[0])) else: lines.append((a, lines[-1][1])) else: assert a[0] == "Y" if b: lines.append((b[0], a)) else: lines.append((lines[-1][0], a)) for line in lines: print(",".join(line)) Output: X310.433,Y9.6 X310.433,Y999 X310.54,Y10 X143.52,Y10 X144.77,Y10 Note: If the first line of the file contains either one of X or Y (but not both) this will fail EDIT: More robust version that rewrites the original file: with open("foo.csv", "r+") as foo: lines: list[tuple[str, str]] = [] for line in map(str.strip, foo): if line: a, *b = line.split(",") if a.startswith("X"): y = b[0] if b else lines[-1][1] lines.append((a, y)) elif a.startswith("Y"): x = b[0] if b else lines[-1][0] lines.append((x, a)) foo.seek(0) for line in lines: print(",".join(line), file=foo) foo.truncate()
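If pandas is already available, forward-filling expresses the "copy the missing value from the line above" rule directly; this is my own sketch (same foo.csv name as above), and like the first version it assumes the very first data line supplies both values:
import pandas as pd

records = []
with open("foo.csv") as f:
    for line in map(str.strip, f):
        if line:
            records.append({part[0]: part for part in line.split(",")})  # e.g. {'X': 'X143.52'}

df = pd.DataFrame(records).ffill()   # fill missing X/Y from the row above
df.to_csv("out.csv", index=False, header=False, columns=["X", "Y"])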
3
1
77,448,073
2023-11-8
https://stackoverflow.com/questions/77448073/how-can-i-check-if-all-given-points-in-space-lie-on-the-same-line
I need to implement a function that takes coordinates of any number of points as input data and return True or False depending on whether these points lie on the same line or not. I use Python to solve this problem and now I have the following implementation: def are_colinear(points, tolerance): # variable "points" is a list of lists with points coordinates for i in range(2, len(points)): if (points[i][0] - points[i-2][0])/(points[i-1][0] - points[i-2][0]) - (points[i][1] - points[i-2][1])/(points[i-1][1] - points[i-2][1]) < tolerance and \ (points[i][1] - points[i-2][1])/(points[i-1][1] - points[i-2][1]) - (points[i][2] - points[i-2][2])/(points[i-1][2] - points[i-2][2]) < tolerance: continue else: return False return True This method is based on the equation of a line passing through two points: The weakness of this approach is that it raises an error if you want to check points belonging to the same plane (one of three coordinates always equals to zero in this case and one of denominators is zero because of this). I need a better implementation. Thank you in advance!
Take any one of your coordinates, take it to be your new origin, translating all coordinates accordingly. Now, treat each coordinate as a position vector. Normalize each vector. Now, if any two vectors are parallel, their dot product is 1. In fact, they are the same vector. If two vectors are antiparallel, their dot product is -1 and one vector is the negation of the other. When you expand it all out, you'll find that you don't need to do any divisions, can avoid any square roots, and don't have special edge cases to handle.
1 ?= abs(dot(norm(u), norm(v))) = abs(dot(u, v)) / (mag(u) * mag(v))
Squaring both sides to drop the abs and the square roots:
1 = (ux*vx + uy*vy + uz*vz)^2 / (sqrt(ux^2 + uy^2 + uz^2) * sqrt(vx^2 + vy^2 + vz^2))^2
1 = (ux*vx + uy*vy + uz*vz)^2 / ((ux^2 + uy^2 + uz^2) * (vx^2 + vy^2 + vz^2))
(ux^2 + uy^2 + uz^2) * (vx^2 + vy^2 + vz^2) = (ux*vx + uy*vy + uz*vz)^2
This is pretty easy to code up:
def are_parallel(points):
    points = set(map(tuple, points))  # dedupe (tuples, so list-of-lists input also works)
    if len(points) < 3:
        return True

    xo, yo, zo = points.pop()  # extract origin
    translated_points = [(x-xo, y-yo, z-zo) for x, y, z in points]

    ux, uy, uz = translated_points[0]  # first/reference vector
    u_mag_sqr = ux**2 + uy**2 + uz**2

    for vx, vy, vz in translated_points[1:]:  # rest of vectors
        v_mag_sqr = vx**2 + vy**2 + vz**2
        uv_dot_sqr = (ux*vx + uy*vy + uz*vz)**2

        if u_mag_sqr * v_mag_sqr != uv_dot_sqr:
            return False

    return True
Again, it's worth emphasizing that this avoids division, square roots, or anything that would introduce floating-point comparisons to what could otherwise be integer coordinates; it's faster because it's just multiplications and additions, and it doesn't have weird edge cases around specific classes of coordinates.
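A quick usage check (my addition):
print(are_parallel([(0, 0, 0), (1, 2, 3), (2, 4, 6), (3, 6, 9)]))  # True
print(are_parallel([(0, 0, 0), (1, 2, 3), (2, 4, 6), (1, 1, 0)]))  # False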
2
3
77,447,360
2023-11-8
https://stackoverflow.com/questions/77447360/import-from-typing-within-type-checking-block
Does it make sense to import from typing inside a TYPE_CHECKING block? Is this good/bad or does it even matter? from __future__ import annotations from typing import TYPE_CHECKING, Protocol, runtime_checkable if TYPE_CHECKING: from typing import Any, Callable, Generator
Since typing is a standard-library module and you are already importing it to use TYPE_CHECKING anyway, the answer is no, it does not make much sense. Also, it will only work if all of the usages of the imported names are within quotes (for lazy evaluation). Otherwise you will get a NameError when the code runs:
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from typing import List

li: List = []
results in NameError: name 'List' is not defined, as opposed to
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from typing import List

li: 'List' = []
which works.
2
2
77,443,428
2023-11-8
https://stackoverflow.com/questions/77443428/how-can-i-check-if-an-instance-of-a-class-exists-in-a-list-in-python-3-according
In Python 3, I have a list (my_array) that contains an instance of the Demo class with a certain attribute set on that instance. In my case, the attribute is value: int = 4. Given an instance of the Demo class, how can I determine if that instance already exists in my_array with the same properties. Here's a MCRE of my issue: from typing import List class Demo: def __init__(self, value: int): self.value = value my_array: List[Demo] = [Demo(4)] print(Demo(4) in my_array) # prints False However, Demo(4) clearly already exists in the list, despite that it prints False. I looked through the Similar Questions, but they were unrelated. How can I check if that already exists in the list?
You need to add an equality function to Demo: def __eq__(self, other): return isinstance(other, Demo) and self.value == other.value P.S, since Python 3.9 you don't need to import List and just use: my_array: list[Demo] = [Demo(4)]
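Two side notes (my additions, not from the answer above): defining __eq__ without also defining __hash__ sets __hash__ to None, so Demo instances become unhashable and can no longer be used in sets or as dict keys; and a dataclass will generate an equivalent __eq__ for you:
from dataclasses import dataclass

@dataclass
class Demo:
    value: int

my_array: list[Demo] = [Demo(4)]
print(Demo(4) in my_array)  # True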
2
2
77,401,730
2023-11-1
https://stackoverflow.com/questions/77401730/modulenotfounderror-no-module-named-imp
I need to install the eb command on windows. I would like to try to deploy an application on AWS using the elasticbeanstalk service, and through this command you can configure and deploy an environment directly with a configuration file. To do this I followed the guide. I first installed python via the site (Python version 3.12.0), and then all the steps described in the guide link. Now if I run the eb command from cmd I always get this error. Traceback (most recent call last): File "<frozen runpy>", line 198, in _run_module_as_main File "<frozen runpy>", line 88, in _run_code File "C:\Users\Utente\.ebcli-virtual-env\Scripts\eb.exe\__main__.py", line 4, in <module> File "C:\Users\Utente\.ebcli-virtual-env\Lib\site-packages\ebcli\core\ebcore.py", line 16, in <module> from cement.core import foundation, handler, hook File "C:\Users\Utente\.ebcli-virtual-env\Lib\site-packages\cement\core\foundation.py", line 11, in <module> from ..core import output, extension, arg, controller, meta, cache, mail File "C:\Users\Utente\.ebcli-virtual-env\Lib\site-packages\cement\core\extension.py", line 8, in <module> from imp import reload # pragma: no cover ^^^^^^^^^^^^^^^^^^^^^^ ModuleNotFoundError: No module named 'imp' I've tried several things but can't come to a conclusion. Does anyone know how to help me? I also tried installing previous versions of python, even though I didn't like it as a solution, but still I still have the problem.
I encountered this as well. As far as I understand, it's a deprecation issue. awsebcli will install with Python 3.12, but imp will not. If you type import imp into Python 3.11 you will get the following response:
DeprecationWarning: the imp module is deprecated in favour of importlib and slated for removal in Python 3.12; see the module's documentation for alternative uses
At the time of writing this, Elastic Beanstalk only supports Python 3.8, 3.9 and 3.11: https://docs.aws.amazon.com/elasticbeanstalk/latest/platforms/platforms-supported.html#platforms-supported.python
24
26
77,418,896
2023-11-3
https://stackoverflow.com/questions/77418896/attributeerror-grouperview-object-has-no-attribute-join
I'm trying to reproduce this answer but getting the following error: AttributeError: 'GrouperView' object has no attribute 'join'
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[283], line 7
      4 flights = flights.pivot("month", "year", "passengers")
      5 f,(ax1,ax2,ax3, axcb) = plt.subplots(1,4,
      6             gridspec_kw={'width_ratios':[1,1,1,0.08]})
----> 7 ax1.get_shared_y_axes().join(ax2,ax3)
      8 g1 = sns.heatmap(flights,cmap="YlGnBu",cbar=False,ax=ax1)
      9 g1.set_ylabel('')

AttributeError: 'GrouperView' object has no attribute 'join'
Also, the seaborn and matplotlib versions are as below:
print(sns.__version__)  # 0.13.0
import matplotlib
print('matplotlib: {}'.format(matplotlib.__version__))  # matplotlib: 3.8.1
I checked some workarounds but couldn't solve the problem, since ax2, ax3 are not actual strings; I am getting an error that 'list' object has no attribute 'join'. Even downgrading seaborn, based on this thread, didn't solve the problem.
As proposed in the 3.6 API changes, (and repeated in the 3.8 API changes), use Axes.sharey. ax2.sharey(ax1) ax3.sharey(ax1) Since an Axes can only sharey with one other Axes, I'm unaware of an alternative that lets ax1 share with both ax2 and ax3 in one step.
2
4
77,433,576
2023-11-6
https://stackoverflow.com/questions/77433576/how-to-apply-rolling-map-in-python-polars-for-a-function-that-uses-multiple-in
I have a function using Polars Expressions to calculate the standard deviation of the residuals from a linear regression (courtesy of this post). Now I would like to apply this function using a rolling window over a dataframe. My approaches below fail because I don't know how to pass two columns as arguments to the function, since rolling_map() applies to an Expr. Is there a way to do this directly in Polars, or do I need to use a workaround with Pandas? Thank you for your support! (feels like I'm missing something obvious here...) import polars as pl def ols_residuals_std(x: pl.Expr, y: pl.Expr) -> pl.Expr: # Calculate linear regression residuals and return the standard deviation thereof x_center = x - x.mean() y_center = y - y.mean() beta = x_center.dot(y_center) / x_center.pow(2).sum() e = y_center - beta * x_center return e.std() df = pl.DataFrame({'a': [45, 76, 4, 88, 66, 5, 24, 72, 93, 87, 23, 40], 'b': [77, 11, 56, 43, 61, 25, 63, 7, 66, 17, 64, 75]}) # Applying the function over the full length - works df = df.with_columns(ols_residuals_std(pl.col('a'), pl.col('b')).alias('e_std')) df.with_columns(pl.col('a').rolling_map(ols_residuals_std(pl.col('a'), pl.col('b')), window_size=4, min_periods=1).alias('e_std_win')) # PanicException: python function failed: PyErr { type: <class 'TypeError'>, value: TypeError("'Expr' object is not callable"), traceback: None } df.with_columns(pl.col('a', 'b').rolling_map(ols_residuals_std(), window_size=4, min_periods=1).alias('e_std_win')) # TypeError: ols_residuals_std() missing 2 required positional arguments: 'x' and 'y'
One thing to note about rolling_map is that it is meant for a custom function. While your expression is defined with a function, that isn't what it means here: it expects a plain Python function which takes in values and outputs a value. This is also hinted at by the name containing map, which parallels map_elements and map_batches. Additionally, there's a warning that it will be extremely slow, which also hints at its intended use.
To get what you want, you can use rolling, which unfortunately doesn't infer an index column, so you have to create one manually.
(
    df
    .with_row_index('i')
    .with_columns(
        ols_residuals_std(pl.col('a'), pl.col('b'))
        .rolling('i',period='4i').alias('e_std_win')
    )
    .drop('i')
)
shape: (12, 3)
β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ a   ┆ b   ┆ e_std_win β”‚
β”‚ --- ┆ --- ┆ ---       β”‚
β”‚ i64 ┆ i64 ┆ f64       β”‚
β•žβ•β•β•β•β•β•ͺ═════β•ͺ═══════════║
β”‚ 45  ┆ 77  ┆ 0.0       β”‚
β”‚ 76  ┆ 11  ┆ 0.0       β”‚
β”‚ 4   ┆ 56  ┆ 26.832826 β”‚
β”‚ 88  ┆ 43  ┆ 23.440663 β”‚
β”‚ …   ┆ …   ┆ …         β”‚
β”‚ 93  ┆ 66  ┆ 28.72105  β”‚
β”‚ 87  ┆ 17  ┆ 28.981351 β”‚
β”‚ 23  ┆ 64  ┆ 29.063269 β”‚
β”‚ 40  ┆ 75  ┆ 22.362099 β”‚
β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
4
4
77,416,106
2023-11-3
https://stackoverflow.com/questions/77416106/how-do-i-wrap-a-byte-string-in-a-bytesio-object-using-python
I'm writing a script with the Pandas library that involves reading the contents of an excel file. The line currently looks like this: test = pd.read_excel(archive_contents['spreadsheet.xlsx']) The script works as intended with no issues, but I get a future warning depicting the following: FutureWarning: Passing bytes to 'read_excel' is deprecated and will be removed in a future version. To read from a byte string, wrap it in a `BytesIO` object. test = pd.read_excel(archive_contents['spreadsheet.xlsx']) In the interest of future proofing my code, how would I go about doing that?
As user459827 has commented, this will do the trick: from io import BytesIO test = pd.read_excel(BytesIO(archive_contents['spreadsheet.xlsx']))
3
3
77,404,746
2023-11-1
https://stackoverflow.com/questions/77404746/cors-policy-error-on-second-render-of-react-app-from-fastapi-backend
I am working on a React frontend to chart some data from a fastapi backend. I am using a couple of dropdown components to change the month and year for the requested data. With the initial render the fetch request works fine and returns the data and the charts display. Once I change the dropdowns, I get the following CORS Policy Error in the browser console. Access to fetch at 'https://fake-url.com/endpoint/' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled. React code snippet with the fetch call: const [month, setMonth] = useState(1); const [year, setYear] = useState('2023'); const [revenueData, setRevenueData] = useState({}); useEffect(() => { const inputs = { "month": month, "year": year, "duration": 1 } const myHeaders = new Headers(); myHeaders.append("X-API-KEY", "fake-api-key"); myHeaders.append("Content-Type", "application/json"); const requestOptions = { method: 'POST', headers: myHeaders, body: JSON.stringify(inputs), redirect: 'follow' }; fetch("https://fake-url.com/endpoint/", requestOptions) .then(response => response.json()) .then(data => { setRevenueData((data)) }).catch(error => { console.log('error', error) }); }, [month, year]); I confirmed that I am using CORSMiddleware in fastapi with the following settings: app.add_middleware(HTTPSRedirectMiddleware) app.add_middleware( CORSMiddleware, allow_origins=['*'], allow_methods=['*'], allow_headers=['*'] ) I also confirmed that the backend is returning access-control headers for preflight with an options request in postman as shown: UPDATE The network panel shows that the second request preflight is successful but ultimately fails in an Internal Server Error. Which lead me to: CORS and Internal Server Error responses
CORS headers are not added when the request ends in an error, that is, when a response is returned with a status code such as 4xx or 5xx. As shown in the screenshot you provided, when calling the /dashboard_data API endpoint for the third time, the server responds with 500 Internal Server Error response code, indicating that the server encountered an unexpected condition that prevented it from fulfilling the request. Hence, in that case, the server won't include the appropriate Access-Control-Allow-Headers / Access-Control-Allow-Origin headers, and the browser will report a CORS error in the devtools console. Thus, please check what causes the error on server side in the first place and try to fix it (the console running the server should give you insights on that). Related answers to the concept of CORS issues that might prove helpful to you and future readers can be found here, as well as here and here.
4
5
77,410,600
2023-11-2
https://stackoverflow.com/questions/77410600/is-opentelemetry-in-python-safe-to-use-with-async
I want to use OpenTelemetry with an Async application, and I want to be 101% sure that it will work as intended. Specifically, I'm worried about what happens with the current_span when we switch back and forth between asynchronous functions. I have this fear that if I rely on tracer.start_as_current_span to set the span in each function, and then I pass execution to another function which also sets the current_span, then when execution passes back to the first function it won't be tied to the correct span anymore. Now, I have tried testing this a bit, and found no evidence that it breaks. But I also haven't found any documentation that says it explicitly won't break. Can anyone confirm? I've done the following basic test, but I'm worried it misses something: async def async_span(): with tracer.start_as_current_span(name=f"span_{uuid.uuid4}") as span: for x in range(1000): assert trace.get_current_span() == span await asyncio.sleep(0.0001 * randint(1, 10)) async def main(): with tracer.start_as_current_span(name="parent"): await asyncio.gather(*(async_span() for _ in range(10)))
OpenTelemetry for Python supports asynchronous code. I went through the code for version 1.24.0/0.45b0 of opentelemetry-python. The code contains the abstract context class _RuntimeContext. _RuntimeContext has a single implementation, ContextVarsRuntimeContext, which utilizes contextvars and is used as the default context.
5
3
77,406,316
2023-11-2
https://stackoverflow.com/questions/77406316/how-do-you-safely-pass-values-to-sqlite-pragma-statements-in-python
I'm currently writing an application in Python that stores its data in a SQLite database. I want the database file to be stored encrypted on disk, and I found the most common solution for doing this to be SQLCipher. I added sqlcipher3 to my project to provide the DB-API, and got started. With SQLCipher, the database encryption key is provided in the form of a PRAGMA statement which must be provided before the first operation on the database is executed. PRAGMA key='hunter2'; -- like this When my program runs, it prompts the user for the database password. My concern is that since this is a source of user input, it's potentially vulnerable to SQL injection. For example, a naive way to provide the key might look something like this: from getpass import getpass import sqlcipher3 con = sqlcipher3.connect(':memory:') cur = con.cursor() password = getpass('Password: ') cur.execute(f"PRAGMA key='{password}';") ### do stuff with the unencrypted database here If someone was to enter something like "hunter2'; DROP TABLE secrets;--" into the password prompt, the resulting SQL statement would look like this after substitution: PRAGMA key='hunter2'; DROP TABLE secrets;--'; Typically, the solution to this problem is to use the DB-API's parameter substitution. From the sqlite3 documentation: An SQL statement may use one of two kinds of placeholders: question marks (qmark style) or named placeholders (named style). For the qmark style, parameters must be a sequence whose length must match the number of placeholders, or a ProgrammingError is raised. For the named style, parameters must be an instance of a dict (or a subclass), which must contain keys for all named parameters; any extra items are ignored. Here’s an example of both styles: con = sqlite3.connect(":memory:") cur = con.execute("CREATE TABLE lang(name, first_appeared)") # This is the named style used with executemany(): data = ( {"name": "C", "year": 1972}, {"name": "Fortran", "year": 1957}, {"name": "Python", "year": 1991}, {"name": "Go", "year": 2009}, ) cur.executemany("INSERT INTO lang VALUES(:name, :year)", data) # This is the qmark style used in a SELECT query: params = (1972,) cur.execute("SELECT * FROM lang WHERE first_appeared = ?", params) print(cur.fetchall()) This works as expected in the sample code from the docs, but when using placeholders in a PRAGMA statement, we get an OperationalError telling us there's a syntax error. This is the case for both types of parameter substitution. # these will both fail cur.execute('PRAGMA key=?;', (password,)) cur.execute('PRAGMA key=:pass;', {'pass': password}) I'm not sure where to go from here. If we actually enter our malicious string at the password prompt, it won't work, producing the following error: Traceback (most recent call last): File "<stdin>", line 1, in <module> sqlcipher3.ProgrammingError: You can only execute one statement at a time. So is the "naive" code from earlier safe? I'm not confident saying the answer is "yes" just because the one malicious string I could come up with didn't work, but there doesn't seem to be a better way of doing this. The answers to the only other person on here asking this question that I could find suggested equivalent solutions (python + sqlite insert variable into PRAGMA statement). I'd also rather not use an ORM, especially if it's just for this one case. Any suggestions would be appreciated, thanks.
According to the accepted answer to "Python sqlite3 string variable in execute", there are limitations on where DB-API substitutions can be used: Parameter markers can be used only for expressions, i.e., values. You cannot use them for identifiers like table and column names. Seeing this, I figured that arguments to PRAGMA must fall into the same category as "table and column names". In fact, my specific use case was PRAGMA table_info, where the argument is a table name. On digging into it further, I found that Python's sqlite3 module relies on SQLite's own sqlite3_bind_* functions to do parameter substitutions. For example, here is the code for substituting string values. And I found further confirmation that substitution won't work for PRAGMA arguments. "But wait," I thought. "Sam's argument is a key, not a table name." Without digging even deeper, I can only conjecture that it doesn't matter, and SQLite (or SQLCipher) just doesn't allow binding values to PRAGMA statements. Maybe you can supply the key via SQLCipher's C API instead of through SQL? It doesn't fix my use case, but it might help with yours! For me, and for anyone else trying to programmatically provide a table name to PRAGMA table_info, I guess the official solution is to double- and triple-check that the variable cannot possibly contain user input, validate and escape it anyway just in case, cross fingers, toes, knees and nose, and do a string substitution! What could possibly go wrong…
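Since the only character that needs escaping inside a SQLite string literal is the single quote (which is escaped by doubling it), one pragmatic fallback is to quote the value yourself before splicing it into the PRAGMA. This is my own sketch, not something the sqlite3 or SQLCipher docs offer as a parameter-binding substitute:
def sqlite_quote(value: str) -> str:
    # SQLite string literals escape an embedded single quote by doubling it
    return "'" + value.replace("'", "''") + "'"

cur.execute(f"PRAGMA key={sqlite_quote(password)};")
If I remember the SQLCipher docs correctly, it also accepts a raw hex key of the form PRAGMA key = "x'...'" (64 hex characters), which sidesteps quoting entirely at the cost of doing your own key derivation.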
3
2
77,425,682
2023-11-5
https://stackoverflow.com/questions/77425682/what-is-the-point-of-usedforsecurity
The parameter usedforsecurity was added to every hash function in hashlib in Python 3.9. Changed in version 3.9: All hashlib constructors take a keyword-only argument usedforsecurity with default value True. A false value allows the use of insecure and blocked hashing algorithms in restricted environments. False indicates that the hashing algorithm is not used in a security context, e.g. as a non-cryptographic one-way compression function. However, this provides zero guidance on When you should use usedforsecurity When you should not use usedforsecurity What "restricted environments" are And while I'm not a security researcher, I darn well know md5 is not secure in any sense of the word. Consequently, the name usedforsecurity boggles my mind in more ways than one. What is the point of usedforsecurity?
TL;DR For almost everyone, ignore the flag, it has no effect whatsoever. The full story involves FIPS and how that gets exposed as a python API. For our purposes, FIPS is a standard that supposedly specifies a safe set of practices. In certain scenarios (e.g. writing software for US government agencies), you are forced to comply with FIPS. To comply with FIPS, your python would have had FIPS mode turned on by building python with FIPS enabled from source. This is the "restricted environment" mentioned in the documentation. If you have a standard python build, then you aren't complying with FIPS and the flag literally does nothing. One aspect of FIPS is restricting the hash functions you are allowed to use. In particular, MD5 is not allowed under FIPS. When you use MD5 in a FIPS environment, you will encounter an error. That is what the introduction of usedforsecurity is supposed to fix: give you an escape hatch in the case that you truly want to use MD5 in a FIPS environment. The parameter is designed to be specified at each call site so it can be audited on a case by case basis. There seems to be confusion on many sides that usedforsecurity has anything to do with security. It's not. Having it set to False doesn't reduce your security. On regular python builds, you use the exact same hash function regardless of usedforsecurity. On FIPS enabled environments, it does however switch your implementation of (allowed) hash functions between those that were explicitly certified† or not. In conclusion, for all intents and purposes, the parameter might as well have been called exceptionforfips because that's the singular purpose it serves: as an escape hatch if you happen to work under a FIPS environment and still need to use a FIPS non-compliant hash. It is quite unfortunate it is part of the API for all users with a seriously misleading name. † However, the certified version doesn't use a different algorithm, certification is very much bureaucratic in nature.
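In code, the flag is just a keyword-only argument on the constructors, so a typical non-security use looks like this (a minimal sketch; on a regular CPython build both calls behave identically, the flag only matters on FIPS-enabled builds):
import hashlib

a = hashlib.md5(b"some cache key", usedforsecurity=False).hexdigest()
b = hashlib.md5(b"some cache key").hexdigest()
assert a == b  # same algorithm, same digest on a standard build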
6
8
77,433,096
2023-11-6
https://stackoverflow.com/questions/77433096/notimplementederror-loading-a-dataset-cached-in-a-localfilesystem-is-not-suppor
I try to load a dataset using the datasets python module in my local Python Notebook. I am running a Python 3.10.13 kernel as I do for my virtual environment. I cannot load the datasets I am following from a tutorial. Here's the error: --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) /Users/ari/Downloads/00-fine-tuning.ipynb Celda 2 line 3 1 from datasets import load_dataset ----> 3 data = load_dataset( 4 "jamescalam/agent-conversations-retrieval-tool", 5 split="train" 6 ) 7 data File ~/Documents/fastapi_language_tutor/env/lib/python3.10/site-packages/datasets/load.py:2149, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 2145 # Build dataset for splits 2146 keep_in_memory = ( 2147 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 2148 ) -> 2149 ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory) 2150 # Rename and cast features to match task schema 2151 if task is not None: 2152 # To avoid issuing the same warning twice File ~/Documents/fastapi_language_tutor/env/lib/python3.10/site-packages/datasets/builder.py:1173, in DatasetBuilder.as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory) 1171 is_local = not is_remote_filesystem(self._fs) 1172 if not is_local: -> 1173 raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.") 1174 if not os.path.exists(self._output_dir): 1175 raise FileNotFoundError( 1176 f"Dataset {self.dataset_name}: could not find data in {self._output_dir}. Please make sure to call " 1177 "builder.download_and_prepare(), or use " 1178 "datasets.load_dataset() before trying to access the Dataset object." 1179 ) NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported. How do I resolve this? I don't understand how this error is applicable, given that the dataset is something I am fetching and thus cannot be cached in my LocalFileSystem in the first place.
Try doing:
pip install -U datasets
This error stems from a breaking change in fsspec. It has been fixed in the latest datasets release (2.14.6), so updating the installation with pip install -U datasets should fix the issue.
GitHub issue: https://github.com/huggingface/datasets/issues/6352
If you are using fsspec, then do:
pip install fsspec==2023.9.2
There is a problem with fsspec==2023.10.0.
GitHub issue: https://github.com/huggingface/datasets/issues/6330
Edit: Looks like it's broken again in 2.17 and 2.18; downgrading to 2.16 should work.
24
58
77,418,738
2023-11-3
https://stackoverflow.com/questions/77418738/python-pystray-update-menu-use-variable-text-for-item
I want to change the text for an item variably. to do this, i tried to update the menu using update_menu(). unfortunately, this didn't work and i couldn't find anything more detailed in the pystray documentation. I hope you can help me. thank you. from pystray import Icon as icon, Menu as menu, MenuItem as item import PIL.Image image = PIL.Image.new('RGB', (100, 100), 255) adapter = 'before' def test(): global ps global adapter adapter = 'after' ps.update_menu() ps = icon(name='name', icon=image, menu=menu( item(text='Adapter', action=menu(item(text='Test', action=test))), item(text=adapter, action=lambda: test()), ) ) ps.run()
I ran into a similar problem earlier when I wanted to update the submenu while it was running. I made a version for what you wanted to adjust:
def test(icon, this_item):
    global adapter
    adapter = 'after'

    global menu_items
    menu_items.pop()  # remove last element, here containing 'adapter'
    # add new item with updated adapter value
    menu_items.append(item(text=adapter, action=test))

    icon.update_menu()

menu_items = [
    item(text='Adapter', action=menu(item(text='Test', action=test))),
    item(text=adapter, action=test)
]

ps = icon(name='name', icon=image, menu=menu(lambda: menu_items))
ps.run()
The Menu takes (*items), so giving it a lambda that returns a list of items makes it possible to update the menu later. You have to alter the menu_items list to your needs and then call update. At the time I couldn't figure out how to adjust just the text (e.g. with a lambda) with that approach. I also found this example (#17); they changed the whole menu each time, but well...
UPDATE: Here's what you wanted, cleaner than my previous answer:
adapter = 'before'

def test(icon, this_item):
    global adapter
    adapter = 'after'
    icon.update_menu()

ps = icon(name='name', icon=image,
          menu=menu(
              item(text='Adapter', action=menu(item(text='Test', action=test))),
              item(lambda text: adapter, action=test)
          )
)
Found here.
3
6
77,433,139
2023-11-6
https://stackoverflow.com/questions/77433139/mask-r-cnn-load-weights-function-does-not-work-in-google-colab-with-tensorflow-c
I want to train a Mask R-CNN model in Google Colab using transfer learning. For that, I'm utilizing the coco.h5 dataset. I installed Mask R-CNN with !pip install mrcnn-colab. I noticed that the following code does not load the weights: model.load_weights(COCO_MODEL_PATH, by_name=True). The names are right and by_name=False results in the same problem. I can confirm this by checking with the following lines: from mrcnn import visualize visualize.display_weight_stats(model) This displays the same values both before and after loading (I just show the first 10 layers): I believe I've found the solution to this problem. It involves the following lines of code: import tensorflow.compat.v1 as tf tf.disable_v2_behavior() tf.compat.v1.get_default_graph() This solution is often recommended because Mask R-CNN actually requires TensorFlow 1.X, whereas the latest TensorFlow version is 2.X, and Colab doesn't support TensorFlow 1.X. Therefore, I used this solution, which unfortunately results in the load_weights function not working. I managed to adjust my code so that import tensorflow.compat.v1 is not necessary and used the modified model.py and utils.py code from https://github.com/ahmedfgad/Mask-RCNN-TF2/tree/master, which requires a Python version lower than 3.10 (the standard in Colab). For the Python downgrade, I used the following commands: !apt-get update -y !update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.7 1 !update-alternatives --config python3 !apt install python3-pip !apt install python3.7-distutils This resulted in the installation of another Python version, but I am unable to use it in Colab. Colab always defaults to using Python 3.10. This can be confirmed by running the following code: import sys print("User Current Version:-", sys.version) which results in the following output: User Current Version:- 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] Therefore, I created a new runtime in Colab with Python 3.7.6 as follows: !wget -O mini.sh https://repo.anaconda.com/miniconda/Miniconda3-py37_4.8.2-Linux-x86_64.sh !chmod +x mini.sh !bash ./mini.sh -b -f -p /usr/local !conda install -q -y jupyter !conda install -q -y google-colab -c conda-forge !python -m ipykernel install --name "py37" --user After switching to this runtime, I upgraded the Python version to 3.7.11, which I actually needed: !conda install python=3.7.11 -y With these adjustments, I can load the weights; however, I am limited to using the CPU. The reason for this limitation is that the CUDA version of Colab is not compatible with this Python version, and I was unable to achieve a downgrade. Additionally, the new runtime solution often necessitates frequent restart runtime actions, as it tends to freeze when I click the run button. So, regarding this problem, I have the following questions: How can I downgrade the CUDA version to 10.1? I've already tried various approaches, but I always come to the conclusion that it's not possible in Colab. Is it possible to force Colab to use a previously installed Python version? Is there an alternative to the import tensorflow.compat.v1 as tf code that allows loading the weights?
You can use this implementation which is built on top of the original Mask R-CNN repo to support TF2. This repository allows to train and test the Mask R-CNN model with TensorFlow 2.14.0, and Python 3.10.12. You can also use it on Google Colab (current colab environment also uses Python 3.10.12 and TF 2.14.0) and it's working without any issues on GPU. Please make sure your runtime is using the GPU: and then follow these exact steps: # Clone the repo !git clone https://github.com/z-mahmud22/Mask-RCNN_TF2.14.0.git maskrcnn # Change the runtime directory to the cloned repo import os os.chdir('/content/maskrcnn/') # Download pre-trained weights !wget https://github.com/matterport/Mask_RCNN/releases/download/v2.0/mask_rcnn_coco.h5 And then use this snippet to load weights into the Mask R-CNN model: import mrcnn import mrcnn.config import mrcnn.model # create a config file CLASS_NAMES = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush'] class SimpleConfig(mrcnn.config.Config): # Give the configuration a recognizable name NAME = "coco_inference" # set the number of GPUs to use along with the number of images per GPU GPU_COUNT = 1 IMAGES_PER_GPU = 1 # Number of classes = number of classes + 1 (+1 for the background). The background class is named BG NUM_CLASSES = len(CLASS_NAMES) # Initialize the Mask R-CNN model for inference and then load the weights. # This step builds the Keras model architecture. model = mrcnn.model.MaskRCNN(mode="inference", config=SimpleConfig(), model_dir=os.getcwd()) # Load the weights into the model model.load_weights(filepath="mask_rcnn_coco.h5", by_name=True) If you could properly follow all those steps, you should be able to load the pre-trained weights without any issue and verify the change in weights with: from mrcnn import visualize visualize.display_weight_stats(model) which prints out: # Showing the first 10 layers as done in the question WEIGHT NAME SHAPE MIN MAX STD conv1/kernel:0 (7, 7, 3, 64) -0.8616 +0.8451 +0.1315 conv1/bias:0 (64,) -0.0002 +0.0004 +0.0001 bn_conv1/gamma:0 (64,) +0.0835 +2.6411 +0.5091 bn_conv1/beta:0 (64,) -2.3931 +5.3610 +1.9781 bn_conv1/moving_mean:0 (64,) -173.0470 +116.3013 +44.5654 bn_conv1/moving_variance:0*** Overflow? 
(64,) +0.0000 +146335.3594 +21847.9668 res2a_branch2a/kernel:0 (1, 1, 64, 64) -0.6574 +0.3179 +0.0764 res2a_branch2a/bias:0 (64,) -0.0022 +0.0082 +0.0018 bn2a_branch2a/gamma:0 (64,) +0.2169 +1.8489 +0.4116 bn2a_branch2a/beta:0 (64,) -2.1180 +3.7332 +1.1786 Here's a snippet to visualize the predictions from the pre-trained Mask R-CNN: import cv2 import mrcnn.visualize # load the input image, convert it from BGR to RGB channel image = cv2.imread("test.jpg") image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) # Perform a forward pass of the network to obtain the results r = model.detect([image], verbose=0) # Get the results for the first image. r = r[0] # Visualize the detected objects. mrcnn.visualize.display_instances(image=image, boxes=r['rois'], masks=r['masks'], class_ids=r['class_ids'], class_names=CLASS_NAMES, scores=r['scores']) which yields:
2
4
77,434,087
2023-11-6
https://stackoverflow.com/questions/77434087/execute-gcp-cloud-run-job-with-environment-variable-override-using-python-client
I am trying to trigger a GCP Cloud Run job from a python script following the run_job documentation (https://cloud.google.com/python/docs/reference/run/latest/google.cloud.run_v2.services.jobs.JobsClient#google_cloud_run_v2_services_jobs_JobsClient_run_job). However, I'm getting errors that I haven't been able to debug. This is a job that already exists, but I need to overwrite an environment variable. Here's is my code: env_vars = [ run_v2.EnvVar(name="VAR_1", value="var_1_value"), run_v2.EnvVar(name="VAR_2", value="var_2_value"), ] # Set the env vars as container overrides container_overrides = run_v2.RunJobRequest.Overrides.ContainerOverride( name="myjobname", env=env_vars ) request_override = run_v2.RunJobRequest.Overrides( container_overrides=container_overrides ) # Initialize the request job_name = f"projects/myproject/locations/mylocation/jobs/myjob" request = run_v2.RunJobRequest( name=job_name, overrides=request_override ) # Make the request operation = client.run_job(request=request) logging.info("Waiting for operation to complete...") response = operation.result() logging.info(f"Operation result: {response}") And this is the error I'm getting: Traceback (most recent call last): File "/opt/python3.8/lib/python3.8/site-packages/airflow/decorators/base.py", line 220, in execute return_value = super().execute(context) File "/opt/python3.8/lib/python3.8/site-packages/airflow/operators/python.py", line 181, in execute return_value = self.execute_callable() File "/opt/python3.8/lib/python3.8/site-packages/airflow/operators/python.py", line 198, in execute_callable return self.python_callable(*self.op_args, **self.op_kwargs) File "/home/airflow/gcs/dags/etl.py", line 128, in run_acolite request = run_v2.RunJobRequest( File "/opt/python3.8/lib/python3.8/site-packages/proto/message.py", line 604, in __init__ super().__setattr__("_pb", self._meta.pb(**params)) TypeError: Message must be initialized with a dict: google.cloud.run.v2.RunJobRequest Thank you!
We recently started using the Cloud Run Jobs service in my workplace and I found myself needing to carry out the same task. A dictionary with the Override Specification is required. I've amended the Initialize request block as per your example. override_spec = { 'container_overrides': [ { 'env': [ {'name': 'VAR_1', 'value': 'var_1_value'} ] } ] } # Initialize the request job_name = f"projects/myproject/locations/mylocation/jobs/myjob" request = run_v2.RunJobRequest( name=job_name, overrides=override_spec ) Ref: https://cloud.google.com/python/docs/reference/run/latest/google.cloud.run_v2.types.RunJobRequest.Overrides
2
12
77,425,962
2023-11-5
https://stackoverflow.com/questions/77425962/how-to-compose-functions-through-purely-using-pythons-standard-library
Python's standard library is vast, and my intuition tells that there must be a way in it to accomplish this, but I just can't figure it out. This is purely for curiosity and learning purposes: I have two simple functions: def increment(x): return x + 1 def double(x): return x * 2 and I want to compose them into a new function double_and_increment. I could of course simply do that as such: double_and_increment = lambda x: increment(double(x)) but I could also do it in a more convoluted but perhaps more "ergonomically scalable" way: import functools double_and_increment = functools.partial(functools.reduce, lambda acc, f: f(acc), [double, increment]) Both of the above work fine: >>> double_and_increment(1) 3 Now, the question is, is there tooling in the standard library that would allow achieving the composition without any user-defined lambdas, regular functions, or classes. The first intuition is to replace the lambda acc, f: f(acc) definition in the functools.reduce call with operator.call, but that unfortunately takes the arguments in the reverse order: >>> (lambda acc, f: f(acc))(1, str) # What we want to replace. >>> '1' >>> import operator >>> operator.call(str, 1) # Incorrect argument order. >>> '1' I have a hunch that using functools.reduce is still the way to accomplish the composition, but for the life of me I can't figure out a way to get rid of the user-defined lambda. Few out-of-the-box methods that got me close: import functools, operator # Curried form, can't figure out how to uncurry. functools.partial(operator.methodcaller, '__call__')(1)(str) # The arguments needs to be in the middle of the expression, which does not work. operator.call(*reversed(operator.attrgetter('args')(functools.partial(functools.partial, operator.call)(1, str)))) Have looked through all the existing questions, but they are completely different and rely on using user-defined functions and/or lambdas.
As mentioned in the other answer of mine I don't agree that the test suite discovered by @AKX should be considered as part of the standard library per the OP's rules. As it turns out, while researching for an existing function to modify for my other answer, I found that there is this helper function _int_to_enum in the signal module that perfectly implements operator.call for a callable with a single argument, but with parameters reversed, exactly how the OP wants it, and is available since Python 3.5: def _int_to_enum(value, enum_klass): """Convert a numeric value to an IntEnum member. If it's not a known member, return the numeric value itself. """ try: return enum_klass(value) except ValueError: return value So we can simply repurpose/abuse it: from signal import _int_to_enum as rcall from functools import reduce, partial def increment(x): return x + 1 def double(x): return x * 2 double_and_increment = partial(reduce, rcall, [double, increment]) print(double_and_increment(1)) This outputs: 3 Demo: here
7
3
77,410,704
2023-11-2
https://stackoverflow.com/questions/77410704/pylance-not-working-autocomplete-for-dynamically-instantiated-classes
from typing import Literal, overload, TypeVar, Generic, Type import enum import abc import typing class Version(enum.Enum): Version1 = 1 Version2 = 2 Version3 = 3 import abc from typing import Type class Machine1BaseConfig: @abc.abstractmethod def __init__(self, *args, **kwargs) -> None: pass class Machine1Config_1(Machine1BaseConfig): def __init__(self, fueltype, speed) -> None: self.fueltype = fueltype self.speed = speed class Machine1Config_2(Machine1BaseConfig): def __init__(self, speed, weight) -> None: self.speed = speed self.weight = weight class Machine1FacadeConfig: @classmethod def get_version(cls, version: Version) -> Type[typing.Union[Machine1Config_1, Machine1Config_2]]: config_map = { Version.Version1: Machine1Config_1, Version.Version2: Machine1Config_2, Version.Version3: Machine1Config_2, } return config_map[version] class Machine2BaseConfig: @abc.abstractmethod def __init__(self, *args, **kwargs) -> None: pass class Machine2Config_1(Machine2BaseConfig): def __init__(self, gridsize) -> None: self.gridsize = gridsize class Machine2Config_2(Machine2BaseConfig): def __init__(self, loadtype, duration) -> None: self.loadtype = loadtype self.duration = duration class Machine2FacadeConfig: @classmethod def get_version(cls, version: Version) -> Type[typing.Union[Machine2Config_1, Machine2Config_2]]: config_map = { Version.Version1: Machine2Config_1, Version.Version2: Machine2Config_1, Version.Version3: Machine2Config_2, } return config_map[version] class Factory: def __init__(self, version: Version) -> None: self.version = version @property def Machine1Config(self): return Machine1FacadeConfig.get_version(self.version) @property def Machine2Config(self): return Machine2FacadeConfig.get_version(self.version) factory_instance = Factory(Version.Version1) machine1_config_instance = factory_instance.Machine1Config() machine2_config_instance = factory_instance.Machine2Config() In the provided Python code, the Factory class is used to instantiate configuration objects for two different types of machines (Machine1 and Machine2) based on a specified version. The problem is when using Pylance/Pyright with Visual Studio Code, I'm experiencing issues with autocomplete not correctly suggesting parameters for dynamically instantiated classes (Machine1Config and Machine2Config) in a factory design pattern. How can I improve my code to enable more accurate and helpful autocompletion suggestions by Pylance for these dynamically determined types? I have thought that this should somehow work with @overload decorater but I can't wrap my head around it how to quite implement it. Furthermore currently with the type hint Type[typing.Union[Machine1Config_1, Machine1Config_2]] Pylance suggests all key word arguments of Machine1Config_1 and Machine1Config_2, so fueltype, speed, weight. If I leave this type hint away there is no autocompletion at all.
Looking at the factory, there is no way to tell which of Type[typing.Union[Machine2Config_1, Machine2Config_2]] will be returned when calling Machine1FacadeConfig.get_version(self.version) in isolation. As the facade and the factory are extremely coupled anyways, I would suggest combining these into a single utility, where the types for version and configs can be more tightly coupled. You can declare a generic class for factory and provide a helper function which returns an instance of that factory where the version and config types have been bound together. The helper function would be overloaded for the different combinations. _Config1 = TypeVar("_Config1", Machine1Config_1, Machine1Config_2) _Config2 = TypeVar("_Config2", Machine2Config_1, Machine2Config_2) class _Factory(Generic[_Config1, _Config2]): def __init__(self, config1: Type[_Config1], config2: Type[_Config2]): self._config1 = config1 self._config2 = config2 @property def Machine1Config(self) -> Type[_Config1]: return self._config1 @property def Machine2Config(self) -> Type[_Config2]: return self._config2 @overload def Factory(version: Literal[Version.Version1]) -> _Factory[Machine1Config_1, Machine2Config_1]: ... @overload def Factory(version: Literal[Version.Version2]) -> _Factory[Machine1Config_1, Machine2Config_2]: ... @overload def Factory(version: Literal[Version.Version3]) -> _Factory[Machine1Config_2, Machine2Config_2]: ... def Factory(version: Version) -> _Factory: config_map1 = { Version.Version1: Machine1Config_1, Version.Version2: Machine1Config_1, Version.Version3: Machine1Config_2, } config_map2 = { Version.Version1: Machine2Config_1, Version.Version2: Machine2Config_2, Version.Version3: Machine2Config_2, } return _Factory(config_map1[version], config_map2[version]) factory_instance = Factory(Version.Version1) machine1_config_instance = factory_instance.Machine1Config machine2_config_instance = factory_instance.Machine2Config Example screenshot from vscode:
3
2
77,413,013
2023-11-2
https://stackoverflow.com/questions/77413013/how-to-staple-apple-notarization-tickets-manually-e-g-under-linux
Recently (as of 2023-11-01) Apple has changed their notarization process. I took the opportunity to drop Apple's own tools for this process (notarytool) and switch to a Python-based solution using their documented Web API for notarization. This works great and has the additional bonus that I can now notarize macOS apps from Linux (in the context of CI, I can provision Linux runners much faster than macOS runners). Hooray.
Since this went so smoothly, I thought about moving more parts of my codesigning process to Linux, and the obvious next step is to find a solution for stapling the notarization tickets into the application, replacing
xcrun stapler staple MyApp.app
With the help of -vv and some scraps of online documentation, it turns out that it is very simple to obtain the notarization ticket if you know the code directory hash (CDhash) of your application.
The following will return a JSON object containing (among other things) the base64-encoded notarization ticket, which just has to be decoded and copied into the .app bundle for stapling:
cdhash=8d817db79d5c07d0deb7daf4908405f6a37c34b4
curl -X POST -H "Content-Type: application/json" \
     --data "{ \"records\": { \"recordName\": \"2/2/${cdhash}\" }}" \
     https://api.apple-cloudkit.com/database/1/com.apple.gk.ticket-delivery/production/public/records/lookup \
  | jq -r ".records[0] | .fields | .signedTicket | .value"
So, the only thing that is still missing for my stapler replacement is a way to obtain the code directory hash for a given application. On macOS (with the Xcode tools installed), I can get this hash with codesign -d -vvv MyApp.app, but this obviously only works if I have the codesign binary at hand. I've found a couple of Python wrappers for stapling tickets, but all of them just call xcrun stapler staple under the hood. This is not what I want.
So my question is: How can I extract the code directory hash (CDhash) from a macOS application, without using macOS-specific tools? (That is: how are CDhashes generated? I haven't found any documentation on this.)
I would very much like to use Python for this task. Ideally, such a solution would be cross-platform (so I can use it on macOS and Linux, and probably others as well).
How can I extract the code directory hash (CDhash) from a macOS application, without using macOS specific tools? The CDhash of an app is the CDhash of the main executable in Contents/MacOS as identified in Contents/Info.plist Each hash is stored at the end of the binary segment for each architecture in an XML statement. It can be grepped out. The embedded cdhash is encoded in base64. The first one is for intel, the second for apple silicon: % grep -i -a -A3 'cdhashes' myApp.app/Contents/MacOS/mainexec | sed -n '4p;9p' | cut -f 3 HPhKLQv1j2SFYTmIgyUi/L6B9Yo= TVNDrCQEL9A/DMWVmphntZAq7kc= % printf "HPhKLQv1j2SFYTmIgyUi/L6B9Yo=" | base64 -d | hexdump -v -e '/1 "%02x" ' && echo "" 1cf84a2d0bf58f6485613988832522fcbe81f58a % printf "TVNDrCQEL9A/DMWVmphntZAq7kc=" | base64 -d | hexdump -v -e '/1 "%02x" ' && echo "" 4d5343ac24042fd03f0cc5959a9867b5902aee47 Compared with the cdhash as reported by codesign: % codesign -dvvv -a arm64 myApp.app Executable=myApp.app/Contents/MacOS/mainexec Identifier=com.mycompany.myApp Format=app bundle with Mach-O universal (x86_64 arm64) CodeDirectory v=20500 size=92199 flags=0x10000(runtime) hashes=2870+7 location=embedded Hash type=sha256 size=32 CandidateCDHash sha256=4d5343ac24042fd03f0cc5959a9867b5902aee47 CandidateCDHashFull sha256=4d5343ac24042fd03f0cc5959a9867b5902aee4725c5a75775cd711aae76b709 Hash choices=sha256 CMSDigest=4d5343ac24042fd03f0cc5959a9867b5902aee4725c5a75775cd711aae76b709 CMSDigestType=2 Launch Constraints: None CDHash=4d5343ac24042fd03f0cc5959a9867b5902aee47
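If you want to do the same lookup from Python (so it also runs on Linux), a minimal sketch of the grep approach above could look like this. It only pattern-matches the embedded XML plist rather than properly parsing the Mach-O code-signature superblob, so treat it as a starting point:
import base64
import re

def embedded_cdhashes(path):
    # mirrors the grep: find the plist fragment(s) with a 'cdhashes' key
    # and decode every base64 <data> entry (one per architecture slice)
    blob = open(path, "rb").read()
    hashes = []
    for match in re.finditer(rb"<key>cdhashes</key>\s*<array>(.*?)</array>", blob, re.S):
        for data in re.findall(rb"<data>\s*(.*?)\s*</data>", match.group(1), re.S):
            hashes.append(base64.b64decode(data).hex())
    return hashes

print(embedded_cdhashes("myApp.app/Contents/MacOS/mainexec"))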
4
1
77,438,553
2023-11-7
https://stackoverflow.com/questions/77438553/pydantic-validation-error-input-should-be-a-valid-dictionary-or-instance
I am trying to validate the latitude and longitude: from pydantic import BaseModel, Field from pydantic.dataclasses import dataclass @dataclass(frozen=True) class Location(BaseModel): longitude: float = Field(None, ge=-180, le=180) latitude: float = Field(None, ge=-90, le=90) Location(longitude=1.0, latitude=1.0) When I run this locally I get the following error: Input should be a valid dictionary or instance of Location [type=model_type, input_value=ArgsKwargs((), {'longitud...: 1.0, 'latitude': 1.0}), input_type=ArgsKwargs] For further information visit https://errors.pydantic.dev/2.4/v/model_type I have already tried to follow the Pydantic documentation which is given by the error but unfortunately is not clear and does not really give any information in regards of what to change and what to do differently. I also tried to create a Python dictionary to see if the error could be solved but it still gave the same error. Honestly, I have no idea of what to do next and how to solve this problem as I have tested it in several files (including in the same file to ensure it had nothing to do with any imports). Any ideas of why the error occurs? Update Without the (BaseModel) when running the an instance of the dataclass in the same file it works. Now, I am creating different unit tests to test if the validators work correctly. Please check the following code snippet: When creating a unit test and running a such test: def test_when_location_is_not_in_range_then_print_exception(self): invalid_location = Location(latitude=200.0, longitude=300.0) with self.assertRaises(ValueError) as context: location = Location(latitude=invalid_location.latitude, longitude=invalid_location.longitude) self.assertEqual( str(context.exception), "Location is out of range", "Expected exception message: 'Location is out of range'" ) Before running the test, a warning appears: Unexpected Argument After running the test, a new error appears: longitude Input should be less than or equal to 180 [type=less_than_equal, input_value=300.0, input_type=float] For further information visit https://errors.pydantic.dev/2.4/v/less_than_equal latitude Input should be less than or equal to 90 [type=less_than_equal, input_value=200.0, input_type=float] For further information visit https://errors.pydantic.dev/2.4/v/less_than_equal The error encountered before does not appear anymore.
I guess you're using dataclass from pydantic.dataclasses. In that case, don't inherit from BaseModel from pydantic import Field from pydantic.dataclasses import dataclass @dataclass(frozen=True) class Location: longitude: float = Field(None, ge=-180, le=180) latitude: float = Field(None, ge=-90, le=90) Location(longitude=1.0, latitude=1.0) Read more at https://docs.pydantic.dev/latest/concepts/dataclasses/ Regarding the update to your question - the validation errors come from invalid_location = Location(latitude=200.0, longitude=300.0) You've limited the valid range of values for the latitude and longitude attributes, hence the errors when values provided do not adhere to those requirements.
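On the updated test: the failing line is the very first one, because invalid_location = Location(latitude=200.0, longitude=300.0) already triggers validation outside of assertRaises. Pydantic raises pydantic.ValidationError (a ValueError subclass), so the invalid instance should be built inside the context manager. A minimal sketch, assuming the corrected Location from above:
import unittest
import pydantic

class LocationTests(unittest.TestCase):
    def test_out_of_range_location_raises(self):
        with self.assertRaises(pydantic.ValidationError):
            Location(latitude=200.0, longitude=300.0)  # out-of-range values violate the Field constraints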
3
4
77,433,205
2023-11-6
https://stackoverflow.com/questions/77433205/how-to-install-mysqlclient-in-a-python3-slim-docker-image-without-bloating-the
I'm using python:3-slim Docker image and want to use the mysqlclient package from Pypi but getting the following error from RUN pip install mysqlclient command: ... Collecting mysqlclient Downloading mysqlclient-2.2.0.tar.gz (89 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 89.5/89.5 kB 2.5 MB/s eta 0:00:00 Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'error' error: subprocess-exited-with-error Γ— Getting requirements to build wheel did not run successfully. β”‚ exit code: 1 ╰─> [27 lines of output] /bin/sh: 1: pkg-config: not found /bin/sh: 1: pkg-config: not found Trying pkg-config --exists mysqlclient Command 'pkg-config --exists mysqlclient' returned non-zero exit status 127. Trying pkg-config --exists mariadb Command 'pkg-config --exists mariadb' returned non-zero exit status 127. Traceback (most recent call last): File "/usr/local/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module> main() File "/usr/local/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel return hook(config_settings) ^^^^^^^^^^^^^^^^^^^^^ File "/tmp/pip-build-env-f_fea8lo/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 355, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=['wheel']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/tmp/pip-build-env-f_fea8lo/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 325, in _get_build_requires self.run_setup() File "/tmp/pip-build-env-f_fea8lo/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 341, in run_setup exec(code, locals()) File "<string>", line 154, in <module> File "<string>", line 48, in get_config_posix File "<string>", line 27, in find_package_name Exception: Can not find valid pkg-config name. Specify MYSQLCLIENT_CFLAGS and MYSQLCLIENT_LDFLAGS env vars manually [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error Γ— Getting requirements to build wheel did not run successfully. β”‚ exit code: 1 ╰─> See above for output. My Dockerfile looks like: FROM python:3.12-slim RUN pip install mysqlclient I tried installing the build dependencies by adding the following to the Dockerfile above RUN pip install mysqlclient: RUN apt-get install python3-dev default-libmysqlclient-dev build-essential but this took the Docker image to over 700MB. I also tried replacing the line above with the following to remove the build dependencies after install mysqlclient: RUN apt-get update && \ apt-get dist-upgrade && \ apt-get install -y pkg-config default-libmysqlclient-dev \ build-essential libmariadb-dev && \ pip install mysqlclient && \ apt-get purge -y pkg-config default-libmysqlclient-dev build-essential but the resulting image was still over 500MB in size and took a lot of precious CI/CD build time as it installs and then uninstalls the build tools. How do I install mysqlclient without bloating my docker image with build dependencies?
Use a Multi-Stage build Dockerfile: FROM python:3.12 AS python-build RUN pip install mysqlclient FROM python:3.12-slim COPY --from=python-build /usr/local/lib/python3.12/site-packages /usr/local/lib/python3.12/site-packages RUN apt-get update && apt-get install -y libmariadb3 This first 'stage' uses a full-fat python:3.12 image, which has the necessary build tools to install and compile mysqlclient. As is typical with Python, the mysqlclient package along with its dependencies are installed in /usr/local/lib/python*/site-packages. Then the build starts again, this time using python:3.12-slim image. The COPY references the first image and copies just the installed Python packages including mysqlclient. The final line installs libmariadb3, which is required by mysqlclient, otherwise you'll receive the following error: ImportError: libmariadb.so.3: cannot open shared object file: No such file or directory Your image size should now be a little over 190MB. You'll find it'll build much quicker too.
4
6
77,436,620
2023-11-7
https://stackoverflow.com/questions/77436620/different-behaviour-of-re-search-function-in-python
I have come across behaviour of the re.search function which made me think that there is an implicit \b anchor in the pattern. Is this the case?
import re

text = "bowl"
print(re.search(r"b|bowl", text))    # the first alternative in this pattern wins
print(re.search(r"o|bowl", text))    # but the first alternative doesn't win here
print(re.search(r"w|bowl", text))    # nor here
print(re.search(r"l|bowl", text))    # nor here
print(re.search(r"bo|bowl", text))   # the first alternative in this pattern wins
print(re.search(r"bow|bowl", text))  # the first alternative in this pattern wins
OUTPUT
<re.Match object; span=(0, 1), match='b'>
<re.Match object; span=(0, 4), match='bowl'>
<re.Match object; span=(0, 4), match='bowl'>
<re.Match object; span=(0, 4), match='bowl'>
<re.Match object; span=(0, 2), match='bo'>
<re.Match object; span=(0, 3), match='bow'>
I have researched whether this is the case but couldn't find any explanation.
I'm not a regex expert, so I'll use simple words to describe what happens internally. search works from left to right, and the | patterns too. Also search is different from match and moves forward to try to find the pattern across the string, not just at start. Take this: re.search(r"o|bowl", text) So if o pattern is tested against, since matcher is on b character of the input string, it doesn't match, and the code tries the second pattern. If it failed, it would skip to next character (since all match possibilities are exhausted) and would match o, but since it matches, it doesn't happen: bowl characters are consumed. If you try: re.search("o|bar", text) then o will be matched. Note that it's not specific to python. That's how a correct regex engine works. If you want the alternate behaviour you could write: re.search("o", text) or re.search("bar", text)
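A quick way to convince yourself that there is no implicit \b, only left-to-right alternation tried at each starting position:
import re

text = "bowl"
print(re.search(r"o|bowl", text))  # at index 0 'o' fails, 'bowl' succeeds -> span=(0, 4)
print(re.search(r"o|bar", text))   # both fail at index 0, the engine moves on, 'o' matches -> span=(1, 2)
print(re.search(r"bowl|o", text))  # order matters: 'bowl' is tried first and wins at index 0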
4
2
77,436,994
2023-11-7
https://stackoverflow.com/questions/77436994/what-is-the-effect-of-loc-in-a-dataframe
If I have this minimal reproducible example
import pandas as pd

df = pd.DataFrame({"A": [12, 4, 5, None, 1],
                   "B": [7, 2, 54, 3, None],
                   "C": [20, 16, 11, 3, 8],
                   "D": [14, 3, None, 2, 6]})
index_ = ['Row_1', 'Row_2', 'Row_3', 'Row_4', 'Row_5']
df.index = index_
print(df)

# Option 1
result = df[['A', 'D']]
print(result)

# Option 2
result = df.loc[:, ['A', 'D']]
print(result)
What is the effect of using loc or not? The results look quite similar. I ask this in preparation for a more complex question in which I have been instructed to use loc.
The difference is that df[['A', 'D']] is a weak reference to df (here on pandas 2.1.2). result1 = df[['A', 'D']] print(result1._is_copy) #<weakref at 0x7f34261b69d0; to 'DataFrame' at 0x7f34260e9590> result2 = df.loc[:, ['A', 'D']] print(result2._is_copy) # None In both cases, this is not a view: print(result1._is_view, result2._is_view) # False False This behavior has changed with the pandas versions. Is this important? It depends what you want to do. In most cases no. The first approach can however, in specific cases, trigger SettingWithCopyWarning: result1 = df[['A', 'D']] s1 = result1['A'] s1[:] = 1 # SettingWithCopyWarning: # A value is trying to be set on a copy of a slice from a DataFrame result2 = df.loc[:, ['A', 'D']] s2 = result2['A'] s2[:] = 1 # no Warning
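If you want the first form without the weak reference (and without the risk of the warning later), an explicit copy puts you in the same state as the .loc version — a small check, still on pandas 2.1.x:
result3 = df[['A', 'D']].copy()
print(result3._is_copy)  # None, same as df.loc[:, ['A', 'D']]

s3 = result3['A']
s3[:] = 1  # no SettingWithCopyWarning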
2
7
77,435,651
2023-11-7
https://stackoverflow.com/questions/77435651/linear-programming-optimization-using-linprog
I am trying to solve this problem using linprog from scipy.optimize.
A salad is any combination of the following ingredients: (1) tomato, (2) lettuce, (3) spinach, (4) carrot, and (5) oil. Each salad must contain: (A) at least 15 grams of protein, (B) at least 2 and at most 6 grams of fat, (C) at least 4 grams of carbohydrates, (D) at most 100 milligrams of sodium. Furthermore, (E) you do not want your salad to be more than 50% greens by mass. The nutritional contents of these ingredients (per 100 grams) are given in a table (posted as an image in the original question). Solve a linear program which makes the salad with the fewest calories under the nutritional constraints.
I think I might be doing something wrong with my constraints; any advice is welcome.
from scipy.optimize import linprog

linprog(c=[21, 16, 371, 346, 884],
        A_ub=[[0.85, 0.162, 12.78, 8.39, 0],
              [0.33, 0.02, 1.58, 1.39, 0],
              [4.64, 2.37, 74.69, 80.70, 0],
              [9, 8, 7, 508, 0],
              [0, 0, 0, 0, 0]],
        b_ub=[15, 2, 4, 100, 50],
        bounds=[(0, None), (0, None), (0, None), (0, None), (0, None)])
As you have few variables and constraints, we can write your constraints like this: # (A) protein, at least (*) 15 <= 0.85*x[tomato] + 1.62*x[lettuce] + 12.78*x[spinach] + 8.39*x[carrot] + 0.0*x[oil] # (B1) fat, at least (*) 2 <= 0.33*x[tomato] + 0.2*x[lettuce] + 1.58*x[spinach] + 1.39*x[carrot] + 100.0*x[oil] # (B2) fat, at most 0.33*x[tomato] + 0.2*x[lettuce] + 1.58*x[spinach] + 1.39*x[carrot] + 100.0*x[oil] <= 6 # (C) carbo, at least (*) 4 <= 4.64*x[tomato] + 2.37*x[lettuce] + 74.69*x[spinach] + 80.7*x[carrot] + 0.0*x[oil] # (D) sodium, at most 9.0*x[tomato] + 8.0*x[lettuce] + 7.0*x[spinach] + 508.2*x[carrot] + 0.0*x[oil] <= 100 (*) However, as @AirSquid wrote, you can't pass lower bounds for constraints with linprog, you have to change the sense of the constraints to set upper bounds. # (A) protein, at least -> change inequation sens -0.85*x[tomato] + -1.62*x[lettuce] + -12.78*x[spinach] + -8.39*x[carrot] + -0.0*x[oil] <= -15 # (B1) fat, at least -> change inequation sense -0.33*x[tomato] + -0.2*x[lettuce] + -1.58*x[spinach] + -1.39*x[carrot] + -100.0*x[oil] <= -2 # (B2) fat, at most 0.33*x[tomato] + 0.2*x[lettuce] + 1.58*x[spinach] + 1.39*x[carrot] + 100.0*x[oil] <= 6 # (C) carbo, at least -> change inequation sense -4.64*x[tomato] + -2.37*x[lettuce] + -74.69*x[spinach] + -80.7*x[carrot] + -0.0*x[oil] <= -4 # (D) sodium, at most 9.0*x[tomato] + 8.0*x[lettuce] + 7.0*x[spinach] + 508.2*x[carrot] + 0.0*x[oil] <= 100 (E) you do not want your salad to be more than 50% greens by mass. It means: # (E) green mass x[lettuce] + x[spinach] <= 0.5*(x[tomato] + x[lettuce] + x[spinach] + x[carrot] + x[oil]) You have to simplify this expression to extract coefficients: # (E) green mass -0.5*x[tomato] + 0.5*x[lettuce] + 0.5*x[spinach] + -0.5*x[carrot] + -0.5*x[oil] <= 0 Now you are ready to create c, A_ub and b_ub parameters: c = [21, 16, 371, 346, 884] A_ub = [[-0.85, -1.62, -12.78, -8.39, -0.0], [-0.33, -0.2, -1.58, -1.39, -100.0], [0.33, 0.2, 1.58, 1.39, 100.0], [-4.64, -2.37, -74.69, -80.7, -0.0], [9.0, 8.0, 7.0, 508.2, 0.0], [-0.5, 0.5, 0.5, -0.5, -0.5]] b_ub = [-15, -2, 6, -4, 100, 0] bounds = [(0, None), (0, None), (0, None), (0, None), (0, None)]) linprog(c=c, A_ub=A_ub, b_ub=b_ub, bounds=bounds) Output: message: Optimization terminated successfully. (HiGHS Status 7: Optimal) success: True status: 0 fun: 232.5146989957854 x: [ 5.885e+00 5.843e+00 4.163e-02 0.000e+00 0.000e+00] nit: 3 lower: residual: [ 5.885e+00 5.843e+00 4.163e-02 0.000e+00 0.000e+00] marginals: [ 0.000e+00 0.000e+00 0.000e+00 1.292e+03 8.681e+02] upper: residual: [ inf inf inf inf inf] marginals: [ 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00] eqlin: residual: [] marginals: [] ineqlin: residual: [ 0.000e+00 1.176e+00 2.824e+00 4.026e+01 0.000e+00 0.000e+00] marginals: [-3.159e+01 -0.000e+00 -0.000e+00 -0.000e+00 -2.414e+00 -3.174e+01] mip_node_count: 0 mip_dual_bound: 0.0 mip_gap: 0.0
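If you prefer not to hand-write the sign flips, you can also build A_ub and b_ub programmatically from the per-100 g rows (values copied from the constraints above); this is just a reformulation of the same model:
import numpy as np
from scipy.optimize import linprog

protein  = np.array([0.85, 1.62, 12.78, 8.39, 0.0])
fat      = np.array([0.33, 0.20, 1.58, 1.39, 100.0])
carbs    = np.array([4.64, 2.37, 74.69, 80.70, 0.0])
sodium   = np.array([9.0, 8.0, 7.0, 508.2, 0.0])
greens   = np.array([0.0, 1.0, 1.0, 0.0, 0.0])   # lettuce and spinach
calories = np.array([21, 16, 371, 346, 884])

A_ub = np.vstack([
    -protein,        # protein >= 15  ->  -protein @ x <= -15
    -fat,            # fat >= 2
     fat,            # fat <= 6
    -carbs,          # carbs >= 4
     sodium,         # sodium <= 100
     greens - 0.5,   # greens <= 50% of total mass
])
b_ub = np.array([-15, -2, 6, -4, 100, 0])

res = linprog(c=calories, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 5)
print(res.fun, res.x)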
2
4
77,428,847
2023-11-6
https://stackoverflow.com/questions/77428847/is-there-a-way-to-use-a-conditional-kernel-in-opencv-that-only-changes-pixels-on
I want to use a kernel that performs a pixel operation based on a conditional expression. Let's say I have a grayscale image (6x6 resolution, shown as an image in the post) and I use a 3x3 pixel kernel: how would I change the value of the centre kernel pixel IF AND ONLY IF the centre pixel is the local minimum or maximum within the 3x3 kernel? For example, say I wanted to set the centre kernel pixel to the average value of the surrounding 8 pixels. Is there a way to do this with OpenCV?
EDIT: another more detailed example GIF - 9 passes implementing my example. It was produced in Excel using the following formula (note the relative cell references - they show the 3x3 kernel shape around the focus pixel):
=IF(OR(C55=MIN(B54:D56),C55=MAX(B54:D56)),(SUM(B54:D56)-C55)/8,C55)
Here is the top left corner of the table with the source values for the first pass (these values control the cell colour; the table was posted as an image). This table refers to another source table. Each frame in the GIF is the next calculated colour table. There are 3 tables of formulae in between each image frame.
You asked: [...] how would I change the value of the centre kernel pixel (centre) IF AND ONLY IF the centre pixel is the local minimum or maximum within the 3x3 kernel? For example, say I wanted to set the centre kernel pixel to the average value of the surrounding 8 pixels [...] I'll demonstrate a few things first. I'll work with small arrays, 16 by 16. I'll show them enlarged so you can stare at the pixels conveniently. When I talk about these images, I mean the 16x16 data, not the visualizations in this post. Let's start with random noise because that is what you first presented. noise = np.random.randint(0, 256, size=(16, 16), dtype=np.uint8) Now you need to know about morphology operations. Erosion and dilation are calculating the local minimum/maximum value. local_max_values = cv.dilate(noise_img, None, iterations=1) local_min_values = cv.erode(noise_img, None, iterations=1) What's that good for? You can compare pixel values. If a pixel is equal to the local extremum, it must be a local extremum. It's not unique because let's say two adjacent pixels have the same low/high value. They're both extrema. Let's compare: is_min = (noise_img == local_min_values) is_max = (noise_img == local_max_values) is_extremum = is_min | is_max Those are masks. They're binary, boolean. You can use them either for indexing, or for multiplication. You can imagine what happens when you multiply elementwise by 0 or 1, or do that with an inverted mask. I'll demonstrate indexing but first I'll need the local averages. averaged = cv.blur(noise_img, (3, 3)) Now I can make a copy of the input (or I could work on it directly) and then overwrite all the extremal pixels with the average values at those positions. denoised = noise_img.copy() denoised[is_extremum] = averaged[is_extremum] Yes, this calculates the average for all pixels, even if you don't need it. Practically, you wouldn't save any time by calculating only some of the averages. If you switch back and forth between this and the source image, you'll see local extrema being erased. Other pixels that used to be "second place" have now become extrema. Another round of this will progressively smooth the entire picture until everything is quite flat.
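Putting the pieces together into one repeatable pass (note one small difference from the Excel rule in the question: cv.blur includes the centre pixel in the 3x3 average, whereas the spreadsheet formula averages only the 8 neighbours):
import cv2 as cv
import numpy as np

def smooth_local_extrema(img, passes=1):
    out = img.copy()
    for _ in range(passes):
        local_max = cv.dilate(out, None)   # 3x3 local maxima
        local_min = cv.erode(out, None)    # 3x3 local minima
        is_extremum = (out == local_max) | (out == local_min)
        averaged = cv.blur(out, (3, 3))    # 3x3 mean, centre included
        out[is_extremum] = averaged[is_extremum]
    return out

noise = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
result = smooth_local_extrema(noise, passes=9)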
2
3
77,434,196
2023-11-6
https://stackoverflow.com/questions/77434196/is-there-a-way-to-return-all-rows-where-only-one-column-is-not-null
I have a dataframe that I'd like to break up into logical sub-dataframes. The most logical way to do this, given how the data is, is to select rows from the original dataframe where only one of the columns is not null (i.e. df.column.notnull() is True). Is there a shorthand for this or do I need to check each other column for every column that is not null?
Create a mask using .sum(axis=1) on the result of df.notnull() and checking if equal to 1: df[df.notnull().sum(axis=1).eq(1)] With some sample data: import numpy as np import pandas as pd df = pd.DataFrame( {'A': [1, np.nan, np.nan, np.nan], 'B': [np.nan, 2, np.nan, np.nan], 'C': [np.nan, np.nan, 3, np.nan], 'D': [np.nan, np.nan, 4, np.nan]} ) df[df.notnull().sum(axis=1).eq(1)] returns A B C D 0 1.0 NaN NaN NaN 1 NaN 2.0 NaN NaN from the original DataFrame A B C D 0 1.0 NaN NaN NaN 1 NaN 2.0 NaN NaN 2 NaN NaN 3.0 4.0 3 NaN NaN NaN NaN
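An equivalent shorthand is DataFrame.count, which counts non-NA cells per row:
df[df.count(axis=1) == 1]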
2
4
77,412,601
2023-11-2
https://stackoverflow.com/questions/77412601/how-to-configure-qdrant-data-persistence-and-reload
I'm trying to build an app with Streamlit that uses the Qdrant Python client. To run Qdrant, I'm just using:
docker run -p 6333:6333 qdrant/qdrant
I have wrapped the client in something like this:
class Vector_DB:
    def __init__(self) -> None:
        self.collection_name = "__TEST__"
        self.client = QdrantClient("localhost", port=6333, path = "/home/Desktop/qdrant/qdrant.db")
but I'm getting this error:
Storage folder /home/Desktop/qdrant/qdrant.db is already accessed by another instance of Qdrant client. If you require concurrent access, use Qdrant server instead.
I suspect that Streamlit is creating multiple instances of this class, but if I try to load the db from one snapshot, like:
class Vector_DB:
    def __init__(self) -> None:
        self.client = QdrantClient("localhost", port=6333)
        self.client.recover_snapshot(collection_name = "__TEST__", location = "http://localhost:6333/collections/__TEST__/snapshots/__TEST__-8742423504815750-2023-10-30-12-04-14.snapshot")
it works. It seems like I'm missing something important about how to configure it.
What is the proper way of setting up Qdrant so that it stores some embeddings, survives turning off the machine, and can be reloaded?
You mention using the Qdrant server, to which you'd like to connect with the Python client. There are two problems in your above question, let me go over both of them:
1. Persist data in Qdrant server:
A Qdrant server stores its data inside the Docker container. Docker containers are ephemeral, however, so anything written inside the container is lost once the container is removed or recreated. To persist data you must specify a mount. Qdrant will then persist data on the mount instead of inside the container. You could configure a mount using the -v flag like this:
docker run -p 6333:6333 \
    -v $(pwd)/qdrant_storage:/qdrant/storage:z \
    qdrant/qdrant
Data is automatically persisted and reloaded when you stop or restart the Qdrant container. You don't have to take extra measures for this.
2. Qdrant server versus local mode:
Qdrant supports two operating modes: the Qdrant server and local mode. You're using the Qdrant server through Docker. The Python client also supports local mode, which is an in-memory implementation intended for testing. To use a Qdrant server you must specify its location (URL). You've already specified "localhost", perfect if hosting the Qdrant server on your local machine. To use local mode you can either specify ":memory:" or provide a path to persist data.
Right now you've specified parameters for both. Instead you must stick with one. You can update your client initialization to this:
class Vector_DB:
    def __init__(self) -> None:
        self.collection_name = "__TEST__"
        self.client = QdrantClient("localhost", port=6333)
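For completeness, a minimal end-to-end sketch against the server (the vector size and payload here are made up for illustration); combined with the -v mount above, the collection and its points survive stopping and restarting the container:
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient("localhost", port=6333)

name = "__TEST__"
existing = [c.name for c in client.get_collections().collections]
if name not in existing:
    client.create_collection(
        collection_name=name,
        vectors_config=VectorParams(size=4, distance=Distance.COSINE),
    )

client.upsert(
    collection_name=name,
    points=[PointStruct(id=1, vector=[0.1, 0.2, 0.3, 0.4], payload={"doc": "hello"})],
)
print(client.count(collection_name=name))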
2
4
77,432,905
2023-11-6
https://stackoverflow.com/questions/77432905/why-does-dataclass-favour-repr-over-str
Given the following code, I think output (A) makes sense since __str__ takes precedence over __repr__ however I am a bit confused about output (B) why does it favour __repr__ over __str__ and is there a way to make the class use the __str__ rather than the __repr__ of Foo without defining __str__ for Bar? @dataclass() class Foo(): def __repr__(self) -> str: return "Foo::__repr__" def __str__(self) -> str: return "Foo::__str__" @dataclass() class Bar(): foo: Foo f = Foo() b = Bar(f) print(f) # (A) Outputs: Foo::__str__ print(b) # (B) Outputs: Bar(foo=Foo::__repr__) I stumbled upon this because I saw that my custom __str__ of Foo was not being used by Bar in fact even if I remove the __repr__ output (B) will still be Bar(foo=Foo()) instead of Bar(foo=Foo::__str__).
It is normal for the repr to be preferred to str when rendering a representation within a container of some sort, e.g.: >>> print(f) Foo::__str__ >>> print([f]) [Foo::__repr__] It is necessary for repr to be unambiguous, for example to see the difference between numbers and strings here: >>> from dataclasses import dataclass >>> @dataclass ... class D: ... s: str ... n: int ... >>> D('1', 1) D(s='1', n=1) I think you should not override Foo.__repr__ in that way, because you're straying from the "unwritten rule" that eval(repr(obj)) == obj.
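If you do want the friendly rendering in this particular spot, the least surprising place for it is a __str__ on Bar itself (a sketch; the generated __repr__ stays unambiguous):
from dataclasses import dataclass

@dataclass
class Bar:
    foo: Foo

    def __str__(self) -> str:
        return f"Bar(foo={self.foo})"   # the f-string uses Foo.__str__

print(Bar(Foo()))     # Bar(foo=Foo::__str__)
print([Bar(Foo())])   # containers still use the generated __repr__: [Bar(foo=Foo::__repr__)]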
2
4
77,432,108
2023-11-6
https://stackoverflow.com/questions/77432108/how-does-memory-management-in-python-work-for-integers
The results of the following two examples are different:
EXAMPLE 1
a = 845
b = int("8"+"4"+"5")
print(a == b) # True
print(a is b) # False
EXAMPLE 2
a = 845
b = 840+5
print(a == b) # True
print(a is b) # True
How can this be explained? Why, in the first case, is the same integer value kept in two different objects on the heap? I expected the result to be the same (True) in both cases.
Another question provides good context on this issue, but this is a bit different to the cases mentioned there (but it comes down to the same thing in the end). Even though only integers between -5 and 256 are cached and return True for is checks, the same happens for hardcoded integers, since they are compiled as constants and are not allocated. Using compiler explorer makes this a bit easier to understand. First example can also be written as a = 845 b = int("845") print(a == b) # True print(a is b) # False We can see in https://godbolt.org/z/zM3haedb9 that string concatenation is done in compile-time, but int is called in the end again, which creates a new integer and since it is not in the cached range, a new object is created, making is return False. For second example, addition is done in compile team, making it equivalent to a = 845 b = 845 print(a == b) # True print(a is b) # True Which with the help of the other question makes complete sense. You can check out https://godbolt.org/z/xKacd5oh4 to see that after compilation, both a and b are created the same way.
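You can see the constant folding directly with the dis module; both names end up loading the same folded constant, which is why is returns True in the second example:
import dis

# the disassembly shows LOAD_CONST 845 for both a and b: 840 + 5 was folded at compile time
dis.dis(compile("a = 845\nb = 840 + 5\nprint(a is b)", "<demo>", "exec"))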
3
3
77,431,520
2023-11-6
https://stackoverflow.com/questions/77431520/how-do-i-read-the-next-line-after-finding-a-variable-in-a-text-file-using-python
I am trying to make an app for electric vehicle drivers, and I'm using a text file to store the data. The way it works is that I have the name of the electric vehicle, and the line under the name contains the miles it can get per 1% of charge. I've got it so it can find the specific car, but I can't read the range of the vehicle from the line that follows the name.
cars.txt
MG MG4 EV Long Range
2.25
BMW iX1 xDrive30
2.3
Kia Niro EV
2.4
Tesla Model Y Long Range Dual Motor
2.7
BMW i4 eDrive40
3.2
code
with open('cars.txt', 'r')as cars:
    check = input("Enter full name of car: ")
    car = cars.read()
    percentage = cars.readline()
    if check in car:
        print("Found")
        total = range
        print(percentage)
This is what I have, but every time it finds the car it won't find the range after it.
You can do the following (reading from the cars.txt file shown in the question):
target_car = "Kia Niro EV"

with open("cars.txt") as f:
    for line in f:
        if line.rstrip() == target_car:
            range_ = float(next(f))
            break
    else:
        range_ = "Not Found"

print(f"range is: {range_}")
f is a consumable iterator. You iterate over it until you find your car, then the next item in that iterator is what you're looking for. Also note that you don't store the whole file in memory, in case you're dealing with a huge file. (In that case, why wouldn't you use a proper database?)
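If you expect repeated lookups, it may be simpler to parse the whole file once into a dict (this assumes the alternating name/value layout shown in the question):
def load_ranges(path="cars.txt"):
    with open(path) as f:
        lines = [line.strip() for line in f if line.strip()]
    return dict(zip(lines[::2], map(float, lines[1::2])))

ranges = load_ranges()
target_car = input("Enter full name of car: ")
print(ranges.get(target_car, "Not Found"))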
3
1
77,429,177
2023-11-6
https://stackoverflow.com/questions/77429177/pandas-group-or-pivot-table-by-a-column-horizonally-rather-than-vertically
I have got data that looks like this data = [['01/01/2000', 'aaa', 101, 102], ['01/02/2000', 'aaa', 201, 202], ['01/01/2000', 'bbb', 301, 302], ['01/02/2000', 'bbb', 401, 402],] df = pd.DataFrame(data, columns=['date', 'id', 'val1', 'val2']) df date id val1 val2 01/01/2000 aaa 101 102 01/02/2000 aaa 201 202 01/01/2000 bbb 301 302 01/02/2000 bbb 401 402 I would like this data to be transformed to look like this - where it's grouped horizonally by the id column aaa bbb date val1 val2 val1 val2 01/01/2000 101 102 301 302 01/02/2000 201 202 401 402 Closest i have gotten so far is: df.set_index(['date', 'id']).unstack(level=1), but this does not quite do it: val1 val2 id aaa bbb aaa bbb date 01/01/2000 101 301 102 302 01/02/2000 201 401 202 402
Add DataFrame.swaplevel with DataFrame.sort_index: out = (df.set_index(['date', 'id']) .unstack(level=1) .swaplevel(0,1, axis=1) .sort_index(axis=1)) print (out ) id aaa bbb val1 val2 val1 val2 date 01/01/2000 101 102 301 302 01/02/2000 201 202 401 402 Or use DataFrame.melt with DataFrame.pivot and DataFrame.sort_index: out = (df.melt(['date', 'id']) .pivot(index='date', columns=['id', 'variable'], values='value') .sort_index(axis=1))
2
2
77,428,302
2023-11-6
https://stackoverflow.com/questions/77428302/is-there-any-cool-way-to-express-if-x-is-none-x-self-x-in-python-class
I'm just studying Python OOP and am genuinely confused about when to use self and when not to. In particular, I want to write a method that by default works on the instance's own attributes, but can also be called like a normal method with custom arguments. It gets bothersome to type if x is None: x = self.x for every parameter of the method. An example is as follows.
from dataclasses import dataclass

@dataclass
class PlusCalculator():
    x: int = 1
    y: int = 2
    result: int = None

    def plus(self, x=None, y=None):
        if x is None:
            x = self.x  ## the bothersome part
        if y is None:
            y = self.y
        result = x + y
        self.result = result  ## also confusing...is it a good way?
        return result

pc = PlusCalculator()
print(pc.plus())
print(pc.plus(4,5))
Is there any good way to use instance variables as the default values of function parameters?
A conditional expression is readable, fast, and intuitive- x = self.x if x is None else x Re: Is there any good way to use instance variables as default value of function parameter?? Regarding the setting of self.result- This should be avoided unless you need to access it as an instance variable later. As such you can simplify this to: @dataclass class PlusCalculator(): ... def plus(self, x=None, y=None): x = self.x if x is None else x y = self.y if y is None else y return x + y
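If None could ever be a meaningful argument, a private sentinel object avoids the ambiguity — a small variation on the same pattern:
from dataclasses import dataclass

_UNSET = object()  # module-level sentinel

@dataclass
class PlusCalculator:
    x: int = 1
    y: int = 2

    def plus(self, x=_UNSET, y=_UNSET):
        x = self.x if x is _UNSET else x
        y = self.y if y is _UNSET else y
        return x + y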
2
1
77,427,048
2023-11-5
https://stackoverflow.com/questions/77427048/lazily-load-files-at-random-from-large-directory
I have about a million files in my directory, and their number is likely to grow. For machine learning, I would like to randomly sample from those files without replacement. How can I do this very quickly? os.listdir(path) is too slow for me.
I have about a million files in my directory ... os.listdir(path) is too slow for me. This is the core of your problem, and it's solved by a technique I've generally heard referred to as bucketing your files, though a web search for this doesn't seem particularly helpful. Bucketing is generally used by programs that need to store a large number of files that don't have any particular structure - for example, all the media files (such as images) in a MediaWiki instance (the software that runs Wikipedia). Here's the Stack Overflow logo on Wikipedia: https://upload.wikimedia.org/wikipedia/commons/0/02/Stack_Overflow_logo.svg See that 0/02 in the URL? That's the bucket. All files in Wikipedia will be hashed by some algorithm - for example sha256, though it won't necessarily be this - and 02 will be the first two hex digits of that hash. (The 0 before the slash is just the first digit of 02; in this case it's used as a second level of bucketing.) If MediaWiki just stored every single file in one massive directory, it'd be very slow to access the files in that directory, because although OS folders can hold arbitrarily many files, they just aren't designed to hold more than a few thousand or so. By hashing the contents of the file, you get what looks like a random string of hex digits unique to that file, and if you then put all the files that start with the same first two hex digits (like 02 in a folder called 02, you get 256 folders (one for each possible value of the first two hex digits), and critically, each of those 256 folders contains a roughly equal number of files. When you're trying to look up particular files, like MediaWiki is, you obviously need to know the hash to get to the file, if you store it in this way. But in your case, you just want to load random files. So this will work just as well: Hash all your files and bucket them (possibly with additional levels, e.g. you might want files like 12/34/filename.ext, so that you have 65,536 buckets). You can use things like hashlib or command-line tools like sha256sum to obtain file hashes. You don't need to rename the files, as long as you group them into directories based on the first few hex digits of their hashes. Now, each time you want a random file, choose a random bucket (and possibly random sub-buckets, if you're using additional levels), then choose a random file within that bucket. Doing that will be a lot faster than using listdir on a directory with a million files and then choosing randomly among those. Note: I'm just using MediaWiki as an example here because I'm familiar with a few of its internals; lots of software products do similar things.
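A rough sketch of the one-off bucketing and the random pick (here the bucket comes from hashing the file name, which is cheaper than hashing contents and still spreads files evenly; sampling strictly without replacement additionally needs you to track what you've already drawn):
import hashlib
import os
import random
import shutil

def bucket_name(filename):
    return hashlib.sha256(filename.encode()).hexdigest()[:2]  # 256 buckets

def bucketize(src_dir, dst_dir):
    # one-off migration: still slow for a million files, but only done once
    for name in os.listdir(src_dir):
        target = os.path.join(dst_dir, bucket_name(name))
        os.makedirs(target, exist_ok=True)
        shutil.move(os.path.join(src_dir, name), os.path.join(target, name))

def random_file(dst_dir):
    bucket = random.choice(os.listdir(dst_dir))
    name = random.choice(os.listdir(os.path.join(dst_dir, bucket)))
    return os.path.join(dst_dir, bucket, name)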
2
2
77,422,087
2023-11-4
https://stackoverflow.com/questions/77422087/error-when-trying-to-find-2nd-maximum-value-in-a-list
I am trying to write code for finding the second maximum value of a list. I tried it like this: arr = map(int, input().split()) lista = list(arr) max_value = lista[0] run = lista[0] for i in lista: if max_value < i: max_value = i for j in lista: if run < j and run < max_value: run = j print(run) And the second maximum value and the maximum value comes out to be the same. What is the mistake in my program?
The issue TLDR run = lista[0] ... if run < j and run < max_value: Should be v run = min(lista) ... v if run < j and j < max_value: How to find the Issue 1. Make the code simpler Comprehension instead of map - list constructor succession We can reduce the boilerplate of firsts lines by using a comprehension instead of a succession of map and list constructors: arr = map(int, input().split()) lista = list(arr) become: lista = [int(n) for n in input().split()] Ask the max to Python, don't do it yourself The next block of your logic is here to find the max of the list. You don't need to write this code yourself, you can ask Python. This will reduce the complexity of your code, making the issue/bug easier to spot. max_value = lista[0] for i in lista: if max_value < i: max_value = i become: max_value = max(lista) Rename variables To help our brain understand what happens, let's clarify variables names: lista = [int(n) for n in input().split()] max_value = max(lista) run = lista[0] for j in lista: if run < j and run < max_value: run = j print(run) become: user_input = [int(n) for n in input().split()] max_value = max(user_input) second_max = user_input[0] for current_value in user_input: if second_max < current_value and second_max < max_value: second_max = current_value print(second_max) 2. Find the issue Now, the code is smaller, easier to understand, with less place for error. If there is an error, we don't have to look at many places. What's wrong? second_max should be the 2nd maximum value, but it's not. What could cause that? An update of previous_value that shouldn't happen. So this is where the issue might happen. if second_max < current_value and second_max < max_value: second_max = current_value The attribution is correct, the condition should be wrong. This seems correct second_max < current_value since we want to update only if second_max is lower than current_value (meaning that current might be the true second_max value OR equal to max_value. So we need another condition: current_value shouldn't be the max_value, otherwise, second_max might be set to the max_value. And, we look at the second condition: second_max < max_value. Here is our mistake. Let's fix the condition, since it's current_value that should be lower than max_value. Also, the initial value of second_max need to be set at the minimum value in case the first value is the max. user_input = [int(n) for n in input().split()] max_value = max(user_input) second_max = min(user_input) for current_value in user_input: if second_max < current_value and current_value < max_value: second_max = current_value print(second_max) # 55 Done. Alternative: Use set, sorted and index If you want the second maximum value in a list, it is easier to sort a de-duplicated list and print the element at index 1 of the list e.g. the second element. Step by Step Example without_duplicates = {int(n) for n in input().split()} ordered_without_duplicates = sorted(without_duplicates, reverse=True) print(ordered_without_duplicates[1]) Oneliner Example print(sorted({int(n) for n in input().split()}, reverse=True)[1]) With heapq print(heapq.nlargest(2, {int(n) for n in input().split()})[1])
4
7
77,425,573
2023-11-5
https://stackoverflow.com/questions/77425573/how-can-i-randomize-existing-byte-array
I create an array of bytes. array = bytearray(random.randint(1, 8192)) # Now, how can I randomize array's elements? Now how can I randomize each elements of the array? Just like, // with Java var array = new byte[ThreadLocalRandom.current().nextInt(1, 8192)]; ThreadLocalRandom.current().nextBytes(array);
def randomize(array): array[:] = random.randbytes(len(array)) Behaves like your answer's but is ~100 times faster. Time with an average-length array (4096 bytes): 23.0 Β± 0.3 ΞΌs randomize_Kelly 2191.5 Β± 11.6 ΞΌs randomize_Jin Python: 3.11.4 (main, Sep 9 2023, 15:09:21) [GCC 13.2.1 20230801] Attempt This Online!
2
2
77,418,891
2023-11-3
https://stackoverflow.com/questions/77418891/is-there-a-way-to-disable-a-nested-context-manager
I have a question on how to disable a nested context manager. I have an inside context manager: class cast: def __init__(self, enabled: bool = True, dtype) -> None: self.prev = False self.enabled = enabled self.dtype = dtype def __enter__(self) -> None: self.prev = is_cast_enabled() set_cast_enabled(self.enabled) set_cast_dtype(self.dtype) def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None: set_cast_enabled(self.prev) User can call the APIs to cast a region of code to a specific data type, with cast(dtype=numpy.float16): c = a + b with this context manager, self.enabled=True and the cast is done. For example, here if both a and b are in float32, the output will be float16 since the cast is done. However, I wanted to a make another context manager, with no_cast, So with no_cast: with cast(dtype=numpy.float16): c = a + b So that no matter user set the enabled to either True or False, the nested context manager won't work. Is that doable? I wanted to know how to disable a nested context manager.
The no_cast context manager can rebind the global name cast to a stand-in that does nothing while inside the context. The stand-in has to return something that still works in a with statement (here contextlib.nullcontext()), and __exit__ should not return True, or exceptions raised inside the block would be silently swallowed. Use it as with no_cast(): (note the parentheses).
import contextlib

class no_cast:
    def __enter__(self):
        global cast
        self.cast = cast
        cast = no_cast.do_nothing
        return self

    def __exit__(self, *args):
        global cast
        cast = self.cast
        return False

    @staticmethod
    def do_nothing(*args, **kwargs):
        # accepts the same arguments as cast and returns a no-op context manager
        return contextlib.nullcontext()
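An alternative that avoids rebinding the global name (which would not affect code that did from yourmodule import cast) is a module-level flag that cast itself consults — a sketch:
_cast_allowed = True  # module-level switch

class no_cast:
    def __enter__(self):
        global _cast_allowed
        self._prev, _cast_allowed = _cast_allowed, False

    def __exit__(self, *exc):
        global _cast_allowed
        _cast_allowed = self._prev

# and inside the existing cast context manager:
#     def __enter__(self):
#         self._skip = not _cast_allowed   # honour an enclosing no_cast()
#         if self._skip:
#             return
#         self.prev = is_cast_enabled()
#         set_cast_enabled(self.enabled)
#         set_cast_dtype(self.dtype)
#
#     def __exit__(self, exc_type, exc_value, traceback):
#         if self._skip:
#             return
#         set_cast_enabled(self.prev)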
2
3
77,424,888
2023-11-5
https://stackoverflow.com/questions/77424888/matplotlib-edgecolors-coloring-0-0-valued-data-points
I'm plotting a 3d bar plot for an array using matplotlib. I need to add an edgecolor to the bars. However, the edgecolor is coloring the 0.0 values data points in black. Is there a way to not color these data points? I'm trying to do edgecolors=none through a loop when values are 0.0. However this doesn't seem to help. Any help appreciated. The script I use to plot, from mpl_toolkits.mplot3d import Axes3D fig = plt.figure() ax = Axes3D(fig) lx= len(r[0]) ly= len(r[:,0]) xpos = np.arange(0,lx,1) ypos = np.arange(0,ly,1) xpos, ypos = np.meshgrid(xpos+0.25, ypos+0.25) xpos = xpos.flatten() ypos = ypos.flatten() zpos = np.zeros(lx*ly) cs = ['r', 'g', 'b', 'y', 'c'] * ly edgecolor_store = [] dx = 0.5 * np.ones_like(zpos) dy = 0 * np.ones_like(zpos) #dz = zpos dz = r.flatten() for value in dz: if value == 0.0: edgecolor_store.append('none') if value > 0.0: edgecolor_store.append('black') mask_dz = dz == 0 # SO:60111736, 3d case #print(dz) ax.bar3d(xpos,ypos,zpos, dx, dy,dz,color=cs,zsort='average',alpha=0.5,edgecolors=edgecolor_store) Sample data I use, [ 0. , 0. , 0. , 0. , 0. ],[ 0. , 0. , 0. , 0. , 0. ],[ 0. , 0. , 0. , 0. , 0. ],[ 0. , 0. , 0. , 0. , 0. ],[ 0. , 0. , 0. , 0. , 0. ],[ 0. , 0. , 0. , 0. , 0. ],[ 0. , 0. , 20.7, 0. , 0. ],[ 0.6, 0. , 0.1, 0. , 0.2],[ 0. , 24.8, 0. , 46.7, 0. ],[ 0. , 0. , 99.7, 17.1, 99.3],[ 0. , 12.8, 98.6, 0. , 6.7],[ 0.2, 0. , 0. , 12.6, 0. ] The output plot I get is,
I feel that you've already tried the correct solution: not plot the unwanted bars. You need a mask. Btw, never ever compare floats with ==. mask = ~np.isclose(dz, 0.0) Then, plot data filtered by this mask ax.bar3d(xpos[mask],ypos[mask],zpos[mask], dx[mask], dy[mask],dz[mask],color=cs[:mask.sum()],zsort='average',alpha=0.5,edgecolors='black') Note that you no longer need edgecolor. And that my way of computing cs (starting from your too long cs, and truncating it to the exact needed size) is not optimal. Also note that the 4 remaining black lines are not 0, but very small (0.1, 0.2, 0.2 and 0.6) values.
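Continuing from the variables above (and assuming five columns, as in your data), one way to keep a per-bar colour list that can be masked the same way as the coordinates, instead of truncating it, is to build it as a NumPy array:
cs = np.tile(['r', 'g', 'b', 'y', 'c'], ly)  # same colour cycle as before, one entry per bar
ax.bar3d(xpos[mask], ypos[mask], zpos[mask], dx[mask], dy[mask], dz[mask],
         color=cs[mask], zsort='average', alpha=0.5, edgecolors='black')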
3
2
77,424,774
2023-11-5
https://stackoverflow.com/questions/77424774/finding-the-last-row-that-meets-conditions-of-a-mask
This is my dataframe: df = pd.DataFrame({'a': [20, 21, 333, 444], 'b': [20, 20, 20, 20]}) I want to create column c by using this mask: mask = (df.a >= df.b) And I want to get the last row that meets this condition and create column c. The output that I want looks like this: a b c 0 20 20 NaN 1 21 20 NaN 2 333 20 NaN 3 444 20 x I tried the code below but it didn't work: df.loc[mask.cumsum().gt(1) & mask, 'c'] = 'x'
For a mask to flag the last value satisfying a condition, use duplicated() by keeping last. We know that mask consists of at most 2 values (True/False). If we can create another mask that flags the last occurrences these values as True, then we can chain it with mask itself for the desired mask. This is accomplished by ~mask.duplicated(keep='last') because mask.duplicated(keep='last') flags duplicates as True except for the last occurrence, so its negation gives us what we want. df = pd.DataFrame({'a': [20, 21, 333, 444], 'b': [20, 20, 20, 20]}) mask = (df.a >= df.b) df['c'] = pd.Series('x', df.index).where(mask & ~mask.duplicated(keep='last')) If you want to slice/assign, then you can use this chained mask as well. df.loc[mask & ~mask.duplicated(keep='last'), 'c'] = 'x' A shorter version of @mandy8055's answer is to call idxmax() to get the index of the highest cum sum (although this is showing a FutureWarning on pandas 2.1.0). As pointed out by @mozway, this works as long as there's at least one True value in mask. df.loc[mask.cumsum().idxmax(), 'c'] = 'x'
7
5
77,424,685
2023-11-5
https://stackoverflow.com/questions/77424685/how-do-i-change-a-variable-inside-a-function-ever-iteration-of-a-loop
I'm trying to have a boolean variable flip (True becomes False, False becomes True), and this variable is inside of a function. However, I have an issue where I either have to assign the variable inside the function (thus having the variable reset to what I assigned it to inside the function), or don't do that, which causes an UnboundLocalError. Any help on this would be great. I was creating a function, click(), which flips a boolean every iteration of a loop def click(): alternate = True #not doing this will cause an error alternate = not alternate #flipping the variable and then having a loop run using the function while True: click() print(alternate) #constantly prints False while what I wanted was to print True, False, True, etc.
Python does not have pass-by-reference; it has mutable and immutable objects. A bool is immutable, so rebinding the parameter inside the function does not change the caller's variable. You need to return the new value each time and reassign it, like this:
def click(alternate):
    alternate = not alternate  # flipping the variable
    return alternate
and:
alternate = True
while True:
    alternate = click(alternate)
    print(alternate)  # alternates False, True, False, ...
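If you really want to keep the no-argument call style, a global (or an attribute on a small class) also works, though returning the value as above is generally cleaner:
alternate = True

def click():
    global alternate      # rebind the module-level name instead of creating a new local
    alternate = not alternate

while True:
    click()
    print(alternate)      # False, True, False, ...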
3
3
77,424,193
2023-11-4
https://stackoverflow.com/questions/77424193/output-of-function-using-np-reciprocal-changes-based-on-print-or-further-unrela
Whilst computing SVD I encountered strange behaviour in relation to np.reciprocal. Under certain conditions (i.e. additional steps on an unrelated variable or printing a variable) the output changes for some reason. The following is a simplified version of the code. import numpy as np from numpy.linalg import eig from math import sqrt def ComputeSVD(A): m, n = A.shape eigenvalues, eigenvectors = eig(A.T @ A) Sigma = np.zeros((m,n)) np.fill_diagonal(Sigma, np.sqrt(eigenvalues)) print(Sigma) U = np.zeros((m,m)) for i in range(len(eigenvalues)): U[:,i] = A @ (eigenvectors[:,i]) return Sigma def PseudoInverse(A): S = ComputeSVD(A) S_inv = np.reciprocal(S, where= (S != 0)) return S_inv A = np.array([[1.0, 1.0, 1.0], [1.0, 2.0, 3.0], [1.0, 4.0, 9.0]]) print(PseudoInverse(A)) The output is incorrect for the non-diagonal entries: [[10.64956309 0. 0. ] [ 0. 1.2507034 0. ] [ 0. 0. 0.15015641]] [[ 0.09390057 -0.74904542 0.64830637] [-0.34457359 0.79955008 -0.73997748] [-0.92878374 0.32438956 6.65972239]] However when removing the parts computing related to U import numpy as np from numpy.linalg import eig from math import sqrt def ComputeSVD(A): m, n = A.shape eigenvalues, eigenvectors = eig(A.T @ A) Sigma = np.zeros((m,n)) np.fill_diagonal(Sigma, np.sqrt(eigenvalues)) print(Sigma) return (Sigma) def PseudoInverse(A): #same as before A = np.array([[1.0, 1.0, 1.0], [1.0, 2.0, 3.0], [1.0, 4.0, 9.0]]) print(PseudoInverse(A)) Now the Output is correct: [[10.64956309 0. 0. ] [ 0. 1.2507034 0. ] [ 0. 0. 0.15015641]] [[0.09390057 0. 0. ] [0. 0.79955008 0. ] [0. 0. 6.65972239]] Even stranger is that when we leave the parts relating to U but, before we invert the non-zero entries we, print(S) the issue is again rectified. import numpy as np from numpy.linalg import eig from math import sqrt def ComputeSVD(A): #Same as first time def PseudoInverse(A): S = ComputeSVD(A) print(S) S_inv = np.reciprocal(S, where= (S != 0)) return S_inv A = np.array([[1.0, 1.0, 1.0], [1.0, 2.0, 3.0], [1.0, 4.0, 9.0]]) print(PseudoInverse(A)) Now the correct output is produced again. (see the second output)
The line S_inv = np.reciprocal(S, where=(S != 0)) creates an uninitialized array and fills the entries where the condition S != 0 is True with reciprocals of the corresponding elements of S. All other elements are left unchanged, so they may be zeroes, but may also have some random values. In order to fix this, use the out argument to initialize an array where the result will be stored: S_inv = np.reciprocal(S, where=(S != 0), out=np.zeros_like(S))
2
2
77,423,535
2023-11-4
https://stackoverflow.com/questions/77423535/fill-pandas-column-forward-iteratively-but-without-using-iteration
I have a pandas data frame with a column where a condition is met based on other elements in the data frame (not shown). Additionally, I have a column that extends the validness one row further with the following rule: If a valid row is followed directly by ExtendsValid, that row is also valid, even if the underlying valid condition doesnt apply. Continue filling valid forward as long as ExtendsValid is 1 I have illustrated the result in column "FinalValid" (desired result. Doesnt have to be a new column, can also fill Valid forward). Note that rows 8 and 9 in the example also become valid. Also note that row 13 does NOT result in FinalValid, because you need a preceding valid row. Preceding valid row can be Valid or an extended valid row. So far if I had a problem like that I did a cumbersome multi-step process: Create a new column for when "Valid" or "ExtendsValid" is true Create a new column marking the start point for each "sub-series" (a consecutive set of ones) Number each sub-series fillna using "group by" for each sub series I can provide sample code but I am really looking for a totally different, more efficient approach, which of course must be non-iterating as well. Any ideas would be welcome. # Valid ExtendsValid FinalValid 1 0 0 0 2 1 0 1 3 0 0 0 4 0 0 0 5 1 0 1 6 0 0 0 7 1 0 1 8 0 1 1 9 0 1 1 10 0 0 0 11 1 0 1 12 0 0 0 13 0 1 0 14 0 0 0
IIUC, you want to ffill the 1s only if there is an uninterrupted series of 1s starting on Valid and eventually continuing on ExtendsValid. For this you can use a groupby.cummin: df['FinalValid'] = ( (df['Valid']|df['ExtendsValid']) .groupby(df['Valid'].cumsum()) .cummin() ) Output: NB. I slightly modified the input on row 3 to better illustrate the logic. # Valid ExtendsValid FinalValid 0 1 0 0 0 1 2 1 0 1 2 3 0 0 0 3 4 0 1 0 4 5 1 0 1 5 6 0 0 0 6 7 1 0 1 7 8 0 1 1 8 9 0 1 1 9 10 0 0 0 10 11 1 0 1 11 12 0 0 0
3
1
77,422,410
2023-11-4
https://stackoverflow.com/questions/77422410/manipulate-the-element-before-finding-sum-of-higher-elements-in-the-row
I have asked about finding the count of higher elements in the row/column and got a really good answer. However, that approach does not allow me to manipulate the current element first. My input array is something like this:
array([[-1, 7, -2, 1, 4],
       [ 6, 3, -3, 5, 1]])
Basically, I would like to have an output matrix which shows me, for each element, how many values are higher in the given row and column, like this:
array([[3, 0, 4, 2, 1],
       [0, 2, 4, 1, 3]], dtype=int64)
scipy's rankdata function really works well here (thanks to @Tom). The tricky part: since this matrix is a correlation matrix and scores are between -1 and 1, I would like to add one middle step (a normalization factor) before counting higher values:
If the element is negative, add 3 to that element and then count how many values in the row are higher.
If the element is positive, subtract 3 from that element and then count how many values in the row are higher.
e.g.: the first element of the row is negative, we add 3 and the row becomes 2 7 -2 1 4 -> the count of higher values for that element is 2; the second element of the row is positive, we subtract 3 and the row becomes -1 4 -2 1 4 -> the count of higher values for that element is 0 ...
So we do this normalization for each element, row by row, and the desired row-wise output would be:
2 0 2 3 1
1 3 4 2 3
I don't want to use a loop for this because the matrix is 11k x 12k and it would take too much time. If I use rankdata with a lambda, then instead of transforming one element at a time, it adds/subtracts 3 to all row values at once, which is not what I want.
import numpy as np
from scipy.stats import rankdata

corr = np.array([[-1, 7, -2, 1, 4],
                 [ 6, 3, -3, 5, 1]])

def element_wise_bigger_than(x, axis):
    return x.shape[axis] - rankdata(x, method='max', axis=axis)

ld = lambda t: t + 3 if t<0 else t-3
f = np.vectorize(ld)

element_wise_bigger_than(f(corr), 1)
A possible solution, based on numba and numba prange to parallelize the for loop: from numba import jit, prange, njit, set_num_threads import numpy as np @njit(parallel=True) def get_horizontal(a): z = np.zeros((a.shape[0], a.shape[1]), dtype=np.int32) for i in prange(a.shape[0]): for j in range(a.shape[1]): aux = a[i, j] if a[i, j] < 0: a[i, j] += 3 elif a[i, j] > 0: a[i, j] -= 3 else: pass z[i, j] = (a[i, j] < a[i, :]).sum() a[i, j] = aux return z a = np.array([[-1, 7, -2, 1, 4], [ 6, 3, -3, 5, 1]]) set_num_threads(6) # to use only 6 threads get_horizontal(a) Runtime: By using the following array, a = np.random.randint(-10, 10, size=(11000, 12000)) the runtime, on my computer, is less than 1 minute. Output: array([[2, 0, 2, 3, 1], [1, 3, 4, 2, 3]], dtype=int32)
3
1
77,422,076
2023-11-4
https://stackoverflow.com/questions/77422076/putting-contributions-of-continuous-values-in-a-discrete-2d-grid-based-on-dista
I have a numpy array containing coordinates of points (in 3D, but I am have started off by trying the method in 1D and 2D first) that I would like to fit in a discrete grid. However, I do not want to just move the points to the grid pixel which they are closest to, but rather put on each pixel a weighted value which depends on how far from that pixel the point actually is. For example, if in 1D I have a point x = 15.4, in my discrete space this point would contribute 60% to pixel 15 and 40% to pixel 16. I have managed to do this in 1D and the code is as follows: # Make test data N = 1000 data = np.random.normal( loc = 10, scale = 2, size = N ) # set data range, binsize, and coordinates xmin = 0 xmax = 20 xbinsize = 0.5 # define the bin centres xbincenters = np.arange( xmin+xbinsize*0.5, xmax, xbinsize ) # transform data and floor it to get the closest pixel in the discrete grid data_t = data/xbinsize - 0.5 bin_indices = np.floor(data_t).astype(int) # calculate the difference between continuous coordinate and closest pixel dx = data_t - bin_indices # get data range in bin indices index_min = np.ceil(xmin/xbinsize - 0.5).astype(int) index_max = np.ceil(xmax/xbinsize - 0.5).astype(int) # do the interpolation minlength = index_max - index_min output_array = np.bincount( bin_indices, weights = (1-dx), minlength = minlength ) +\ np.bincount( bin_indices+1, weights = dx, minlength = minlength ) # finally plot a histogram of the continuous values against output_array fig,ax = plt.subplots() ax.hist(data, bins=np.arange(0,20,0.5)) ax.plot( xbincenters, output_array, color = 'r' ) plt.show() comparison plot I have been trying to generalise this to 2D but to no avail, since np.bincount() only works on 1D arrays. Up until trying to find the nearest pixels, I feel like the code should be similar, as all the operations above can be broadcasted to a 2D array. In this case, we need to distribute any coordinate to the nearest 4 pixels, depending on a difference dx and now another difference dy (which, if you used the above code for an array of shape (1000, 2) would just be the 1st and second column of the array dx respectively). I tried unraveling my bin_indices and dx arrays to use np.bincount() on them, but at this point I am not sure if that is correct or what operations to do on the unraveled arrays. Would generalising this problem in 2D (and later in 3D) need a different kind of approach? Any help would be appreciated, thank you in advance!
So, if I get it correctly, you did "by hand" all the interpolation job (there is probably some code to do that somewhere, but I can't think of any right now), and used bincount just to increase the output array (because output_array[indices] += weight wouldn't have worked, indeed, if indices contain repetitions).

Then, you could just modify the indices to target a 2D array yourself, using a width and a height, and using indicesy*width+indicesx as indices, and then "unravel" the array once finished. Like this:

import numpy as np
import matplotlib.pyplot as plt

N = 1000
# Big array for performance test
datax = np.random.normal( loc = 10, scale = 2, size = N )
datay = np.random.normal( loc = 10, scale = 2, size = N )
# Or small one for result test (comment if not wanted)
datax = np.array([15.2, 1.5, 15.3])
datay = np.array([5.1, 10.5, 15.9])

# set data range, binsize, and coordinates
xmin = 0
xmax = 20
xbinsize = 0.5
ymin = 0
ymax = 20
ybinsize = 0.5

# define the bin centres
xbincenters = np.arange( xmin+xbinsize*0.5, xmax, xbinsize )
ybincenters = np.arange( ymin+ybinsize*0.5, ymax, ybinsize )

# transform data and floor it to get the closest pixel in the discrete grid
datax_t = datax/xbinsize - 0.5
datay_t = datay/ybinsize - 0.5
bin_indicesx = np.floor(datax_t).astype(int)
bin_indicesy = np.floor(datay_t).astype(int)

# calculate the difference between continuous coordinate and closest pixel
dx = datax_t - bin_indicesx
dy = datay_t - bin_indicesy

# get data range in bin indices
index_minx = np.ceil(xmin/xbinsize - 0.5).astype(int)
index_miny = np.ceil(ymin/ybinsize - 0.5).astype(int)
index_maxx = np.ceil(xmax/xbinsize - 0.5).astype(int)
index_maxy = np.ceil(ymax/ybinsize - 0.5).astype(int)

# do the interpolation
minlengthx = index_maxx - index_minx
minlengthy = index_maxy - index_miny
minlength = minlengthx*minlengthy
flat_output_array = np.bincount( bin_indicesy*minlengthx+bin_indicesx, weights = (1-dx)*(1-dy), minlength = minlength ) +\
                    np.bincount( bin_indicesy*minlengthx+bin_indicesx+1, weights = dx*(1-dy), minlength = minlength ) +\
                    np.bincount( (bin_indicesy+1)*minlengthx+bin_indicesx, weights = (1-dx)*dy, minlength = minlength ) +\
                    np.bincount( (bin_indicesy+1)*minlengthx+bin_indicesx+1, weights = dx*dy, minlength = minlength )
output_array = flat_output_array.reshape(-1, minlengthx)

# finally plot a histogram of the continuous values against output_array
plt.imshow(output_array)
plt.show()
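For what it's worth, here is an alternative sketch (my own addition, not part of the original answer): np.add.at does the same scatter-add directly on a 2D array and copes with repeated index pairs, so the manual flattening can be avoided. The extra row and column of padding absorb contributions from points sitting in the last bin, mirroring what the bincount version does implicitly.

# sketch only, reusing bin_indicesx, bin_indicesy, dx, dy, minlengthx, minlengthy from above
padded = np.zeros((minlengthy + 1, minlengthx + 1))
np.add.at(padded, (bin_indicesy,     bin_indicesx),     (1 - dx) * (1 - dy))
np.add.at(padded, (bin_indicesy,     bin_indicesx + 1), dx * (1 - dy))
np.add.at(padded, (bin_indicesy + 1, bin_indicesx),     (1 - dx) * dy)
np.add.at(padded, (bin_indicesy + 1, bin_indicesx + 1), dx * dy)
output_array_alt = padded[:minlengthy, :minlengthx]  # drop the padding again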
3
1
77,422,457
2023-11-4
https://stackoverflow.com/questions/77422457/what-is-the-reason-that-child-class-does-not-inherit-doc-property-method
I am a bit confused about the difference in behavior between __doc__ and other methods:

# python 3.10.13
class Parent:
    def __init__(self, doc):
        self._doc = doc

    def __doc__(self):
        return self._doc

    def __other__(self):
        return self._doc

class Child(Parent):
    pass

>>> print(Parent("test").__doc__())
test
>>> print(Child("test").__doc__)
None  # TypeError if call __doc__() since it is not defined
>>> print(Child("test").__other__())
test

But it does not work as a property either:

# python 3.10.13
class Parent:
    def __init__(self, doc):
        self._doc = doc

    @property
    def __doc__(self):
        return self._doc

    @property
    def __other__(self):
        return self._doc

class Child(Parent):
    pass

>>> print(Parent("test").__doc__)
test
>>> print(Child("test").__doc__)
None
>>> print(Child("test").__other__)
test

This is mentioned and somewhat discussed here @22750354, but I need help finding out why the Child class does not inherit the method/property of __doc__. And are there other methods like __doc__ in terms of inheritance? Thank you.
As a first approximation1, when you write

obj.a

Python attempts to look up a by checking whether obj has an attribute named a, then checking whether obj's class (obj.__class__) has an attribute a, then recursively checking each parent class (obj.__class__.__mro__) for an attribute a. If at any point an attribute named a is found, then that object gets returned as the result.

This is what inheritance means in Python. Child classes don't copy their parents, they just delegate attribute lookups to them.

Now, whenever you define a class as

class Child(Parent):
    ...

that class object always gets assigned a __doc__ attribute: the docstring if one was provided, or None otherwise.

>>> Child.__doc__ is None
True

Because of this, any lookup for the attribute __doc__ will only ever propagate up to the class level before a matching object is found. Attribute lookup will never reach the stage of checking the parent classes.

1The complete attribute lookup process is much more complicated. See 3.3.2. Customizing attribute access from the Python Data Model docs for the bigger picture.
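As a quick illustration of the above (my own addition, not part of the original answer): the implicitly created attribute sits directly in the child class's namespace, which is why the lookup never reaches Parent for __doc__ but does fall through for any other name.

class Parent:
    @property
    def __doc__(self):
        return "from Parent"

    @property
    def __other__(self):
        return "from Parent"

class Child(Parent):
    pass

print('__doc__' in vars(Child))    # True  - the class statement always sets __doc__ (None here)
print('__other__' in vars(Child))  # False - nothing shadows it on Child
print(Child().__doc__)             # None  - found on Child, so Parent's property is never reached
print(Child().__other__)           # "from Parent" - lookup falls through to the parent class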
3
4
77,415,312
2023-11-3
https://stackoverflow.com/questions/77415312/qcombobox-list-popup-display-in-fusion-style
I am using PyQt5. I want to know how to make my QComboBox open with ±10 items instead of the full screen. This only happens with the fusion style applied. Is there any way I can make this behave with a small drop-down instead? I have tried to use setMaxVisibleItems(5), but it didn't make a difference. Here is what it is looking like now:
As pointed out in QTBUG-89037, there's an undocumented stylesheet property that can be used to change the behaviour of the popup:

setStyleSheet('QComboBox {combobox-popup: 0}')

A value of 0 will show the normal scrollable list-view with a maximum number of visible items, whilst a value of 1 shows the humungous menu.

However, in some circumstances, setting a stylesheet may have undesirable side-effects, since it could potentially interfere with the default behaviour of QStyle. So an alternative solution might be to use a proxy-style, and reimplement its styleHint method so that it returns either 0 or 1 for QStyle.SH_ComboBox_Popup.

Both of these solutions work with both Qt5 and Qt6, and will also work for styles such as "cleanlooks" as well as "fusion". Here's a simple demo that implements both solutions:

from PyQt5 import QtWidgets
# from PyQt6 import QtWidgets

class Window(QtWidgets.QWidget):
    def __init__(self):
        super().__init__()
        self.combo = QtWidgets.QComboBox()
        self.combo.addItems(f'Item-{i:02}' for i in range(1, 50))
        layout = QtWidgets.QVBoxLayout(self)
        layout.addWidget(self.combo)

class ProxyStyle(QtWidgets.QProxyStyle):
    def styleHint(self, hint, option, widget, data):
        if hint == QtWidgets.QStyle.StyleHint.SH_ComboBox_Popup:
            return 0
        return super().styleHint(hint, option, widget, data)

if __name__ == '__main__':

    app = QtWidgets.QApplication(['Combo Test'])
    app.setStyle('fusion')
    # app.setStyle('cleanlooks')
    # style = ProxyStyle(app.style())
    # app.setStyle(style)
    app.setStyleSheet('QComboBox {combobox-popup: 0}')
    window = Window()
    window.setGeometry(600, 100, 200, 75)
    window.show()
    app.exec()
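A small follow-up note (my own addition, not from the original answer): if you only want to change a single combo box rather than the whole application, the same stylesheet property can presumably be set on the widget itself, e.g. inside Window.__init__ from the demo above.

self.combo.setStyleSheet('QComboBox {combobox-popup: 0}')  # affects only this combo box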
2
2
77,421,030
2023-11-4
https://stackoverflow.com/questions/77421030/how-to-generate-the-uml-diagram-from-the-python-code
I have this code repo. I created a manual UML diagram which looks like this:

I am trying to auto-generate the UML via pyreverse:

pyreverse -o png -p ShoppingCart ./mainService.py
Format png is not supported natively. Pyreverse will try to generate it using Graphviz...

Unfortunately, it gives me a blank diagram. What can I do to get the project's classes in the diagram?

This is the file structure:

.
├── Entity
│   ├── Apple.py
│   ├── Buy1Get1FreeApple.py
│   ├── Buy3OnPriceOf2Orange.py
│   ├── Offer.py
│   ├── Orange.py
│   ├── Product.py
│   └── ShoppingCart.py
├── Enum
│   └── ProductType.py
└── mainService.py
In short

Assuming the installation of pyreverse and graphviz is correct, all you need to do is to package your project by adding some empty __init__.py files in each folder. Alternatively, you'd have to add all the modules manually on the command line.

More details - step by step

About the error message

Assuming everything is installed correctly, your command line should give you the warning message, which is absolutely normal:

Format png is not supported natively. Pyreverse will try to generate it using Graphviz...

What's wrong?

The diagram will stay empty because you tell pyreverse to analyse a single file and there is no class defined in that file. If you add the different modules to be analysed manually:

pyreverse -o png -p ShoppingCart mainService.py Entity\Apple.py Enum\ProductType.py Entity\Orange.py Entity\ShoppingCart.py

you would then very well obtain a rudimentary diagram:

If you would add the option -AS upfront you'd get all ancestors in the project and all associated classes recursively:

How to package your project?

This is cumbersome. Fortunately, there's little missing to package your project. For a full reference, you may look here. But in short, it is sufficient to add an empty __init__.py file in your project folder, and in each subfolder where you store the modules:

.
├── Entity
│   ├── __init__.py    <<===== add this empty file
│   ├── Apple.py
│   ├── Buy1Get1FreeApple.py
│   ├── Buy3OnPriceOf2Orange.py
│   ├── Offer.py
│   ├── Orange.py
│   ├── Product.py
│   └── ShoppingCart.py
├── Enum
│   ├── __init__.py    <<===== add this empty file
│   └── ProductType.py
├── __init__.py        <<===== add this empty file
└── mainService.py

You will then be able to run the simpler command line:

pyreverse -AS -o png -p ShoppingCart .

and obtain this magnificent diagram:

The packaging helps python and pyreverse to understand that these are not files to be analysed in isolation, but in the context of a package made of subpackages, etc.
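For reference, one quick way to create those empty files (my own addition, not part of the original answer; it assumes you run it from the project root):

from pathlib import Path

# create an empty __init__.py in the root and in each subpackage folder
for folder in ["", "Entity", "Enum"]:
    (Path(folder) / "__init__.py").touch()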
4
7
77,420,330
2023-11-4
https://stackoverflow.com/questions/77420330/how-to-retain-sqlalchemy-model-after-adding-row-number
I'm trying to filter rows in some method, so I need the output model to be of the same type as the input model to the SQLAlchemy query. I followed this answer https://stackoverflow.com/a/38160409/1374078. However, would it be possible to get the original model, so that I can access the model's methods by name, e.g. row.foo_field? Otherwise I'm getting the generic row type:

> type(row)
<class 'sqlalchemy.engine.row.Row'>
Assuming that your code looks something like this*:

with orm.Session(engine) as s:
    row_number_column = (
        sa.func.row_number()
        .over(partition_by=User.name, order_by=sa.desc(User.id))
        .label('row_number')
    )
    q = sa.select(User)
    q = q.add_columns(row_number_column)
    for row in s.execute(q):
        print(row)

Then the results look like this:

(<__main__.User object at 0x7fb6e2fe5580>, 1)
(<__main__.User object at 0x7fb6e2fe5670>, 1)
(<__main__.User object at 0x7fb6e2fe56a0>, 1)
(<__main__.User object at 0x7fb6e2fe56d0>, 1)
(<__main__.User object at 0x7fb6e2fe5700>, 1)

As you can see, each result row is a tuple of (user_instance, row_number), so you can access the user instance as row[0], or unpack during iteration:

for user, row_number in s.execute(q):
    print(user, row_number)

* This is a SQLAlchemy 2.0-style query, but the legacy Query style query in the Q&A linked in the question produces the same result.
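If the end goal is to filter on the window function and still get User instances back, one pattern worth sketching (my own addition, not part of the original answer; it reuses User, row_number_column and the session s from above) is to wrap the query in a subquery and map the entity onto it with aliased:

# keep only the newest row per name, yielding ordinary User objects
subq = sa.select(User, row_number_column).subquery()
user_alias = orm.aliased(User, subq)

q = sa.select(user_alias).where(subq.c.row_number == 1)
for user in s.scalars(q):
    print(user.name)  # mapped attributes and methods are available again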
2
2
77,420,886
2023-11-4
https://stackoverflow.com/questions/77420886/end-of-first-sequence-of-nans-in-numpy-array
I have a two dimensional numpy array where some rows may have nans. I want to detect the occurrence or absence of nans in the rows of these arrays as per the following prescription:

- If a row does not start with a nan, then the result for that row will be -1.
- If a row starts with a nan, then the result will be the index of the last nan in the continuous unbroken sequence of nans which started at the beginning of that row.

What is the most optimal way of doing this? In my actual work, I will be dealing with numpy arrays with millions of rows.

As an example, let's consider the array below:

import numpy as np

arr = np.array([[1,11,np.nan,111,1111],
                [np.nan, np.nan, np.nan, 2, 22],
                [np.nan, np.nan, 3, 33, np.nan],
                [4, np.nan, np.nan, 44, 444],
                [np.nan, 5, 55, np.nan, 555],
                [np.nan, np.nan, np.nan, np.nan, np.nan]])

Here the expected result will be result = [-1, 2, 1, -1, 0, 4].

Below is a successful code that I have tried. But I would like a more optimal solution.

result = []
for i in range(arr.shape[0]):
    if np.isnan(arr[i])[0] == False:
        result += [-1]
    elif np.all(np.isnan(arr[i])):
        result += [arr.shape[1]-1]
    else:
        result += [np.where(np.isnan(arr[i]) == False)[0][0] - 1]
You can add a column of non-nan values with hstack, check which values are nan with isnan, and get the position of the first non-nan with argmin:

out = np.isnan(np.hstack([arr, np.ones((arr.shape[0], 1))])).argmin(axis=1) - 1

Or, without concatenation, using where to fix the case in which all values are nan:

tmp = np.isnan(arr)
out = np.where(tmp.all(axis=1), arr.shape[1], tmp.argmin(axis=1)) - 1

Output:

out = array([-1, 2, 1, -1, 0, 4])
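As a quick sanity check (my own addition), either variant above reproduces the expected result from the question, with out being the array computed by the answer's code:

expected = np.array([-1, 2, 1, -1, 0, 4])
assert np.array_equal(out, expected)  # holds for both the hstack and the where version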
6
4
77,419,949
2023-11-3
https://stackoverflow.com/questions/77419949/adding-key-pair-values-into-a-dict-missing
I have been trying to add key values of a list into a dict, where the key is the amount of times X is repeated in the list and the value is X itself.

my_list = ["apple", "cherry", "apple", "potato", "tomato", "apple"]
my_grocery = {}

while True:
    try:
        prompt = input().upper().strip()
        my_list.append(prompt)
    except EOFError:
        my_list_unique = sorted(list(set(my_list)))
        for _ in my_list_unique:
            my_grocery[my_list.count(_)] = _
            #print(f'{my_list.count(_)} {_}')
        print(my_grocery)
        break

The expected output was:

{3: APPLE, 1: CHERRY, 1: POTATO, 1: TOMATO}

The actual output received was:

{3: 'APPLE', 1: 'TOMATO'}

Does anyone have any idea why that is?
You can't have duplicate keys in a dict - in your case the key 1 keeps getting overwritten. You can use Counter instead, which stores the mapping the other way around (item -> count) and counts the occurrences of each product type:

from collections import Counter

my_list = []

while True:
    try:
        prompt = input().strip()
        if not prompt:
            break
        my_list.append(prompt)
    except EOFError:
        break

item_counts = Counter(my_list)
print(item_counts)

Counter({'tomato': 2, 'apple': 2, 'mango': 1})
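If the goal is the "3 APPLE"-style output hinted at by the commented-out print in the question, you can presumably just iterate over the counter (my own addition, not part of the original answer):

for item, count in item_counts.most_common():
    print(f'{count} {item}')  # e.g. "2 tomato", "2 apple", "1 mango"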
3
2
77,419,705
2023-11-3
https://stackoverflow.com/questions/77419705/all-combinations-of-elements-in-a-vector-in-a-larger-vector
I have the following input vector.

['a','b','c']

I want to list all possible combinations. There are three restrictions:

- The values have to be inserted into an output vector of six positions.
- One given value from the input vector can only occur once in the output vector.
- The order of the values has to be the same in the output vector as in the input vector.

Two permissions:

- Positions can be left empty.
- (Not shown here:) The input vector can have more than one of any given value (such as ['a','b','a','c']).

Given this input vector above, the only valid outputs are the following (I might have missed one or two but you get the idea):

[' ',' ',' ','a','b','c'],
[' ',' ','a',' ','b','c'],
[' ','a',' ',' ','b','c'],
['a',' ',' ',' ','b','c'],
[' ',' ','a','b',' ','c'],
[' ','a','b',' ',' ','c'],
['a','b',' ',' ',' ','c'],
[' ','a',' ','b',' ','c'],
['a',' ','b',' ',' ','c'],
['a',' ',' ','b',' ','c'],
[' ',' ','a','b','c',' '],
[' ','a','b','c',' ',' '],
['a','b','c',' ',' ',' '],
[' ','a',' ','b','c',' '],
['a',' ','b','c',' ',' '],
['a',' ','b',' ','c',' ']

My first idea was to generate circa 6! vectors where the vectors are random combinations of the possible values, including the empty value:

[abc ]|[abc ]...[abc ]

and then remove all vectors where a/b/c occur more often than they do in the input vector, or where the order of a, b, c is not the same as in the input vector. But this brute force measure would take ages. I'll have to do this a lot, and for input vectors and output vectors of varying sizes.
Choosing 3 out of the 6 indices and placing the elements there:

from itertools import combinations

v = ['a','b','c']
n = 6

for I in combinations(range(n), len(v)):
    out = [' '] * n
    for i, out[i] in zip(I, v):
        pass
    print(out)
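A small follow-up note (my own addition, not part of the answer): since each output is determined entirely by which positions get filled, the number of valid outputs for this example is C(6, 3) = 20, which gives a quick way to check the hand-written list in the question for missing cases.

from math import comb
print(comb(6, 3))  # 20 ways to place 3 ordered values into 6 slots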
3
3
77,412,120
2023-11-2
https://stackoverflow.com/questions/77412120/abstractmethod-returns-a-type-of-self
I'm having some problems with mypy. I have an abstract class and a class that inherits from it:

from __future__ import annotations

from abc import abstractmethod, ABC
from typing import Union

class Base(ABC):
    @abstractmethod
    def the_method(self, a_class: Union[Base, float, int]) -> None:
        ...

    @abstractmethod
    def other_method(self) -> None:
        ...

class MyClass(Base):
    def __init__(self, something: str = "Hello") -> None:
        self.something = something

    def the_method(self, a_class: Union[MyClass, float, int]) -> None:
        print(a_class)

    def other_method(self) -> None:
        print(self.something)

I am aware of the Liskov substitution principle. However, MyClass is a type of Base since it inherits from it. But mypy still raises an error:

main.py:21: error: Argument 1 of "the_method" is incompatible with supertype "Base"; supertype defines the argument type as "Base | float | int"  [override]
main.py:21: note: This violates the Liskov substitution principle
main.py:21: note: See https://mypy.readthedocs.io/en/stable/common_issues.html#incompatible-overrides
Found 1 error in 1 file (checked 1 source file)

What am I doing wrong?
Base.the_method accepts Base, so subclasses need to accept at least Base too. If I have a class Foo(Base), that should be accepted. At the moment, it's not accepted by MyClass.the_method. Liskov substitution works in the opposite direction for method parameters and return types - superclasses are OK for parameters, subclasses are OK for return types.

You have lots of options. For example, you could make MyClass.the_method accept Base

class MyClass(Base):
    def the_method(self, a_class: Union[Base, float, int]) -> None:
        ...

or (and there's a discussion in the comments about whether this next example also violates LSP ... either way, mypy is overly permissive with this) change Base to accept instances of the enclosing class

from typing import Self

class Base(ABC):
    @abstractmethod
    def the_method(self, a_class: Union[Self, float, int]) -> None:
        ...
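Another pattern that sometimes fits (my own addition, not part of the original answer - just a sketch): make Base generic over the concrete argument type, so each subclass declares up front which type its the_method accepts, and mypy checks the override against that specialisation instead of against Base itself.

from abc import ABC, abstractmethod
from typing import Generic, TypeVar, Union

T = TypeVar("T", bound="Base")

class Base(ABC, Generic[T]):
    @abstractmethod
    def the_method(self, a_class: Union[T, float, int]) -> None:
        ...

    @abstractmethod
    def other_method(self) -> None:
        ...

class MyClass(Base["MyClass"]):
    def __init__(self, something: str = "Hello") -> None:
        self.something = something

    def the_method(self, a_class: Union["MyClass", float, int]) -> None:
        print(a_class)

    def other_method(self) -> None:
        print(self.something)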
3
5