question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---|
72,880,426 | 2022-7-6 | https://stackoverflow.com/questions/72880426/how-to-get-list-of-entities-in-python-wikidata | I need to get all the info about some writers in wikidata For example - https://www.wikidata.org/wiki/Q39829 My code from wikidata.client import Client client = Client() entity = client.get('Q39829', load=True) # Spouse spouse_prop = client.get('P26') spouse = entity[spouse_prop] print('Mother: ', spouse.label) # Children child_prop = client.get('P40') child = entity[child_prop] print('Children: ', dir(child)) print(child.label) Problem The problem is with the properties with the list of values For example, he has 3 children And in the result I have only one name Question How can I get the list of all children? | From the docs for Wikidata: Although it implements Mapping[EntityId, object], it [entity] actually is multidict. See also getlist() method. Which means you can do: >>> [c.label for c in entity.getlist(child_prop)] [m'Joe Hill', m'Owen King', m'Naomi King'] | 4 | 5 |
72,801,333 | 2022-6-29 | https://stackoverflow.com/questions/72801333/how-to-pass-url-as-a-path-parameter-to-a-fastapi-route | I have created a simple API using FastAPI, and I am trying to pass a URL to a FastAPI route as an arbitrary path parameter. from fastapi import FastAPI app = FastAPI() @app.post("/{path}") def pred_image(path:str): print("path",path) return {'path':path} When I test it, it doesn't work and throws an error. I am testing it this way: http://127.0.0.1:8000/https://raw.githubusercontent.com/ultralytics/yolov5/master/data/images/zidane.jpg | Option 1 You could simply use Starlette's path convertor to capture arbitrary paths. As per Starlette documentation, path returns the rest of the path, including any additional / characters. However, if your URL includes query parameters as well, you should append the query string to the path, as shown below: from fastapi import FastAPI, Request app = FastAPI() @app.get('/{_:path}') async def pred_image(request: Request): url = request.url.path[1:] if not request.url.query else request.url.path[1:] + "?" + request.url.query return {'url': url} or: @app.get('/{full_path:path}') async def pred_image(full_path: str, request: Request): url = full_path if not request.url.query else full_path + "?" + request.url.query return {'url': url} or, using split() to get the whole URL after the 3rd occurence of / character: @app.get('/{_:path}') async def pred_image(request: Request): url = request.url._url.split('/', 3)[-1] return {'url': url} Test using the link below: http://127.0.0.1:8000/https://www.google.com/search?q=my+query Please note that the URL above will be automatically encoded by the web browser (as URLs can only contain ASCII characters), meaning that before sending the request, any special characters will be converted to other reserved ones, using the % sign followed by a hexadecimal pair. Hence, behind the scenes the request URL would look like this: http://127.0.0.1:8000/https%3A%2F%2Fwww.google.com%2Fsearch%3Fq%3Dmy%2Bquery If, for instance, you would like to test the endpoint through other clients, such as Python requests lib, you should then encode the URL yourself before sending the request. You can do that using Python's urllib.parse.quote() function, as shown below. Note that the quote() function considers / characters safe by default, meaning that any / characters won't be encoded. Hence, in this case, you should set the safe parameter to '' (i.e., empty string), in order to encode / characters as well. Test using Python requests: import requests from urllib.parse import quote base_url = 'http://127.0.0.1:8000/' path_param = 'https://www.google.com/search?q=my+query' url = base_url + quote(path_param, safe='') r = requests.get(url) print(r.json()) Output: {'url': 'https://www.google.com/search?q=my+query'} Test using HTML <form>: If you would like to test the above by passing the URL through an HTML <form>, instead of manually typing it after the base URL on your own, please have a look at Option 3 of this answer, which demonstrates how to convert a form <input> element into a path parameter on <form> submission. The encoding of the URL takes place behind the scences when the <form> is submitted, likely using functions such as encodeURIComponent() or encodeURI(); hence, there is no need for you to apply any techniques, in order to quote the URL beforehand, such as in Python requests earlier. Option 2 As @luk2302 mentioned in the comments section, your client (i.e., end user, javascript, etc.) needs to encode the URL. 
The encoded URL, however, as provided by @luk2302 does not seem to work, leading to a "detail": "Not Found" error. The reason is simply because when FastAPI decodes the complete request URL (i.e., request.url), any %2F characters are converted back to /, and hence, it recognizes those forward slashes as path separators and attempts to find a matching API route. Thus, you would need to encode/decode the URL twice, in order to work. On server side, you can decode the URL (twice) as follows: from urllib.parse import unquote @app.get('/{path}') async def pred_image(path: str): return {'url': unquote(unquote(path))} Test using the link below: http://127.0.0.1:8000/https%253A%252F%252Fwww.google.com%252Fsearch%253Fq%253Dmy%252Bquery Test using Python requests: import requests from urllib.parse import quote base_url = 'http://127.0.0.1:8000/' path_param = 'https://www.google.com/search?q=my+query' url = base_url + quote(quote(path_param, safe=''), safe='') r = requests.get(url) print(r.json()) Option 3 Use a query parameter instead, as shown below: @app.get('/') async def pred_image(url: str): return {'url': url} Test using the link below: http://127.0.0.1:8000/?url=https://www.google.com/search?q=my+query If, again, you would like to use Python requests lib to test the endpoint above, have a look at the example below. Note that since the image URL is now being sent as part of the query string (i.e., as a query parameter), requests will take care of the URL encoding; hence, there is no need for using the quote() function this time. Test using Python requests: import requests base_url = 'http://127.0.0.1:8000/' params = {'url': 'https://www.google.com/search?q=my+query'} r = requests.get(base_url, params=params) print(r.json()) Option 4 Since your endpoint seems to accept POST requests, you might consider having the client send the image URL in the body of the request, instead of passing it as a path parameter. Please have a look at the answers here, here and here, as well as FastAPI's documentation, on how to do that. Note: If you are testing this by typing the aforementioned URLs into the address bar of a web browser, then keep using @app.get() routes, as when you type a URL in the address bar of your web browser, it performs a GET request. If, however, you need this to work with POST requests, you will have to change the endpoint's decorator to @app.post() (as shown in your question); otherwise, you will come across 405 "Method Not Allowed" error (see here for more details on such errors). Finally, the endpoints in the examples above have been defined with async def, but depending on the operations that may take place inside the endpoints of your application, you may need to define them with normal def, or use other solutions, when blocking operations need to be performed. Hence, I would suggest having a look at this answer to better understand these concepts and when to use async def/def. | 6 | 14 |
72,815,386 | 2022-6-30 | https://stackoverflow.com/questions/72815386/python-get-the-source-code-of-the-line-that-called-me | Using the python inspect module, in a function, I would like to get the source code of the line that called that function. So in the following situation: def fct1(): # Retrieve the line that called me and extract 'a' return an object containing name='a' a = fct1() I would like to retrieve the string "a = fct1()" in fct1 All I can do so far is to retrieve the code of the whole module with : code = inspect.getsource(sys._getframe().f_back) Please note that fct1() can be called many times in the main module. Eventually, what I want is to retrieve the variable name "a" which is easy if I can get s = "a = fct1()" in fct1() : a_name = s.split("=")[0].strip() | A really dumb solution would be to capture a stack trace and take the 2nd line: import traceback def fct1(): stack = traceback.extract_stack(limit=2) print(traceback.format_list(stack)[0].split('\n')[1].strip()) # prints "a = fct1()" return None a = fct1() @jtlz2 asked for it in a decorator import traceback def add_caller(func): def wrapper(*args, **kwargs): stack = traceback.extract_stack(limit=2) func(*args, caller=traceback.format_list(stack)[0].split('\n')[1].strip(), **kwargs) return wrapper @add_caller def fct1(caller): print(caller) fct1() And it does work. UPDATE Here's a functional version (which needed limit=3 for the caller of the caller): import traceback def source_line_of_caller(): """Return the Python source code line that called your function.""" stack = traceback.extract_stack(limit=3) return traceback.format_list(stack)[0].split('\n')[1].strip() def _tester(): assert "_tester() # whoa, comments too" == source_line_of_caller() _tester() # whoa, comments too | 7 | 7 |
72,868,550 | 2022-7-5 | https://stackoverflow.com/questions/72868550/how-to-perform-an-elementwise-maximum-of-two-columns-in-a-python-polars-expressi | How can I calculate the elementwise maximum of two columns in Polars inside an expression? Polars version = 0.13.31 Problem statement as code: import polars as pl import numpy as np df = pl.DataFrame({ "a": np.arange(5), "b": np.arange(5)[::-1] }) # Produce a column with the values [4, 3, 2, 3, 4] using df.select([ ... ]).alias("max(a, b)") Things I've tried Polars claims to support numpy universal functions (docs), which includes np.maximum which does what I'm asking for. However when I try that I get an error. df.select([ np.maximum(pl.col("a"), pl.col("b")).alias("max(a, b)") ]) # TypeError: maximum() takes from 2 to 3 positional arguments but 1 were given There appears to be no Polars builtin for this, there is pl.max but this returns only the single maximum element in an array. Using .map() my_df.select([ pl.col(["a", "b"]).map(np.maximum) ]) # PanicException Current workaround I'm able to do this using the following snippet however I want to be able to do this inside an expression as it's much more convenient. df["max(a, b)"] = np.maximum(df["a"], df["b"]) | You can use .max_horizontal() df = pl.select( a = pl.int_range(0, 5), b = pl.int_range(0, 5).reverse(), ) shape: (5, 2) ┌─────┬─────┐ │ a ┆ b │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞═════╪═════╡ │ 0 ┆ 4 │ │ 1 ┆ 3 │ │ 2 ┆ 2 │ │ 3 ┆ 1 │ │ 4 ┆ 0 │ └─────┴─────┘ df.with_columns( pl.max_horizontal('a', 'b').alias('max(a, b)') ) shape: (5, 3) ┌─────┬─────┬───────────┐ │ a ┆ b ┆ max(a, b) │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═══════════╡ │ 0 ┆ 4 ┆ 4 │ │ 1 ┆ 3 ┆ 3 │ │ 2 ┆ 2 ┆ 2 │ │ 3 ┆ 1 ┆ 3 │ │ 4 ┆ 0 ┆ 4 │ └─────┴─────┴───────────┘ The API also contains other horizontal functions. | 5 | 14 |
72,821,244 | 2022-6-30 | https://stackoverflow.com/questions/72821244/polars-get-grouped-rows-where-column-value-is-maximum | So consider this snippet import polars as pl df = pl.DataFrame({'class': ['a', 'a', 'b', 'b'], 'name': ['Ron', 'Jon', 'Don', 'Von'], 'score': [0.2, 0.5, 0.3, 0.4]}) df.group_by('class').agg(pl.col('score').max()) This gives me: shape: (2, 2) ┌───────┬───────┐ │ class ┆ score │ │ --- ┆ --- │ │ str ┆ f64 │ ╞═══════╪═══════╡ │ a ┆ 0.5 │ │ b ┆ 0.4 │ └───────┴───────┘ But I want the entire row of the group that corresponded to the maximum score. I can do a join with the original dataframe like sdf = df.group_by('class').agg(pl.col('score').max()) sdf.join(df, on=['class', 'score']) To get shape: (2, 3) ┌───────┬───────┬──────┐ │ class ┆ score ┆ name │ │ --- ┆ --- ┆ --- │ │ str ┆ f64 ┆ str │ ╞═══════╪═══════╪══════╡ │ a ┆ 0.5 ┆ Jon │ │ b ┆ 0.4 ┆ Von │ └───────┴───────┴──────┘ Is there any way to avoid the join and include the name column as part of the groupby aggregation? | You can use a sort_by expression to sort your observations in each group by score, and then use the last expression to take the last observation. For example, to take all columns: df.group_by('class').agg( pl.all().sort_by('score').last(), ) shape: (2, 3) ┌───────┬──────┬───────┐ │ class ┆ name ┆ score │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ f64 │ ╞═══════╪══════╪═══════╡ │ a ┆ Jon ┆ 0.5 │ │ b ┆ Von ┆ 0.4 │ └───────┴──────┴───────┘ Edit: using over If you have more than one observation that is the max, another easy way to get all rows is to use over. For example, if your data has two students in class b ('Von' and 'Yvonne') who tied for highest score: df = pl.DataFrame( { "class": ["a", "a", "b", "b", "b"], "name": ["Ron", "Jon", "Don", "Von", "Yvonne"], "score": [0.2, 0.5, 0.3, 0.4, 0.4], } ) df shape: (5, 3) ┌───────┬────────┬───────┐ │ class ┆ name ┆ score │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ f64 │ ╞═══════╪════════╪═══════╡ │ a ┆ Ron ┆ 0.2 │ │ a ┆ Jon ┆ 0.5 │ │ b ┆ Don ┆ 0.3 │ │ b ┆ Von ┆ 0.4 │ │ b ┆ Yvonne ┆ 0.4 │ └───────┴────────┴───────┘ df.filter(pl.col('score') == pl.col('score').max().over('class')) shape: (3, 3) ┌───────┬────────┬───────┐ │ class ┆ name ┆ score │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ f64 │ ╞═══════╪════════╪═══════╡ │ a ┆ Jon ┆ 0.5 │ │ b ┆ Von ┆ 0.4 │ │ b ┆ Yvonne ┆ 0.4 │ └───────┴────────┴───────┘ | 9 | 13 |
72,868,256 | 2022-7-5 | https://stackoverflow.com/questions/72868256/chromedrivermanager-install-doesnt-work-webdriver-manager | I tried code below in TEST.py:32 print("ChromeDriverManager().install() :", ChromeDriverManager().install()) [WDM] - ====== WebDriver manager ====== 2022-07-05 19:49:04,445 INFO ====== WebDriver manager ====== Traceback (most recent call last): File "d:\Python\PYTHONWORKSPACE\repo\Auto-booking-master\src\TEST.py", line 32, in <module> print("ChromeDriverManager().install() :", ChromeDriverManager().install()) File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\webdriver_manager\chrome.py", line 38, in install driver_path = self._get_driver_path(self.driver) File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\webdriver_manager\core\manager.py", line 29, in _get_driver_path binary_path = self.driver_cache.find_driver(driver) File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\webdriver_manager\core\driver_cache.py", line 95, in find_driver driver_version = driver.get_version() File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\webdriver_manager\core\driver.py", line 43, in get_version self.get_latest_release_version() File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\webdriver_manager\drivers\chrome.py", line 37, in get_latest_release_version self.browser_version = get_browser_version_from_os(self.chrome_type) File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\webdriver_manager\core\utils.py", line 152, in get_browser_version_from_os cmd_mapping = { KeyError: 'google-chrome' Please help me. | May be your web driver manager is outdated. Uninstall your web driver manager by... pip uninstall webdriver_manager ...then again install... pip install webdriver_manager Install webdriver-manager: pip install webdriver-manager, pip install selenium And then: # selenium 4 from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.chrome.service import Service from webdriver_manager.chrome import ChromeDriverManager option = webdriver.ChromeOptions() option.add_argument("start-maximized") driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()),options=option) driver.get('https://www.google.com/') WebDriverManager | 7 | 16 |
72,859,535 | 2022-7-4 | https://stackoverflow.com/questions/72859535/pytest-print-traceback-immediately-in-live-log-not-at-the-end-in-summary | Given test.py import logging log = logging.getLogger("mylogger") def test(): log.info("Do thing 1") log.info("Do thing 2") raise Exception("This is deeply nested in the code") log.info("Do thing 3") assert True def test_2(): log.info("Nothing interesting here") When I run pytest --log-cli-level NOTSET test.py I get the output ======================= test session starts ======================== platform win32 -- Python 3.9.4, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 rootdir: C:\work\scraps\python-test-scraps collected 2 items test.py::test_1 -------------------------- live log call --------------------------- INFO mylogger:test.py:6 Do thing 1 INFO mylogger:test.py:7 Do thing 2 FAILED [ 50%] test.py::test_2 -------------------------- live log call --------------------------- INFO mylogger:test.py:13 Nothing interesting here PASSED [100%] ============================= FAILURES ============================= ______________________________ test_1 ______________________________ def test_1(): log.info("Do thing 1") log.info("Do thing 2") > raise Exception("This is deeply nested in the code") E Exception: This is deeply nested in the code test.py:8: Exception ------------------------ Captured log call ------------------------- INFO mylogger:test.py:6 Do thing 1 INFO mylogger:test.py:7 Do thing 2 ===================== short test summary info ====================== FAILED test.py::test_1 - Exception: This is deeply nested in the code =================== 1 failed, 1 passed in 0.07s ==================== I find this output format very confusing, especially for bigger tests suits. When I follow the log and see FAILED, then I have to jump to the second half of the log and search for the corresponding FAILURE or ERROR. I'd like to alter pytest's output, so that the (so called) summary at the end is printed immediately when the according events happen, so that the output can be read chronologically without having to jump around. So far, I only found a way to disable the summary. pytest --log-cli-level NOTSET --no-summary test.py prints ... ======================= test session starts ======================== platform win32 -- Python 3.9.4, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 rootdir: C:\work\scraps\python-test-scraps collected 2 items test.py::test_1 -------------------------- live log call --------------------------- INFO mylogger:test.py:6 Do thing 1 INFO mylogger:test.py:7 Do thing 2 FAILED [ 50%] test.py::test_2 -------------------------- live log call --------------------------- INFO mylogger:test.py:13 Nothing interesting here PASSED [100%] =================== 1 failed, 1 passed in 0.06s ==================== This is somewhat better, but also worse, because the trace is missing. Is there an official way to print the trace inside the live log call? | I don't know if it's "official" or not, but I did find this answer in the Pytest dev forums, recommending the use of pytest-instafail. It's a maintained plugin, so it might do what you're looking to do. It met my needs; now I can see tracebacks in the live-log output, and not have to wait till the end of a test suite to see what went wrong. To print logs, fails and errors only once and in the original order, run ... pytest --log-cli-level NOTSET --instafail --show-capture no --no-summary --instafail enables the plugin. pytest loads it by default, but the plugin itself is written to do nothing by default. 
--show-capture no disables printing the logs after an error occurred. Without this, you would print the logs twice in case of errors: Once before the error (because of --log-cli-level NOTSET) and once after the error (like pytest usually does). That the plugin still prints the traceback first, and captured logs after that, goes against your goals to print everything in the original order. But since you were already using --log-cli, the combination with --show-capture no should be sufficient for your use-case. | 4 | 4 |
72,842,597 | 2022-7-2 | https://stackoverflow.com/questions/72842597/why-is-aexit-not-fully-executed-when-it-has-await-inside | This is the simplified version of my code: main is a coroutine which stops after the second iteration. get_numbers is an async generator which yields numbers but within an async context manager. import asyncio class MyContextManager: async def __aenter__(self): print("Enter to the Context Manager...") return self async def __aexit__(self, exc_type, exc_value, exc_tb): print(exc_type) print("Exit from the Context Manager...") await asyncio.sleep(1) print("This line is not executed") # <------------------- await asyncio.sleep(1) async def get_numbers(): async with MyContextManager(): for i in range(30): yield i async def main(): async for i in get_numbers(): print(i) if i == 1: break asyncio.run(main()) And the output is: Enter to the Context Manager... 0 1 <class 'asyncio.exceptions.CancelledError'> Exit from the Context Manager... I have two questions actually: From my understanding, AsyncIO schedules a Task to be called soon in the next cycle of the event loop and gives __aexit__ a chance to execute. But the line print("This line is not executed") is not executed. Why is that? Is it correct to assume that if we have an await statement inside the __aexit__, the code after that line is not going to execute at all and we shouldn't rely on that for cleaning? Output of the help() on async generators shows that: | aclose(...) | aclose() -> raise GeneratorExit inside generator. so why I get <class 'asyncio.exceptions.CancelledError'> exception inside the __aexit__ ? * I'm using Python 3.10.4 | This is not specific to __aexit__ but to all async code: When an event loop shuts down it must decide between cancelling remaining tasks or preserving them. In the interest of cleanup, most frameworks prefer cancellation instead of relying on the programmer to clean up preserved tasks later on. This kind of shutdown cleanup is a separate mechanism from the graceful unrolling of functions, contexts and similar on the call stack during normal execution. A context manager that must also clean up during cancellation must be specifically prepared for it. Still, in many cases it is fine not to be prepared for this since many resources fail safe by themselves. In contemporary event loop frameworks there are usually three levels of cleanup: Unrolling: The __aexit__ is called when the scope ends and might receive an exception that triggered the unrolling as an argument. Cleanup is expected to be delayed as long as necessary. This is comparable to __exit__ running synchronous code. Cancellation: The __aexit__ may receive a CancelledError1 as an argument or as an exception on any await/async for/async with. Cleanup may delay this, but is expected to proceed as fast as possible. This is comparable to KeyboardInterrupt cancelling synchronous code. Closing: The __aexit__ may receive a GeneratorExit as an argument or as an exception on any await/async for/async with. Cleanup must proceed as fast as possible. This is comparable to GeneratorExit closing a synchronous generator. To handle cancellation/closing, any async code β be it in __aexit__ or elsewhere β must expect to handle CancelledError or GeneratorExit. While the former may be delayed or suppressed, the latter should be dealt with immediately and synchronously2. 
async def __aexit__(self, exc_type, exc_value, exc_tb): print("Exit from the Context Manager...") try: await asyncio.sleep(1) # an exception may arrive here except GeneratorExit: print("Exit stage left NOW") raise except asyncio.CancelledError: print("Got cancelled, just cleaning up a few things...") await asyncio.sleep(0.5) raise else: print("Nothing to see here, taking my time on the way out") await asyncio.sleep(1) Note: It is generally not possible to exhaustively handle these cases. Different forms of cleanup may interrupt one another, such as unrolling being cancelled and then closed. Handling cleanup is only possible on a best effort basis; robust cleanup is achieved by fail safety, for example via transactions, instead of explicit cleanup. Cleanup of asynchronous generators in specific is a tricky case since they can be cleaned up by all cases at once: Unrolling as the generator finishes, cancellation as the owning task is destroyed or closing as the generator is garbage collected. The order at which the cleanup signals arrive is implementation dependent. The proper way to address this is not to rely on implicit cleanup in the first place. Instead, every coroutine should make sure that all its child resources are closed before the parent exits. Notably, an async generator may hold resources and needs closing. async def main(): # create a generator that might need cleanup async_iter = get_numbers() async for i in async_iter: print(i) if i == 1: break # wait for generator clean up before exiting await async_iter.aclose() In recent versions, this pattern is codified via the aclosing context manager. from contextlib import aclosing async def main(): # create a generator and prepare for its cleanup async with aclosing(get_numbers()) as async_iter: async for i in async_iter: print(i) if i == 1: break 1The name and/or identity of this exception may vary. 2While it is possible to await asynchronous things during GeneratorExit, they may not yield to the event loop. A synchronous interface is advantageous to enforce this. | 14 | 8 |
72,813,575 | 2022-6-30 | https://stackoverflow.com/questions/72813575/when-to-use-query-over-loc-in-pandas-dataframe | Question What is the correct or best way to query a pandas DataFrame? Is it depending on the use case or can you say "always use .query()" or "never use .query()"? My primary concern is robustness or error-proof-ness of the code, but of course performance is also relevant. In this post the query method is stated to be robust and preferred over the other methods, do you agree? Should I always use .query()? DataFrame.query() function in pandas is one of the robust methods to filter the rows of a pandas DataFrame object. And it is preferable to use the DataFrame.query() function to select or filter the rows of the pandas DataFrame object instead of the traditional and the commonly used indexing method. Background I recently came across the .query() method and started to use it quite frequently for convenience and because I thought this was the way to do it properly. Then I read these two posts (the content is not essential for this question, I just want to show what made me think about it): apply, the Convenience Function you Never Needed and How to deal with SettingWithCopyWarning in Pandas? In the post about SettingWithCopyWarning different methods like .loc and .at are mentioned, but not .query(). This made me wonder whether .query() is really used. (I thought I start a new question rather than posting this in the comments). It might also not have been relevant for that specific problem, but it made me wonder none the less. The post about "apply - the convenience function..." made me wonder whether .query() is also a convenience function you never need. The documentation mentions the following use case: query() Use Cases A use case for query() is when you have a collection of DataFrame objects that have a subset of column names (or index levels/names) in common. You can pass the same query to both frames without having to specify which frame youβre interested in querying Edit: fixed the link to .apply() question. | I don't think there is a hard answer for this question. To answer your question regarding should you always use query, the simple answer is no. The query method uses eval behind the scenes, which makes it less performant. So when should you use query? You should use query if the condition you're trying to filter is incredibly specific and involves multiple columns. While the most panda-esque way of filtering is using loc, there are times when chaining loc after loc gets out of hand. The decision to use the query method should be based on readbility and performance. If you're using loc over and over again, you may wish to revise it using a simple query string. However, if the switch makes your code less performant, and you are working with mission-critical data, you should sacrifice a little bit of readability over performance. | 7 | 6 |
72,804,395 | 2022-6-29 | https://stackoverflow.com/questions/72804395/adding-timedelta-to-local-datetime-unexpected-behaviour-accross-dst-shift | I just stumbled accross this surprising behaviour with Python datetimes while creating datetimes accross DST shift. Adding a timedelta to a local datetime might not add the amount of time we expect. import datetime as dt from zoneinfo import ZoneInfo # Midnight d0 = dt.datetime(2020, 3, 29, 0, 0, tzinfo=ZoneInfo("Europe/Paris")) # datetime.datetime(2020, 3, 29, 0, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Paris')) d0.isoformat() # '2020-03-29T00:00:00+01:00' # Before DST shift d1 = d0 + dt.timedelta(hours=2) # datetime.datetime(2020, 3, 29, 2, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Paris')) d1.isoformat() # '2020-03-29T02:00:00+01:00' # After DST shift d2 = d0 + dt.timedelta(hours=3) # datetime.datetime(2020, 3, 29, 3, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Paris')) d2.isoformat() # '2020-03-29T03:00:00+02:00' # Convert to UCT d1u = d1.astimezone(dt.timezone.utc) # datetime.datetime(2020, 3, 29, 1, 0, tzinfo=datetime.timezone.utc) d2u = d2.astimezone(dt.timezone.utc) # datetime.datetime(2020, 3, 29, 1, 0, tzinfo=datetime.timezone.utc) # Compute timedeltas d2 - d1 # datetime.timedelta(seconds=3600) d2u - d1u # datetime.timedelta(0) I agree d1 and d2 are the same, but shouldn't d2 be '2020-03-29T04:00:00+02:00', then? d3 = d0 + dt.timedelta(hours=4) # datetime.datetime(2020, 3, 29, 4, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Paris')) Apparently, when adding a timedelta (ex. 3 hours) to a local datetime, it is added regardless of the timezone and the delta between the two datetimes (in real time / UTC) is not guaranteed to be that timedelta (i.e. it may be 2 hours due to DST). This is a bit of a pitfall. What is the rationale? Is this documented somewhere? | The rationale is : timedelta arithmetic is wall time arithmetic. That is, it includes the DST transition hours (or excludes, depending on the change). See also P. Ganssle's blog post on the topic . An illustration: import datetime as dt from zoneinfo import ZoneInfo # Midnight d0 = dt.datetime(2020, 3, 29, 0, 0, tzinfo=ZoneInfo("Europe/Paris")) for h in range(1, 4): print(h) print(d0 + dt.timedelta(hours=h)) print((d0 + dt.timedelta(hours=h)).astimezone(ZoneInfo("UTC")), end="\n\n") 1 2020-03-29 01:00:00+01:00 2020-03-29 00:00:00+00:00 # as expected, 1 hour added 2 2020-03-29 02:00:00+01:00 # that's a non-existing datetime... 2020-03-29 01:00:00+00:00 # looks normal 3 2020-03-29 03:00:00+02:00 2020-03-29 01:00:00+00:00 # oops, 3 hours timedelta is only 2 hours actually! Need more confusion? Use naive datetime. Given that the tz of my machine (Europe/Berlin) has the same DST transitions as the tz used above: d0 = dt.datetime(2020, 3, 29, 0, 0) for h in range(1, 4): print(h) print(d0 + dt.timedelta(hours=h)) print((d0 + dt.timedelta(hours=h)).astimezone(ZoneInfo("UTC")), end="\n\n") 1 2020-03-29 01:00:00 # 1 hour as expected 2020-03-29 00:00:00+00:00 # we're on UTC+1 2 2020-03-29 02:00:00 # ok 2 hours... 2020-03-29 00:00:00+00:00 # wait, what?! 3 2020-03-29 03:00:00 2020-03-29 01:00:00+00:00 | 6 | 3 |
72,795,799 | 2022-6-29 | https://stackoverflow.com/questions/72795799/how-to-solve-403-error-with-flask-in-python | I made a simple server using python flask in mac. Please find below the code. from flask import Flask app = Flask(__name__) @app.route('/', methods=['GET', 'POST']) def hello(): print("request received") return "Hello world!" if __name__ == "__main__": app.run(debug=True) I run it using python3 main.py command. While calling above API on url http://localhost:5000/ from Postman using GET / POST method, it always returns http 403 error. Python version : 3.8.9 OS : Mac OS 12.4 Flask : 2.1.2 | Mac OSX Monterey (12.x) currently uses ports 5000 and 7000 for its Control centre hence the issue. Try running your app from port other than 5000 and 7000 use this: if __name__ == "__main__": app.run(port=8000, debug=True) and then run your flask file, eg: app.py python app.py You can also run using the flask command line interface using this command provided you have set the environment variable necessary for flask CLI. flask run --port 8000 You can also turn off AirPlay Receiver in the Sharing via System Preference. Related discussion here: https://developer.apple.com/forums/thread/682332 Update(November 2022): Mac OSX Ventura(13.x) still has this problem and is fixed with the change in default port as mentioned above. | 50 | 112 |
72,871,480 | 2022-7-5 | https://stackoverflow.com/questions/72871480/when-should-a-static-method-be-a-function | I am writing a class for an image processing algorithm which has some methods, and notably a few static methods. My IDE keeps telling me to convert static methods to function which leads me to the following question: When should a static method be turned into a function? When shouldn't it? | There are no set rules in python regarding this decision, but there are style-guides defined e.g. by companies that look to solve the ambiguity of when to use what. One popular example of this would be the Google Python Style Guide: Never use staticmethod unless forced to in order to integrate with an API defined in an existing library. Write a module level function instead. My guess is, that your IDE follows this stance of a hard no against the staticmethod. If you decide, that you still want to use staticmethods, you can try to disable the warning by adding # noqa as a comment on the line where the warning is shown. Or you can look in your IDE for a setting to disable this kind of warning globally. But this is only one opinion. There are some, that do see value in using staticmethods (staticmethod considered beneficial, Why Python Developers Should Use @staticmethod and @classmethod), and there are others that argue against the usage of staticmethods (Thoughts On @staticmethod Usage In Python, @staticmethod considered a code smell) Another quote that is often cited in this discussion is from Guido van Rossum (creator of Python): Honestly, staticmethod was something of a mistake -- I was trying to do something like Java class methods but once it was released I found what was really needed was classmethod. But it was too late to get rid of staticmethod. I have compiled a list of arguments that I found, without any evaluation or order. Pro module-level function: Staticmethod lowers the cohesion of the class it is in as it is not using any of the attributes the class provides. To call the staticmethod any other module needs to import the whole class even if you just want to use that one method. Staticmethod binds the method to the namespace of the class which makes it longer to write SomeWhatDescriptiveClassName.method instead of method and more work to refactor code if you change the class. Easier reuse of method in other classes or contexts. The call signature of a staticmethod is the same as that of a classmethod or instancemethod. This masks the fact that the staticmethod does not actually read or modify any object information especially when being called from an instance. A module-level function makes this explicit. Pro staticmethod: Being bound by an API your class has to work in, it can be the only valid option. Possible usage of polymorphism for the method. Can overwrite the staticmethod in a subclass to change behaviour. Grouping a method directly to a class it is meant to be used with. Easier to refactor between classmethod, instancemethod and staticmethod compared to module-level functions. Having the method under the namespace of the class can help with reducing possible namespace-collisions inside your module and reducing the namespace of your module overall. As I see it, there are no strong arguments for or against the staticmethod (except being bound by an API). So if you work in an organisation that provides a code standard to follow, just do that. 
Else it comes down to what helps you best to structure your code for maintainability and readability, and to convey the message of what your code is meant to do and how it is meant to be used. | 7 | 12 |
72,794,483 | 2022-6-29 | https://stackoverflow.com/questions/72794483/pytest-alembic-initialize-database-with-async-migrations | The existing posts didn't provide a useful answer to me. I'm trying to run asynchronous database tests using Pytest (db is Postgres with asyncpg), and I'd like to initialize my database using my Alembic migrations so that I can verify that they work properly in the meantime. My first attempt was this: @pytest.fixture(scope="session") async def tables(): """Initialize a database before the tests, and then tear it down again""" alembic_config: config.Config = config.Config('alembic.ini') command.upgrade(alembic_config, "head") yield command.downgrade(alembic_config, "base") which didn't actually do anything at all (migrations were never applied to the database, tables not created). Both Alembic's documentation & Pytest-Alembic's documentation say that async migrations should be run by configuring your env like this: async def run_migrations_online() -> None: """Run migrations in 'online' mode. In this scenario we need to create an Engine and associate a connection with the context. """ connectable = engine async with connectable.connect() as connection: await connection.run_sync(do_run_migrations) await connectable.dispose() asyncio.run(run_migrations_online()) but this doesn't resolve the issue (however it does work for production migrations outside of pytest). I stumpled upon a library called pytest-alembic that provides some built-in tests for this. When running pytest --test-alembic, I get the following exception: got Future attached to a different loop A few comments on pytest-asyncio's GitHub repository suggest that the following fixture might fix it: @pytest.fixture(scope="session") def event_loop() -> Generator: loop = asyncio.get_event_loop_policy().new_event_loop() yield loop loop.close() but it doesn't (same exception remains). Next I tried to run the upgrade test manually, using: async def test_migrations(alembic_runner): alembic_runner.migrate_up_to("revision_tag_here") which gives me alembic_runner.migrate_up_to("revision_tag_here") venv/lib/python3.9/site-packages/pytest_alembic/runner.py:264: in run_connection_task return asyncio.run(run(engine)) RuntimeError: asyncio.run() cannot be called from a running event loop However this is an internal call by pytest-alembic, I'm not calling asyncio.run() myself, so I can't apply any of the online fixes for this (try-catching to check if there is an existing event loop to use, etc.). I'm sure this isn't related to my own asyncio.run() defined in the alembic env, because if I add a breakpoint - or just raise an exception above it - the line is actually never executed. Lastly, I've also tried nest-asyncio.apply(), which just hangs forever. A few more blog posts suggest to use this fixture to initialize database tables for tests: async with engine.begin() as connection: await connection.run_sync(Base.metadata.create_all) which works for the purpose of creating a database to run tests against, but this doesn't run through the migrations so that doesn't help my case. I feel like I've tried everything there is & visited every docs page, but I've got no luck so far. Running an async migration test surely can't be this difficult? If any extra info is required I'm happy to provide it. 
| I got this up and running pretty easily with the following env.py - the main idea here is that the migration can be run synchronously import asyncio from logging.config import fileConfig from alembic import context from sqlalchemy import engine_from_config from sqlalchemy import pool from sqlalchemy.ext.asyncio import AsyncEngine config = context.config if config.config_file_name is not None: fileConfig(config.config_file_name) target_metadata = mymodel.Base.metadata def run_migrations_online(): connectable = context.config.attributes.get("connection", None) if connectable is None: connectable = AsyncEngine( engine_from_config( context.config.get_section(context.config.config_ini_section), prefix="sqlalchemy.", poolclass=pool.NullPool, future=True ) ) if isinstance(connectable, AsyncEngine): asyncio.run(run_async_migrations(connectable)) else: do_run_migrations(connectable) async def run_async_migrations(connectable): async with connectable.connect() as connection: await connection.run_sync(do_run_migrations) await connectable.dispose() def do_run_migrations(connection): context.configure( connection=connection, target_metadata=target_metadata, compare_type=True, ) with context.begin_transaction(): context.run_migrations() run_migrations_online() then I added a simple db init script init_db.py from alembic import command from alembic.config import Config from sqlalchemy.ext.asyncio import create_async_engine __config_path__ = "/path/to/alembic.ini" __migration_path__ = "/path/to/folder/with/env.py" cfg = Config(__config_path__) cfg.set_main_option("script_location", __migration_path__) async def migrate_db(conn_url: str): async_engine = create_async_engine(conn_url, echo=True) async with async_engine.begin() as conn: await conn.run_sync(__execute_upgrade) def __execute_upgrade(connection): cfg.attributes["connection"] = connection command.upgrade(cfg, "head") then your pytest fixture can look like this conftest.py ... @pytest_asyncio.fixture(autouse=True) async def migrate(): await migrate_db(conn_url) yield ... Note: I don't scope my migrate fixture to the test session, I tend to drop and migrate after each test. | 8 | 13 |
72,796,594 | 2022-6-29 | https://stackoverflow.com/questions/72796594/attributeerror-module-httpcore-has-no-attribute-synchttptransport | While importing googletrans I am getting this error: AttributeError: module 'httpcore' has no attribute 'SyncHTTPTransport' | googletrans==3.0.0 uses very old httpx (0.13.3) and httpcore versions. You just need to update httpx and httpcore to the latest version and go to the googletrans source directory in Python310/Lib/site-packages. In the file client.py, change 'httpcore.SyncHTTPTransport' to 'httpcore.AsyncHTTPProxy'. And done, perfect. As a bonus, async is a concurrency model that is far more efficient than multi-threading and can provide significant performance benefits, enabling the use of long-lived network connections such as WebSockets. If you get the 'Nonetype'...group error, try: pip install googletrans==4.0.0-rc1 | 5 | 20 |
72,814,364 | 2022-6-30 | https://stackoverflow.com/questions/72814364/django-tenants-python-shell-with-specific-tenant | I want to use "./manage.py shell" to run some Python commands with a specific tenant, but the code to do so is quite cumbersome because I first have to look up the tenant and then use with tenant_context(tenant)): and then write my code into this block. I thought that there should be a command for that provided by django-tenants, but there isn't. | I've just looked at this myself and this will work, where tenant1 is your chosen tenant: python3 manage.py tenant_command shell --schema=tenant1 | 8 | 10 |
72,876,146 | 2022-7-5 | https://stackoverflow.com/questions/72876146/handling-gil-when-calling-python-lambda-from-c-function | The question Is pybind11 somehow magically doing the work of PyGILState_Ensure() and PyGILState_Release()? And if not, how should I do it? More details There are many questions regarding passing a python function to C++ as a callback using pybind11, but I haven't found one that explains the use of the GIL with pybind11. The documentation is pretty clear about the GIL: [...] However, when threads are created from C (for example by a third-party library with its own thread management), they donβt hold the GIL, nor is there a thread state structure for them. If you need to call Python code from these threads (often this will be part of a callback API provided by the aforementioned third-party library), you must first register these threads with the interpreter by creating a thread state data structure, then acquiring the GIL, and finally storing their thread state pointer, before you can start using the Python/C API. I can easily bind a C++ function that takes a callback: py::class_<SomeApi> some_api(m, "SomeApi"); some_api .def(py::init<>()) .def("mode", &SomeApi::subscribe_mode, "Subscribe to 'mode' updates."); With the corresponding C++ function being something like: void subscribe_mode(const std::function<void(Mode mode)>& mode_callback); But because pybind11 cannot know about the threading happening in my C++ implementation, I suppose it cannot handle the GIL for me. Therefore, if mode_callback is called by a thread created from C++, does that mean that I should write a wrapper to SomeApi::subscribe_mode that uses PyGILState_Ensure() and PyGILState_Release() for each call? This answer seems to be doing something similar, but still slightly different: instead of "taking the GIL" when calling the callback, it seems like it "releases the GIL" when starting/stopping the thread. Still I'm wondering if there exists something like py::call_guard<py::gil_scoped_acquire>() that would do exactly what I (believe I) need, i.e. wrapping my callback with PyGILState_Ensure() and PyGILState_Release(). | In general pybind11 tries to do the Right Thing and the GIL will be held when pybind11 knows that it is calling a python function, or in C++ code that is called from python via pybind11. The only time that you need to explicitly acquire the GIL when using pybind11 is when you are writing C++ code that accesses python and will be called from other C++ code, or if you have explicitly dropped the GIL. std::function wrapper The wrapper for std::function always acquires the GIL via gil_scoped_acquire when the function is called, so your python callback will always be called with the GIL held, regardless which thread it is called from. If gil_scoped_acquire is called from a thread that does not currently have a GIL thread state associated with it, then it will create a new thread state. As a side effect, if nothing else in the thread acquires the thread state and increments the reference count, then once your function exits the GIL will be released by the destructor of gil_scoped_acquire and then it will delete the thread state associated with that thread. If you're only calling the function once from another thread, this isn't a problem. If you're calling the callback often, it will create/delete the thread state a lot, which probably isn't great for performance. 
It would be better to cause the thread state to be created when your thread starts (or even easier, start the thread from Python and call your C++ code from python). | 11 | 13 |
72,844,458 | 2022-7-3 | https://stackoverflow.com/questions/72844458/mysql-to-mongodb-data-migration | We know that MongoDB has two ways of modeling relationships between relations/entities, namely, embedding and referencing (see difference here). Let's say we have a USER database with two tables in mySQL named user and address. An embedded MongoDB document might look like this: { "_id": 1, "name": "Ashley Peacock", "addresses": [ { "address_line_1": "10 Downing Street", "address_line_2": "Westminster", "city": "London", "postal_code": "SW1A 2AA" }, { "address_line_1": "221B Baker Street", "address_line_2": "Marylebone", "city": "London", "postal_code": "NW1 6XE" } ] } Whereas in a referenced relation, 2 SQL tables will make 2 collections in MongoDB which can be migrated by this approach using pymongo. How can we directly migrate MySQL data as an embedded document using Python? Insights about pseudo code and performance of the algorithm will be highly useful. Something that comes to my mind is creating views by performing joins in MySQL. But in that case we will not be having the structure of child documents inside a parent document. | Denormalization First, for canonical reference, the question of "embedded" vs. "referenced" data is called denormalization. Mongo has a guide describing when you should denormalize. Knowing when and how to denormalize is a very common hang-up when moving from SQL to NoSQL and getting it wrong can erase any performance benefits you might be looking for. I'll assume you've got this figured out since you seem set on using an embedded approach. MySQL to Mongo Mongo has a great Python tutorial you may want to refer to. First join your user and address tables. It will look something like this: | _id | name | address_line_1 | address_line_2 | ... | 1 | Ashley Peacock | 10 Downing Street ... | ... | 1 | Ashley Peacock | 221B Baker Street ... | ... | 2 | Bob Jensen | 343 Main Street ... | ... | 2 | Bob Jensen | 1223 Some Ave ... | ... ... Then iterate over the rows to create your documents and pass them to pymongo update_one. Using upsert=True with update_one will insert a new document if a matching one is not found in the database, or update an existing document if it is found. Using $push appends the address data to the array field addresses in the document. With this setup, update_one will automatically handle duplicates and append addresses based on matching _id fields. See the docs for more details: from pymongo import MongoClient client = MongoClient(port=27017) db = client.my_db sql_data = [] # should have your SQL table data # depending on how you got this into python, you will index with a # field name or a number, e.g. row["id"] or row[0] for row in sql_data: address = { "address_line_1": row["address_line_1"], "address_line_2": row["address_line_2"], "city": row["city"], "postal_code": row["postal_code"], } db.users.update_one( {"_id": row["_id"]}, {"$set": {"name": row["name"]}, "$push": {"addresses": address}}, upsert=True, ) | 5 | 4 |
72,827,460 | 2022-7-1 | https://stackoverflow.com/questions/72827460/creating-google-credentials-object-for-google-drive-api-without-loading-from-fil | I am trying to create a google credentials object to access google drive. I have the tokens and user data stored in a session, thus I am trying to create the object without loading them from a credentials.json file. I am handling the authentication when a user first logs in the web app and storing the tokens inside the user session; the session has a time out of 24 hours, same as the default access token, thus after 24 hours the user is requested to log in again so a new session with a valid access token is created. So my idea is to reuse the access token to limit the amount of log ins and improve user experience. This is a small piece of code on how I'm trying to create the google credentials object: from oauth2client.client import GoogleCredentials access_token = request.session['access_token'] gCreds = GoogleCredentials( access_token, os.getenv('GOOGLE_CLIENT_ID'), os.getenv('GOOGLE_CLIENT_SECRET'), refresh_token=None, token_expiry=None, token_uri=GOOGLE_TOKEN_URI, user_agent='Python client library', revoke_uri=None) build('drive', 'v3', credentials = gCreds) Whenever I try to run this code I get the following error: Insufficient Permission: Request had insufficient authentication scopes.". Details: "[{'domain': 'global', 'reason': 'insufficientPermissions', 'message': 'Insufficient Permission: Request had insufficient authentication scopes.'}]" { "error": { "errors": [ { "domain": "usageLimits", "reason": "dailyLimitExceededUnreg", "message": "Daily Limit for Unauthenticated Use Exceeded. Continued use requires signup.", "extendedHelp": "https://code.google.com/apis/console" } ], "code": 403, "message": "Daily Limit for Unauthenticated Use Exceeded. Continued use requires signup." } } | The object was built correctly; there was no issue with building the object this way: access_token = request.session['access_token'] gCreds = GoogleCredentials( access_token, os.getenv('GOOGLE_CLIENT_ID'), os.getenv('GOOGLE_CLIENT_SECRET'), refresh_token=None, token_expiry=None, token_uri=GOOGLE_TOKEN_URI, user_agent='Python client library', revoke_uri=None) build('drive', 'v3', credentials = gCreds) The problem was caused at an earlier stage, when registering the scopes: oauth = OAuth(config) oauth.register( name='google', server_metadata_url='', client_kwargs={ 'scope': 'openid email profile https://www.googleapis.com/auth/drive.readonly' } ) | 5 | 1 |
72,862,224 | 2022-7-4 | https://stackoverflow.com/questions/72862224/stripe-metadata-working-properly-but-not-showing-up-on-stripe-dashboard | I've implemented Stripe checkout on a Django app and it's all working correctly except that the metadata is not showing up on the Stripe Dashboard, even though it's showing in the event data on the same page. Have I formatted it incorrectly or am I overlooking something obvious? This is how I added the metadata: checkout_session = stripe.checkout.Session.create( payment_method_types=['card'], line_items = line_itemz, metadata={ "payment_type":"schedule_visit", "visit_id":visit_id }, mode='payment', success_url= 'http://localhost:8000/success', cancel_url = 'http://localhost:8000/cancel',) The Metadata section on the Dashboard shows up empty, but in the events the Metadata is there as it should be. Again, I can access the metadata everywhere else but would like it to show up on the dashboard so my team can more easily access that information. | The metadata field you set is for the Checkout Session alone, but not on the Payment Intent (which is the Dashboard page you are at). To have metadata shown at the Payment Intent, I'd suggest also setting payment_intent_data.metadata [0] in the request when creating a Checkout Session. For example, session = stripe.checkout.Session.create( success_url="https://example.com/success", cancel_url="https://example.com/cancel", line_items=[ { "price": "price_xxx", "quantity": 1, }, ], mode="payment", metadata={ "payment_type": "schedule_visit", "visit_id": "123" }, payment_intent_data={ "metadata": { "payment_type": "schedule_visit", "visit_id": "123" } } ) [0] https://stripe.com/docs/api/checkout/sessions/create#create_checkout_session-payment_intent_data-metadata | 5 | 12 |
72,876,190 | 2022-7-5 | https://stackoverflow.com/questions/72876190/rolling-windows-in-pandas-how-to-wrap-around-with-datetimeindex | I have a DataFrame where the index is a DatetimeIndex with a daily frequency. It contains 365 rows, one for each day of the year. When computing rolling sums, the first few elements are always NaN (as expected), but I'd like them to have actual values. For example, if using a rolling window of 3 samples, the value for Jan 1 should be the sum of Dec 30, Dec 31, and Jan 1. Similarly, the value for Jan 2 should be the sum of Dec 31, Jan 1, and Jan 2. I have looked into all parameters of the rolling function in Pandas, and could not find anything that would provide this wrapping. Any help would be appreciated. The code below is a minimal example illustrating the behavior of rolling. import numpy as np import pandas as pd fake_data = pd.DataFrame(index=pd.date_range('2022-1-1', '2022-12-31', freq='D'), data=np.random.random(365)) rolling_fake_data = fake_data.rolling(3).sum() | You're basically asking for a circular data object which has no start or end. Not sure that exists! The best work-around I can think of is to repeat the end of the series before the beginning. n = 3 rolling_fake_data = ( pd.concat([fake_data[-n:], fake_data]) ).rolling(n).sum()[n:] # Test assert(rolling_fake_data.loc["2022-01-01", 0] == fake_data.loc[["2022-12-30", "2022-12-31", "2022-01-01"], 0].sum()) | 5 | 2 |
72,874,936 | 2022-7-5 | https://stackoverflow.com/questions/72874936/how-to-split-a-row-into-two-rows-in-python-based-on-delimiter-in-python | I have the following input file in csv A,B,C,D 1,2,|3|4|5|6|7|8,9 11,12,|13|14|15|16|17|18,19 How do I split column C right in the middle into two new rows with additional column E where the first half of the split get "0" in Column E and the second half get "1" in Column E? A,B,C,D,E 1,2,|3|4|5,9,0 1,2,|6|7|8,9,1 11,12,|13|14|15,19,0 11,12,|16|17|18,19,1 Thank you | Here's how to do it without Pandas: import csv with open("input.csv", newline="") as f_in, open("output.csv", "w", newline="") as f_out: reader = csv.reader(f_in) header = next(reader) # read header header += ["E"] # modify header writer = csv.writer(f_out) writer.writerow(header) for row in reader: a, b, c, d = row # assign 4 items for each row c_items = [x.strip() for x in c.split("|") if x.strip()] n_2 = len(c_items) // 2 # halfway index c1 = "|" + "|".join(c_items[:n_2]) c2 = "|" + "|".join(c_items[n_2:]) writer.writerow([a, b, c1, d, 0]) # 0 & 1 will be converted to str on write writer.writerow([a, b, c2, d, 1]) | 4 | 3 |
72,873,986 | 2022-7-5 | https://stackoverflow.com/questions/72873986/functions-in-sas | I have a function that converts a value in one format to another. It's analogous to converting Fahrenheit to Celsius for example. Quite simply, the formula is: l = -log(20/x) I am inheriting SAS code from a colleague that has the following hardcoded for a range of values of x: "if x= 'a' then x=l;" which is obviously tedious and limited in scope. How best could I convert this to a function that could be called in a SAS script? I previously had it in Python as: def function(x): l = -np.log10(20/float(x)) return l and then would simply call the function. Thank you for your help - I'm adusting from Python to SAS and trying to figure out how to make the switch. | If you are interested in writing your own functions, as Joe said, proc fcmp is one way to do it. This will let you create functions that behave like SAS functions. It's about as analogous to Python functions as you'll get. It takes a small bit of setup, but it's really nice in that the functions are all saved in a SAS dataset that can be transferred from environment to environment. The below code creates a function called f() that does the same as the Python function. proc fcmp outlib=work.funcs.log; function f(x); l = log10(20/x); return(l); endfunc; run; options cmplib=work.funcs; This is doing three things: Creating a function called f() that takes one input, x Saving the function in a dataset called work.funcs that holds all functions Labeling all the functions under the package log Don't worry too much about the label. It's handy if you have many different function packages you want, for example: time, dates, strings, etc. It's helpful for organization, but it is a required label. Most of the time I just do work.funcs.funcs. options cmplib=work.funcs says to load the dataset funcs which holds all of your functions of interest. You can test your function below: data test; l1 = f(1); l2 = f(2); l10 = f(10); run; Output: l1 l2 l10 1.3010299957 1 0.3010299957 Also, SAS does have a Python interface. If you're more comfortable programming in Python, take a look at SASPy to get all the benefits of both SAS and Python. | 4 | 5 |
72,873,362 | 2022-7-5 | https://stackoverflow.com/questions/72873362/how-to-rotate-90-deg-of-2d-array-inside-3d-array | I have a 3D array consist of 2D arrays, I want to rotate only the 2D arrays inside the 3D array without changing the order, so it will become a 3D array consist of rotated 3D arrays. For example, I have a 3D array like this. foo = np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]) print(foo) >>> array([[[ 1, 2, 3], [ 4, 5, 6]], [[ 7, 8, 9], [10, 11, 12]]]) foo.shape >>> (2, 2, 3) I want to rotate it into this. rotated_foo = np.array([[[4, 1], [5, 2], [6, 3]], [[10, 7], [11, 8], [12, 9]]]) print(rotated_foo) >>> array([[[ 4, 1], [ 5, 2], [ 6, 3]], [[10, 7], [11, 8], [12, 9]]]) rotated_foo.shape >>> (2, 3, 2) I've tried it using numpy's rot90 but I got something like this. rotated_foo = np.rot90(foo) print(rotated_foo) >>> array([[[ 4, 5, 6], [10, 11, 12]], [[ 1, 2, 3], [ 7, 8, 9]]]) rotated_foo.shape >>> (2, 2, 3) | You can use numpy.rot90 by setting axes that you want to rotate. foo = np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]) rotated_foo = np.rot90(foo, axes=(2,1)) print(rotated_foo) Output: array([[[ 4, 1], [ 5, 2], [ 6, 3]], [[10, 7], [11, 8], [12, 9]]]) | 4 | 2 |
72,865,725 | 2022-7-5 | https://stackoverflow.com/questions/72865725/how-to-copy-both-folder-and-files-in-python | I am aware that using shutil.copy(src,dst) copies files and shutil.copytree(src,dst) is used for copying directories. Is there any way so I don't have to differentiate between copying folders and copying files? Tysm | You might want to take a look to this topic, where the same question was answered. https://stackoverflow.com/a/1994840/17595642 Functions can be written to do so. Here is the one implemented in the other topic : import shutil, errno def copyanything(src, dst): try: shutil.copytree(src, dst) except OSError as exc: # python >2.5 if exc.errno in (errno.ENOTDIR, errno.EINVAL): shutil.copy(src, dst) else: raise | 5 | 5 |
72,831,076 | 2022-7-1 | https://stackoverflow.com/questions/72831076/how-can-i-use-a-sequence-of-numbers-to-predict-a-single-number-in-tensorflow | I am trying to build a machine learning model which predicts a single number from a series of numbers. I am using a Sequential model from the keras API of Tensorflow. You can imagine my dataset to look something like this: Index x data y data 0 np.ndarray(shape (1209278,) ) numpy.float32 1 np.ndarray(shape (1211140,) ) numpy.float32 2 np.ndarray(shape (1418411,) ) numpy.float32 3 np.ndarray(shape (1077132,) ) numpy.float32 ... ... ... This was my first attempt: I tried using a numpy ndarray which contains numpy ndarrays which finally contain floats as my xdata, so something like this: array([ array([3.59280851, 3.60459062, 3.60459062, ..., 4.02911493]) array([3.54752101, 3.56740332, 3.56740332, ..., 4.02837855]) array([3.61048168, 3.62152741, 3.62152741, ..., 4.02764217]) ]) My y data is a numpy ndarray containing floats, which looks something like this array([2.9864411, 3.0562437, ... , 2.7750807, 2.8712902], dtype=float32) But when I tried to train the model using model.fit() it yields this error: ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type numpy.ndarray). I was able to solve this error by asking a question related to this: How can I have a series of numpy ndarrays as the input data to train a tensorflow machine learning model? My latest attempt: Because Tensorflow does not seem to be able to convert a ndarray of ndarrays to a tensor, I tried to convert my x data to a list of ndarrays like this: [ array([3.59280851, 3.60459062, 3.60459062, ..., 4.02911493]) array([3.54752101, 3.56740332, 3.56740332, ..., 4.02837855]) array([3.61048168, 3.62152741, 3.62152741, ..., 4.02764217]) ] I left my y data untouched, so as a ndarray of floats. Sadly my attempt of using a list of ndarrays instead of a ndarray of ndarrays yielded this error: ValueError: Data cardinality is ambiguous: x sizes: 1304593, 1209278, 1407624, ... y sizes: 46 Make sure all arrays contain the same number of samples. As you can see, my x data consists of arrays which all have a different shape. But I don't think that this should be a problem. Question: My guess is that Tensorflow tries to use my list of arrays as multiple inputs. Tensorflow fit() documentation But I don't want to use my x data as multiple inputs. Easily said I just want my model to predict a number from a sequence of numbers. For example like this: array([3.59280851, 3.60459062, 3.60459062, ...]) => 2.8989773 array([3.54752101, 3.56740332, 3.56740332, ...]) => 3.0893357 ... How can I use a sequence of numbers to predict a single number in Tensorflow? EDIT Maybe I should have added that I want to use a RNN, especially a LSTM. I have had a look at the Keras documentation, and in their simplest example they are using a Embedding layer. But I don't really know what to do. All in all I think that my question ist pretty general and should be easy to answer if you know how to tackle this problem, unlike me. Thanks in advance! 
| Try something like this: import numpy as np import tensorflow as tf # add additional dimension for lstm layer x_train = np.asarray(train_set["x data"].values))[..., None] y_train = np.asarray(train_set["y data"]).astype(np.float32) model = tf.keras.Sequential() model.add(tf.keras.layers.LSTM(units=32)) model.add(tf.keras.layers.Dense(units=1)) model.compile(loss="mean_squared_error", optimizer="adam", metrics="mse") model.fit(x=x_train,y=y_train,epochs=10) Or with a ragged input for different sequence lengths: x_train = tf.ragged.constant(train_set["x data"].values[..., None]) # add additional dimension for lstm layer y_train = np.asarray(train_set["y data"]).astype(np.float32) model = tf.keras.Sequential() model.add(tf.keras.layers.Input(shape=[None, x_train.bounding_shape()[-1]], batch_size=2, dtype=tf.float32, ragged=True)) model.add(tf.keras.layers.LSTM(units=32)) model.add(tf.keras.layers.Dense(units=1)) model.compile(loss="mean_squared_error", optimizer="adam", metrics="mse") model.fit(x=x_train,y=y_train,epochs=10) Or: x_train = tf.ragged.constant([np.array(list(v))[..., None] for v in train_set["x data"].values]) # add additional dimension for lstm layer | 6 | 3 |
72,866,716 | 2022-7-5 | https://stackoverflow.com/questions/72866716/pandas-how-to-get-the-postition-number-of-row-of-a-value | I have my pandas dataframe and i need to find the index of a certain value. But the thing is, this df does from an other one where i had to cut some part using the df.loc, so it ends up like : index value 1448 31776 1449 32088 1450 32400 1451 32712 1452 33024 Let's say i need to find the index of the value '32400', with the function df.index[df.value == 32400] i get 1450, but what I want is 2. Would you know how i could get this value? | First idea is create default index starting by 0 by DataFrame.reset_index with drop=True: df = df.reset_index(drop=True) idx = df.index[df.value == 32400] print (idx) Int64Index([2], dtype='int64') Or use numpy - e.g. by numpy.where with condition for positions of match: idx = np.where(df.value == 32400)[0] print (idx) [2] Or use numpy.argmax - it working well if match at least one value: idx = np.argmax(df.value == 32400) print (idx) 2 idx = np.argmax(df.value == 10) print (idx) 0 | 4 | 4 |
72,863,564 | 2022-7-5 | https://stackoverflow.com/questions/72863564/subtract-last-timestamp-from-first-timestamp-for-each-id-in-pandas-dataframe | I have a dataframe (df) with the following structure: retweet_datetime tweet_id tweet_datetime 2020-04-24 03:33:15 85053699 2020-04-24 02:28:22 2020-04-24 02:43:35 85053699 2020-04-24 02:28:22 2020-04-18 04:24:03 86095361 2020-04-18 00:06:01 2020-04-18 00:19:08 86095361 2020-04-18 00:06:01 2020-04-18 00:18:38 86095361 2020-04-18 00:06:01 2020-04-18 00:07:08 86095361 2020-04-18 00:06:01 The retweet_datetime is sorted from latest to newest retweets. I'd like to create two new columns as follows: tweet_lifetime1: the difference between the last retweet time and the first retweet time, i.e., for each tweet_id: last retweet_datetime - first retweet_datetime tweet_lifetime2: the difference between the last retweet time and tweet creation time (tweet_datetime) Update For example, for the tweet id: "86095361": tweet_lifetime1 = 2020-04-18 04:24:03 - 2020-04-18 00:07:08 (04:16:55) tweet_lifetime2 = 2020-04-18 04:24:03 - 2020-04-18 00:06:01 (04:18:02) The expected output df: retweet_datetime tweet_id tweet_datetime lifetime1 lifetime2 2020-04-24 03:33:15 85053699 2020-04-24 02:28:22 00:49:40 01:04:53 2020-04-18 04:24:03 86095361 2020-04-18 00:06:01 04:16:55 04:18:02 I've seen several similar posts, but they mostly subtract consecutive rows. For example, I can subtract the time difference between each retweet_datetimes for each tweet id as follows: df2 = df.assign(delta = df.groupby('tweet_id')['retweet_datetime'].diff()) | Use named aggregation with subtract column with Series.sub, DataFrame.pop is used for drop column tmp after processing: df1 = (df.groupby('tweet_id', as_index=False) .agg(retweet_datetime=('retweet_datetime','first'), tmp = ('retweet_datetime','last'), tweet_datetime = ('tweet_datetime','last'))) df1['lifetime1'] = df1['retweet_datetime'].sub(df1.pop('tmp')) df1['lifetime2'] = df1['retweet_datetime'].sub(df1['tweet_datetime']) print (df1) tweet_id retweet_datetime tweet_datetime lifetime1 \ 0 85053699 2020-04-24 03:33:15 2020-04-24 02:28:22 0 days 00:49:40 1 86095361 2020-04-18 04:24:03 2020-04-18 00:06:01 0 days 04:16:55 lifetime2 0 0 days 01:04:53 1 0 days 04:18:02 If need format HH:MM:SS use: def f(x): ts = x.total_seconds() hours, remainder = divmod(ts, 3600) minutes, seconds = divmod(remainder, 60) return ('{:02d}:{:02d}:{:02d}').format(int(hours), int(minutes), int(seconds)) df1['lifetime1'] = df1['retweet_datetime'].sub(df1.pop('tmp')).apply(f) df1['lifetime2'] = df1['retweet_datetime'].sub(df1['tweet_datetime']).apply(f) print (df1) tweet_id retweet_datetime tweet_datetime lifetime1 lifetime2 0 85053699 2020-04-24 03:33:15 2020-04-24 02:28:22 00:49:40 01:04:53 1 86095361 2020-04-18 04:24:03 2020-04-18 00:06:01 04:16:55 04:18:02 | 3 | 4 |
72,850,849 | 2022-7-4 | https://stackoverflow.com/questions/72850849/need-help-speeding-up-numpy-code-that-finds-number-of-coincidences-between-two | I am looking for some help speeding up some code that I have written in Numpy. Here is the code: def TimeChunks(timevalues, num): avg = len(timevalues) / float(num) out = [] last = 0.0 while last < len(timevalues): out.append(timevalues[int(last):int(last + avg)]) last += avg return out ### chunk i can be called by out[i] ### NumChunks = 100000 t1chunks = TimeChunks(t1, NumChunks) t2chunks = TimeChunks(t2, NumChunks) NumofBins = 2000 CoincAllChunks = 0 for i in range(NumChunks): CoincOneChunk = 0 Hist1, something1 = np.histogram(t1chunks[i], NumofBins) Hist2, something2 = np.histogram(t2chunks[i], NumofBins) Mask1 = (Hist1>0) Mask2 = (Hist2>0) MaskCoinc = Mask1*Mask2 CoincOneChunk = np.sum(MaskCoinc) CoincAllChunks = CoincAllChunks + CoincOneChunk Is there anything that can be done to improve this to make it more efficient for large arrays? To explain the point of the code in a nutshell, the purpose of the code is simply to find the average "coincidences" between two NumPy arrays, representing time values of two channels (divided by some normalisation constant). This "coincidence" occurs when there is at least one time value in each of the two channels in a certain time interval. For example: t1 = [.4, .7, 1.1] t2 = [0.8, .9, 1.5] There is a coincidence in the window [0,1] and one coincidence in the interval [1, 2]. I want to find the average number of these "coincidences" when I break down my time array into a number of equally distributed bins. So for example if: t1 = [.4, .7, 1.1, 2.1, 3, 3.3] t2 = [0.8, .9, 1.5, 2.2, 3.1, 4] And I want 4 bins, the intervals I'll consider are ([0,1], [1,2], [2,3], [3,4]). Therefore the total coincidences will be 4 (because there is a coincidence in each bin), and therefore the average coincidences will be 4. This code is an attempt to do this for large time arrays for very small bin sizes, and as a result, to make it work I had to break down my time arrays into smaller chunks, and then for-loop through each of these chunks. I've tried making this as vectorized as I can, but it still is very slow... Any ideas what can be done to speed it up further? Any suggestions/hints will be appreciated. Thanks!. | This is 17X faster and more correct using a custom made numba_histogram function that beats the generic np.histogram. Note that you are computing and comparing histograms of two different series separately, which is not accurate for your purpose. So, in my numba_histogram function I use the same bin edges to compute the histograms of both series simultaneously. We can still optimize it even further if you provide more precise details about the algorithm. Namely, if you provide specific details about the parameters and the criteria on which you decide that two intervals coincide. 
import numpy as np from numba import njit @njit def numba_histogram(a, b, n): hista, histb = np.zeros(n, dtype=np.intp), np.zeros(n, dtype=np.intp) a_min, a_max = min(a[0], b[0]), max(a[-1], b[-1]) for x, y in zip(a, b): bin = n * (x - a_min) / (a_max - a_min) if x == a_max: hista[n - 1] += 1 elif bin >= 0 and bin < n: hista[int(bin)] += 1 bin = n * (y - a_min) / (a_max - a_min) if y == a_max: histb[n - 1] += 1 elif bin >= 0 and bin < n: histb[int(bin)] += 1 return np.sum( (hista > 0) * (histb > 0) ) @njit def calc_coincidence(t1, t2): NumofBins = 2000 NumChunks = 100000 avg = len(t1) / NumChunks CoincAllChunks = 0 last = 0.0 while last < len(t1): t1chunks = t1[int(last):int(last + avg)] t2chunks = t2[int(last):int(last + avg)] CoincAllChunks += numba_histogram(t1chunks, t2chunks, NumofBins) last += avg return CoincAllChunks Test with 10**8 arrays: t1 = np.arange(10**8) + np.random.rand(10**8) t2 = np.arange(10**8) + np.random.rand(10**8) CoincAllChunks = calc_coincidence(t1, t2) print( CoincAllChunks ) # 34793890 Time: 24.96140170097351 sec. (Original) # 34734897 Time: 1.499996423721313 sec. (Optimized) | 5 | 3 |
72,852,853 | 2022-7-4 | https://stackoverflow.com/questions/72852853/how-to-efficiently-find-pairs-of-numbers-where-the-square-of-one-equals-the-cube | I need to find the pairs (i,j) and number of pairs for a number N such that the below conditions are satisfied: 1 <= i <= j <= N and also i * i * i = j * j. For example, for N = 50, number of pairs is 3 i.e., (1,1), (4,8), (9,27). I tried the below function code but it takes too much time for a large number like N = 10000 or more: def compute_pairs(N): pair = [] for i in range (1, N): for j in range (i, N): print( 'j =', j) if i*i*i == j*j: new_pair = (i,j) pair.append(new_pair) print(pair) return len(pair) | Let k be the square root of some integer i satisfying i*i*i == j*j, where j is also an integer. Since k is the square root of an integer, k*k is integer. From the equation, we can solve that j is equal to k*k*k, so that is also an integer. Since k*k*k is an integer and k*k is an integer, it follows by dividing these two that k is rational. But k is the square root of an integer, so it must be either an integer or irrational. Since it is rational, it must be an integer. Since k is an integer, all the solutions are simply (k*k, k*k*k) for integer k. Since we will iterate over k>=1, we know that k*k <= k*k*k, i.e. i <= j, so we don't have to worry about that. We just have to stop iterating when k*k*k reaches N. from itertools import count # unbounded range; we will `break` when appropriate def compute_pairs(N): result = [] for k in count(1): i, j = k*k, k*k*k if j >= N: break result.append((i, j)) return result This runs almost instantaneously even for N of 100000, no C-level optimization needed. | 3 | 7 |
72,860,558 | 2022-7-4 | https://stackoverflow.com/questions/72860558/import-module-error-when-building-a-docker-container-with-python-for-aws-lambda | I'm trying to build a Docker container that runs Python code on AWS Lambda. The build works fine, but when I test my code, I get the following error: {"errorMessage": "Unable to import module 'function': No module named 'utils'", "errorType": "Runtime.ImportModuleError", "stackTrace": []} I basically have two python scripts in my folder, function.py and utils.py, and I import some functions from utils.py and use them in function.py. Locally, it all works fine, but when I build the Container and test it with the following curl command, I get the above error. Test curl command: curl --request POST \ --url http://localhost:9000/2015-03-31/functions/function/invocations \ --header 'Content-Type: application/json' \ --data '{"Input": 4}' Here's my dockerfile: FROM public.ecr.aws/lambda/python:3.7 WORKDIR / COPY . . COPY function.py ${LAMBDA_TASK_ROOT} COPY requirements.txt . RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}" CMD [ "function.lambda_handler" ] What I read in related Stackoverflow questions is to try importing the functions from utils.py in another way, I've tried changing from utils import * to from .utils import all, I changed the WORKDIR in my Dockerfile, and I put the utils file in a separate utils folder and tried importing this way: from utils.utils import *. I have also tried running the code in Python 3.8. Here's my folder structure. Does anyone know what I'm doing wrong? | The Dockerfile statement COPY . . copies all files to the working directory, which given your previous WORKDIR, is /. To resolve the Python import issue, you need to move the Python module to the right directory: COPY utils.py ${LAMBDA_TASK_ROOT} | 7 | 2 |
72,825,203 | 2022-7-1 | https://stackoverflow.com/questions/72825203/how-to-set-up-properly-package-data-in-setup-py | I am facing an error like No such file or directory.... Could anyone help me with this out? I'd really appreciate it! This is the directory: And this is the setup.py code: from setuptools import setup setup( name='raw-microsoft-wwi', version='1.0.0', package_dir={"":"src"}, packages=[ "raw_microsoft_wwi", "raw_microsoft_wwi.configuration", "raw_microsoft_wwi.reader", "raw_microsoft_wwi.writer", "raw_microsoft_wwi.transformer" ], package_data = {"src/raw_microsoft_wwi/queries": ["*.sql"]}, include_package_data = True, install_requires=[ "delta-spark==1.2.0", "pyspark==3.2.1", "python-environ==0.4.54" ], zip_safe=False ) | Setuptools expects package_data to be a dictionary mapping from Python package names to a list of file name patterns. This means that the keys in the dictionary should be similar to the values you listed in the packages configuration (instead of paths). I think you can try the following: Add the missing raw_microsoft_wwi.queries, raw_microsoft_wwi.queries.incremental_table_queries, raw_microsoft_wwi.queries.regular_tables_queries (etc) to the packages list (effectivelly all these folders are considered packages by Python, even if they don't have a __init__.py file). Replace package_data, with something similar to: package_data = {"raw_microsoft_wwi.queries": ["**/*.sql"]} More information is available in https://setuptools.pypa.io/en/latest/userguide/datafiles.html. | 4 | 5 |
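Putting the two suggestions in the answer above together, a consolidated setup.py sketch could look like the following. The two extra subpackage names under raw_microsoft_wwi.queries are taken from the answer and assumed to match the real folder layout; adjust them if the directories are named differently:

from setuptools import setup

setup(
    name='raw-microsoft-wwi',
    version='1.0.0',
    package_dir={"": "src"},
    packages=[
        "raw_microsoft_wwi",
        "raw_microsoft_wwi.configuration",
        "raw_microsoft_wwi.reader",
        "raw_microsoft_wwi.writer",
        "raw_microsoft_wwi.transformer",
        # the folders that actually contain the .sql files must be listed as well
        "raw_microsoft_wwi.queries",
        "raw_microsoft_wwi.queries.incremental_table_queries",
        "raw_microsoft_wwi.queries.regular_tables_queries",
    ],
    # keys are dotted package names, not filesystem paths
    package_data={"raw_microsoft_wwi.queries": ["**/*.sql"]},
    include_package_data=True,
    install_requires=[
        "delta-spark==1.2.0",
        "pyspark==3.2.1",
        "python-environ==0.4.54",
    ],
    zip_safe=False,
)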
72,858,984 | 2022-7-4 | https://stackoverflow.com/questions/72858984/mkl-service-package-failed-to-import-therefore-intelr-mkl-initialization-ensu | When I go to run a python code directly through the terminal it gives me this error, I've already tried to reinstall numpy and it didn't work! And I tried to install mlk service returns the same error. Can someone help me ? UserWarning: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu OpenMP had already been loaded by Python process is not assured. Please install mkl-service package, see http://github.com/IntelPython/mkl-service from . import _distributor_init Traceback (most recent call last): File "c:\Users\teste.user\Desktop\Project-python\teste.py", line 4, in <module> import pandas as pd File "C:\Users\teste.user\Anaconda3\lib\site-packages\pandas\__init__.py", line 16, in <module> raise ImportError( ImportError: Unable to import required dependencies: numpy: IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE! Importing the numpy C-extensions failed. This error can happen for many reasons, often due to issues with your setup or how NumPy was installed. We have compiled some common reasons and troubleshooting tips at: https://numpy.org/devdocs/user/troubleshooting-importerror.html Please note and check the following: * The Python version is: Python3.9 from "C:\Users\teste.user\Anaconda3\python.exe" * The NumPy version is: "1.21.5" and make sure that they are the versions you expect. Please carefully study the documentation linked above for further help. Original error was: DLL load failed while importing _multiarray_umath: The specified module could not be found. | Can be solved by resetting package configuration by force reinstall of numpy. conda install numpy --force-reinstall | 8 | 8 |
72,858,633 | 2022-7-4 | https://stackoverflow.com/questions/72858633/detect-mouse-scroll | How do I detect Mouse scroll up and down in python using pygame? I have created a way to detect it, but it doesn't give me any information about which way I scrolled the mouse as well as being terrible at detecting mouse scrolls where only 1 in 20 are getting detected. for event in pygame.event.get(): if event.type == pygame.MOUSEWHEEL: print("Mouse Scroll Detected.") Any other ways I can detect mouse scrolls? | The MOUSEWHEEL event object has x and y components (see pygame.event module). These components indicate the direction in which the mouse wheel was rotated (for horizontal and vertical wheel): for event in pygame.event.get(): if event.type == pygame.MOUSEWHEEL: print(event.x, event.y) | 9 | 10 |
72,851,576 | 2022-7-4 | https://stackoverflow.com/questions/72851576/corner-detection-in-opencv | I was trying to detect all the corners in the image using harris corner detection in opencv(python). But due to the thickness of the line , I am getting multiple corners in a single corner . Is there something I can do to make this right. code import numpy as np import cv2 as cv filename = 'Triangle.jpg' img = cv.imread(filename) gray = cv.cvtColor(img,cv.COLOR_BGR2GRAY) gray = np.float32(gray) dst = cv.cornerHarris(gray,2,3,0.04) #result is dilated for marking the corners, not important dst = cv.dilate(dst,None) # Threshold for an optimal value, it may vary depending on the image. img[dst>0.01*dst.max()]=[0,0,255] cv.imshow('dst',img) if cv.waitKey(0) & 0xff == 27: cv.destroyAllWindows() | If your expectation is to obtain a single corner point at every line intersection, then the following is a simple approach. Current scenario: # visualize the corners mask = np.zeros_like(gray) mask[dst>0.01*dst.max()] = 255 In the above, there are many (supposedly) corner points close to each other. Approach: The idea now is to preserve only one point that is in close proximity to each other, while discarding the rest. To do so, I calculate the distance of each corner to every other and keep those that exceed a threshold. # storing coordinate positions of all points in a list coordinates = np.argwhere(mask) coor_list = coordinates.tolist() # points beyond this threshold are preserved thresh = 20 # function to compute distance between 2 points def distance(pt1, pt2): (x1, y1), (x2, y2) = pt1, pt2 dist = math.sqrt( (x2 - x1)**2 + (y2 - y1)**2 ) return dist # coor_list_2 = coor_list.copy() # iterate for every 2 points i = 1 for pt1 in coor_list: for pt2 in coor_list[i::1]: if(distance(pt1, pt2) < thresh): # to avoid removing a point if already removed try: coor_list_2.remove(pt2) except: pass i+=1 # draw final corners img2 = img.copy() for pt in coor_list_2: img2 = cv2.circle(img2, tuple(reversed(pt)), 3, (0, 0, 255), -1) Suggestions: To get more accurate result you can try finding the mean of all the points within a certain proximity. These would coincide close to the intersection of lines. | 5 | 2 |
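The closing suggestion about averaging points that lie close together can also be done without the pairwise distance loop: treat each clump of touching corner pixels as one connected component and keep only its centroid. A rough sketch follows; it reuses the question's Triangle.jpg and the 0.01 * dst.max() threshold, and cv2.connectedComponentsWithStats is a substitute technique rather than part of the original answer:

import cv2
import numpy as np

img = cv2.imread('Triangle.jpg')
gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
dst = cv2.cornerHarris(gray, 2, 3, 0.04)

# same binary mask of strong corner responses as in the answer
mask = (dst > 0.01 * dst.max()).astype(np.uint8) * 255

# each clump of touching corner pixels becomes one labelled component,
# and its centroid serves as the single representative corner point
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
for cx, cy in centroids[1:]:                 # label 0 is the background
    cv2.circle(img, (int(round(cx)), int(round(cy))), 3, (0, 0, 255), -1)

cv2.imwrite('corners_merged.jpg', img)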
72,854,648 | 2022-7-4 | https://stackoverflow.com/questions/72854648/generic-detection-of-subimages-in-images-with-opencv | Disclaimer: I'm a computer vision rookie. I have seen a lot of stack overflow posts of how to find a specific sub-image in a larger image. My usecase is a bit different since I don't want it to be specific and I'm not sure how I can do this (if it's even possible, but I have a feeling it should). I have a large datasets of images, of sometimes, some of this images are a combination of two or more other images of the dataset. I'd like to automatically crop theses "combinations" to isolate the sub-images. So the tasks would be to process each image of the dataset, and check if there are abnormal boundaries that could mean the image is a combination. Example using great stock images: What I've tried: I've seen that houghs transform could be used for line detection in images but I'm couldn't achieve anything using this. | Use cv.Sobel or other derivative kernel (with cv.filter2D) to find edges Sum along pixel columns to score each for "edginess". np.sum or np.mean do that. Some thresholding and np.argsort and indexing to find best candidates. cut (take slices out of array) edge map: edgy plot: slices to take: array([[ 0, 578], [ 578, 1135], [1135, 1136]], dtype=int64) Pictures: complete notebook: https://gist.github.com/crackwitz/e1ba1ce7a6fba446288275d91f66261c | 4 | 3 |
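The notebook linked above is not quoted in the answer, so here is a rough, self-contained sketch of the four steps it describes. The input filename and the mean-plus-three-standard-deviations cut-off are assumptions for illustration, not values taken from the notebook:

import cv2
import numpy as np

img = cv2.imread('combined.jpg')                 # hypothetical stitched input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# 1. horizontal derivative: vertical seams between sub-images light up
edges = np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3))

# 2. score every pixel column by its "edginess"
column_score = edges.mean(axis=0)

# 3. keep the columns whose score stands well above the typical value
threshold = column_score.mean() + 3 * column_score.std()
cut_columns = np.where(column_score > threshold)[0]

# 4. slice at those columns (plus the outer borders), dropping slivers that
#    come from a seam being several pixels wide
bounds = np.concatenate(([0], cut_columns, [gray.shape[1]]))
pieces = [img[:, a:b] for a, b in zip(bounds[:-1], bounds[1:]) if b - a > 10]
for i, piece in enumerate(pieces):
    cv2.imwrite(f'piece_{i}.png', piece)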
72,852,694 | 2022-7-4 | https://stackoverflow.com/questions/72852694/why-do-i-get-the-loop-of-ufunc-does-not-support-argument-0-of-type-numpy-ndarra | First, I used np.array to perform operations on multiple matrices, and it was successful. import numpy as np import matplotlib.pyplot as plt f = np.array([[0.35, 0.65]]) e = np.array([[0.92, 0.08], [0.03, 0.97]]) r = np.array([[0.95, 0.05], [0.06, 0.94]]) d = np.array([[0.99, 0.01], [0.08, 0.92]]) c = np.array([[0, 1], [1, 0]]) D = np.sum(f@(e@r@d*c)) u = f@e I = np.sum(f@(e*np.log(e/u))) print(D) print(I) Outcome: 0.14538525 0.45687371996485304 Next, I tried to plot the result using one of the elements in the matrix as a variable, but an error occurred. import numpy as np import matplotlib.pyplot as plt t = np.arange(0.01, 0.99, 0.01) f = np.array([[0.35, 0.65]]) e = np.array([[1-t, t], [0.03, 0.97]]) r = np.array([[0.95, 0.05], [0.06, 0.94]]) d = np.array([[0.99, 0.01], [0.08, 0.92]]) c = np.array([[0, 1], [1, 0]]) D = np.sum(f@(e@r@d*c)) u = f@e I = np.sum(f@(e*np.log(e/u))) plt.plot(t, D) plt.plot(t, I) plt.show() It shows the error below: AttributeError Traceback (most recent call last) AttributeError: 'numpy.ndarray' object has no attribute 'log' The above exception was the direct cause of the following exception: TypeError Traceback (most recent call last) <ipython-input-14-0856df964382> in <module>() 10 11 u = f@e ---> 12 I = np.sum(f@(e*np.log(e/u))) 13 14 plt.plot(t, D) TypeError: loop of ufunc does not support argument 0 of type numpy.ndarray which has no callable log method There was no problem with the following code, so I think there was something wrong with using np.array. import numpy as np import matplotlib.pyplot as plt t = np.arange(0.01, 0.99, 0.01) y = np.log(t) plt.plot(t, y) plt.show() Any idea for this problem? Thank you very much. | You can't create a batch of matrices e from the variable t using the construct e = np.array([[1-t, t], [0.03, 0.97]]) as this would create a ragged array due to [1-t, t] and [0.03, 0.97] having different shapes. Instead, you can create e by repeating [0.03, 0.97] to match the shape of [1-t, t], then stack them together as follows. t = np.arange(.01, .99, .01) # shape (98,) _t = np.stack([t, 1-t], axis=1) # shape (98, 2) e = np.array([[.03, .97]]) # shape (1, 2) e = np.repeat(e, len(ts), axis=0) # shape (98, 2) e = np.stack([_t, e], axis=1) # shape (98, 2, 2) After this, e will be a batch of 2x2 matrices array([[[0.01, 0.99], [0.03, 0.97]], [[0.02, 0.98], [0.03, 0.97]], [[0.03, 0.97], [0.03, 0.97]], [[0.04, 0.96], [0.03, 0.97]], ... Finally, expand other variables in the batch dimension to take advantage of numpy broadcast to batch the calculation f = np.array([[0.35, 0.65]])[None,:] # shape (1,1,2) r = np.array([[0.95, 0.05], [0.06, 0.94]])[None,:] # shape (1,2,2) d = np.array([[0.99, 0.01], [0.08, 0.92]])[None,:] # shape (1,2,2) c = np.array([[0, 1], [1, 0]])[None,:] # shape (1,2,2) and only sum across the last axis to get per-matrix result. D = np.sum(f@(e@r@d*c), axis=-1) # shape (98, 1) u = f@e I = np.sum(f@(e*np.log(e/u)), axis=-1) # shape (98, 1) | 4 | 2 |
72,847,737 | 2022-7-3 | https://stackoverflow.com/questions/72847737/sqlalchemy-session-commit-exception-handing-with-rollback | I am trying to figure out the preferred way to manage session if exceptiion occurs while doing operation or committing the session. But a few examples in the document confuses me a little bit. The doc says to frame out a commit/rollback block, you can do something like this: # verbose version of what a context manager will do with Session(engine) as session: session.begin() try: session.add(some_object) session.add(some_other_object) except: session.rollback() raise else: session.commit() While this example includes the commit() call in the try block: Session = sessionmaker(bind=engine) session = Session() session.begin() try: item1 = session.query(Item).get(1) item2 = session.query(Item).get(2) item1.foo = 'bar' item2.bar = 'foo' session.commit() except: session.rollback() raise I wonder: what's the difference between these two patterns? which is the preferred way for managing transaction? The doc says both example above can be succinctly illustrated with session.begin() context manager, so does it handle the exception occurs while commit() ? Possible related answes: Here is a answer that exludes commit() from the try block, but the explanation in the comments doesn't seem sufficient to me. And this answer mentions about why including commit() in try block. | Both examples accomplish nearly the same thing. The transaction will be committed if there is no error, and rolled back if there is an error. The difference is that if there is an exception on the commit itself, the second example will roll back. It's uncommon to get an exception on the commit, but it's possible. | 4 | 3 |
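For completeness, the session.begin() context-manager form that the question's docs refer to is shown below; the commit happens as the inner block exits, so a failure during the commit is still handled (rolled back) inside the managed scope, like the second pattern above. A minimal runnable sketch against a throwaway in-memory SQLite database, with a made-up Item model (SQLAlchemy 1.4+ style):

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Item(Base):                       # hypothetical model, only for the demo
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")     # throwaway in-memory database
Base.metadata.create_all(engine)

with Session(engine) as session:
    with session.begin():               # commits on success, rolls back on error
        session.add(Item(name="first"))
        session.add(Item(name="second"))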
72,846,862 | 2022-7-3 | https://stackoverflow.com/questions/72846862/unable-to-assert-length-of-list-with-pytest | I am learning Python and I'm using Pytest to check my code as I learn. Here is some sample code I have running: str = "I love pizza" str_list = list(str) print(str_list) print(len(str_list)) With expected result printed to stdout: ['I', ' ', 'l', 'o', 'v', 'e', ' ', 'p', 'i', 'z', 'z', 'a'] 12 But if I run this test: def create_list_from_string(): str = "I love pizza" str_list = list(str) assert 123 == len(str_list) I cannot get the assert to fail. I have other tests in the file that pass when expected and fail if I purposely edit them to make them fail. So I think I have Pytest set up correctly. I know Python uses indentation for code blocks, and I verified all the indentations are 4 spaces and there's no trailing tabs or spaces. I also know that assert is not broken and I'm making some kind of newbie mistake. Thanks! | Try making the test function name and the test file name start with test_: pytest only collects and runs functions and files that match its default naming conventions, so a function named create_list_from_string is never collected and its assert never runs. | 3 | 6 |
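Applied to the snippet in the question, the renamed test could look like this; the file name test_pizza.py is only an example that satisfies pytest's default test_*.py discovery pattern:

# test_pizza.py
def test_create_list_from_string():
    s = "I love pizza"               # also avoids shadowing the built-in name str
    str_list = list(s)
    assert 123 == len(str_list)      # now collected by pytest, and fails as expected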
72,845,828 | 2022-7-3 | https://stackoverflow.com/questions/72845828/priority-of-tuple-unpacking-with-inline-if-else | Apologies in advance for the obscure title. I wasn't sure how to phrase what I encountered. Imagine that you have a title of a book alongside its author, separated by -, in a variable title_author. You scraped this information from the web so it might very well be that this item is None. Obviously you would like to separate the title from the author, so you'd use split. But in case title_author is None to begin with, you just want both title and author to be None. I figured that the following was a good approach: title_author = "In Search of Lost Time - Marcel Proust" title, author = title_author.split("-", 1) if title_author else None, None print(title, author) # ['In Search of Lost Time ', ' Marcel Proust'] None But to my surprise, title now was the result of the split and author was None. The solution is to explicitly indicate that the else clause is a tuple by means of parentheses. title, author = title_author.split("-", 1) if title_author else (None, None) print(title, author) # In Search of Lost Time Marcel Proust So why is this happening? What is the order of execution here that lead to the result in the first case? | title, author = title_author.split("-", 1) if title_author else None, None is the same as: title, author = (title_author.split("-", 1) if title_author else None), None Therefore, author is always None Explaination: From official doc An assignment statement evaluates the expression list (remember that this can be a single expression or a comma-separated list, the latter yielding a tuple) and assigns the single resulting object to each of the target lists, from left to right. That is to say, the interrupter will look for (x,y)=(a,b) and assign value as x=a and y=b. In your case, there are two interpretation, the main differece is that : title, author = (title_author.split("-", 1) if title_author else None), None is assigning two values (a list or a None and a None) to two variables and no unpacking is needed. title, author = title_author.split("-", 1) if title_author else (None, None) is actually assigning one value (a list or a tuple) to two variable, which need an unpacking step to map two variables to the two values in the list/tuple. As option 1 can be completed without unpacking, i.e. less operation, the interrupter will go with option 1 without explicit instructions. | 4 | 4 |
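A quick way to see the difference is to bind each form to a single name instead of unpacking it; the expected output is shown in the comments:

title_author = "In Search of Lost Time - Marcel Proust"

# without parentheses the conditional expression covers only the first element,
# so the result is always a 2-tuple whose second element is the literal None
a = title_author.split("-", 1) if title_author else None, None
print(a)    # (['In Search of Lost Time ', ' Marcel Proust'], None)

# with parentheses, (None, None) is the single value of the else branch
b = title_author.split("-", 1) if title_author else (None, None)
print(b)    # ['In Search of Lost Time ', ' Marcel Proust']

title_author = None
a = title_author.split("-", 1) if title_author else None, None
print(a)    # (None, None), which only looks right by coincidence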
72,836,985 | 2022-7-2 | https://stackoverflow.com/questions/72836985/ipython-passwd-not-able-to-import-with-new-2022-anaconda-download | I am simply trying to do this from IPython.lib import passwd but I get this error In [1]: from IPython.lib import passwd --------------------------------------------------------------------------- ImportError Traceback (most recent call last) Input In [1], in <cell line: 1>() ----> 1 from IPython.lib import passwd ImportError: cannot import name 'passwd' from 'IPython.lib' (/home/ubuntu/anaconda3/lib/python3.9/site-packages/IPython/lib/__init__.py) Googled the error but nothing. | I am facing the same issue with IPython version 8.4. It seems to me that the security lib is not present anymore. If you are using version 7x you should be able to import it with from IPython.lib.security import passwd as denoted in https://ipython.readthedocs.io/en/7.x/api/generated/IPython.lib.security.html?highlight=passwd However, from version 8x the module security is missing... | 5 | 6 |
72,845,443 | 2022-7-3 | https://stackoverflow.com/questions/72845443/branching-in-apache-airflow-using-taskflowapi | I can't find the documentation for branching in Airflow's TaskFlowAPI. I tried doing it the "Pythonic" way, but when ran, the DAG does not see task_2_execute_if_true, regardless of truth value returned by the previous task. @dag( schedule_interval=None, start_date=pendulum.datetime(2021, 1, 1, tz="UTC"), catchup=False, tags=['test'], ) def my_dag(): @task() def task_1_returns_boolean(): # evaluate and return boolean value return boolean_value @task() def task_2_execute_if_true(): # do_something... outcome_1 = task_1_returns_boolean() if outcome_1: outcome_2 = task_2_execute_if_true() executed = my_dag() What is the proper way of branching in TaskFlowAPI? Should I add one more function specifically for branching? | There's an example DAG in the source code: https://github.com/apache/airflow/blob/f1a9a9e3727443ffba496de9b9650322fdc98c5f/airflow/example_dags/example_branch_operator_decorator.py#L43. The syntax is: from airflow.decorators import task @task.branch(task_id="branching_task_id") def random_choice(): return "task_id_to_run" It was introduced in Airflow 2.3.0. | 4 | 6 |
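A fuller sketch of how the decorator could be wired into the DAG from the question is below (requires Airflow 2.3+). The else-branch task and the exact dependency wiring are assumptions for illustration; note that a TaskFlow task's task_id defaults to its function name, which is what the branch task returns:

import pendulum
from airflow.decorators import dag, task


@dag(
    schedule_interval=None,
    start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
    catchup=False,
    tags=['test'],
)
def my_dag():

    @task()
    def task_1_returns_boolean():
        # evaluate and return boolean value
        return True

    @task.branch()
    def choose_branch(outcome: bool):
        # return the task_id (or list of task_ids) that should run next;
        # every other downstream task is skipped
        return "task_2_execute_if_true" if outcome else "task_3_execute_if_false"

    @task()
    def task_2_execute_if_true():
        ...

    @task()
    def task_3_execute_if_false():
        ...

    outcome_1 = task_1_returns_boolean()
    choose_branch(outcome_1) >> [task_2_execute_if_true(), task_3_execute_if_false()]


executed = my_dag()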
72,842,563 | 2022-7-2 | https://stackoverflow.com/questions/72842563/how-to-activate-a-python-virtual-environment-automatically-on-login | I have a Python virtual environment named venv in the user home directory. I would like to activate this virtual environment on login. I don't want to type source venv/bin/activate each time after login. I want to type python something.py and have it always use the virtual environment. How can I do this in the user's login scripts? The user is only ever going to do python development in this particular virtual environment. | There might be better ways of doing this but the simplest way I can think of is modifying .bashrc file if you are using an OS Like Ubuntu. In your .bashrc file you can add a line to start your virtual environment. An example could be adding the following to the bottom of your .bashrc file: source myvenv/bin/activate | 5 | 6 |
72,818,247 | 2022-6-30 | https://stackoverflow.com/questions/72818247/streamlit-config-toml-file-not-changing-the-theme-of-the-web-app | I would like to change the theme of my streamlit application that I am working on. I read that I should make a directory called .streamlit/ with a file called config.toml, after creating the .toml file, it does not update the appearance of my web application. Here is the link to the app itself: https://jensen-holm-sports-sim-app-app-4a49zi.streamlitapp.com/ Here is the code in my config.toml file that is not working [theme] base="dark" primaryColor="#213767" backgroundColor="#13710d" font="monospace" and a link to all of the code that is on git hub: https://github.com/Jensen-holm/sports-sim-app any help is appriciated!! thank you | I just ran into the same problem. What I found out is that I had selected the "Dark" option via the hamburger menu at some point before creating the custom config file in the project directory. If you've done the same, the default order of precedence is overwritten and that choice is preserved. Otherwise, the default precedence should load the custom theme by default. Solution was to select the "Custom Theme" again from the hamburger menu. On every run after that changes in the custom config were reflecting properly. In my app, I'm keeping the menu hidden but it still works fine once you have set the "Custom Theme" option - you can go back and re-hide the menu (if that's your need). | 4 | 6 |
72,840,669 | 2022-7-2 | https://stackoverflow.com/questions/72840669/append-only-if-item-isnt-already-appended | In my Python application, I have the following lines: for index, codec in enumerate(codecs): for audio in filter(lambda x: x['hls']['codec_name'] == codec, job['audio']): audio['hls']['group_id'].append(index) How can I only trigger the append statement if the index hasn't already been appended? | Simply test that your index is not in your list before appending: for index, codec in enumerate(codecs): for audio in filter(lambda x: x['hls']['codec_name'] == codec, job['audio']): if index not in audio['hls']['group_id']: audio['hls']['group_id'].append(index) | 5 | 3 |
72,839,263 | 2022-7-2 | https://stackoverflow.com/questions/72839263/access-python-interpreter-in-vscode-version-controll-when-using-pre-commit | I'm using pre-commit for most of my Python projects, and in many of them, I need to use pylint as a local repo. When I want to commit, I always have to activate python venv and then commit; otherwise, I'll get the following error: black....................................................................Passed pylint...................................................................Failed - hook id: pylint - exit code: 1 Executable `pylint` not found When I use vscode version control to commit, I get the same error; I searched about the problem and didn't find any solution to avoid the error in VSCode. This is my typical .pre-commit-config.yaml: repos: - repo: https://github.com/ambv/black rev: 21.9b0 hooks: - id: black language_version: python3.8 exclude: admin_web/urls\.py - repo: local hooks: - id: pylint name: pylint entry: pylint language: python types: [python] args: - --rcfile=.pylintrc | you have ~essentially two options here -- neither are great (language: system is kinda the unsupported escape hatch so it's on you to make those things available on PATH) you could use a specific path to the virtualenv entry: venv/bin/pylint -- though that will reduce the portability. or you could start vscode with your virtualenv activated (usually code .) -- this doesn't always work if vscode is already running disclaimer: I created pre-commit | 10 | 8 |
72,804,712 | 2022-6-29 | https://stackoverflow.com/questions/72804712/how-to-accelerate-numpy-unique-and-provide-both-counts-and-duplicate-row-indices | I am attempting to find duplicate rows in a numpy array. The following code replicates the structure of my array which has n rows, m columns, and nz non-zero entries per row: import numpy as np import random import datetime def create_mat(n, m, nz): sample_mat = np.zeros((n, m), dtype='uint8') random.seed(42) for row in range(0, n): counter = 0 while counter < nz: random_col = random.randrange(0, m-1, 1) if sample_mat[row, random_col] == 0: sample_mat[row, random_col] = 1 counter += 1 test = np.all(np.sum(sample_mat, axis=1) == nz) print(f'All rows have {nz} elements: {test}') return sample_mat The code I am attempting to optimize is as follows: if __name__ == '__main__': threshold = 2 mat = create_mat(1800000, 108, 8) print(f'Time: {datetime.datetime.now()}') unique_rows, _, duplicate_counts = np.unique(mat, axis=0, return_counts=True, return_index=True) duplicate_indices = [int(x) for x in np.argwhere(duplicate_counts >= threshold)] print(f'Time: {datetime.datetime.now()}') print(f'Unique rows: {len(unique_rows)} Sample inds: {duplicate_indices[0:5]} Sample counts: {duplicate_counts[0:5]}') print(f'Sample rows:') print(unique_rows[0:5]) My output is as follows: All rows have 8 elements: True Time: 2022-06-29 12:08:07.320834 Time: 2022-06-29 12:08:23.281633 Unique rows: 1799994 Sample inds: [508991, 553136, 930379, 1128637, 1290356] Sample counts: [1 1 1 1 1] Sample rows: [[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 1 0 0 1 1 0 0 0 1 0 0 0 0 0 0 1 0 1 0] [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 1 1 1 1 0 0 0 0 1 0 0 0 0 0] [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 1 0 0 1 0 0 0 1 0 1 0 1 0 0 0] [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 1 0 0 0 0 1 1 0 0 0 0 0 0 1 1 0 1 0 0 0 0 0] [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 1 0 1 1 0 0 0 0 0 1 0 0 0 0 1 0]] I have considered using numba, but the challenge is that it does not operate using an axis parameter. Similarly, conversion to list and utilization of sets is an option, but then looping through to perform the duplicate counts seems "unpythonic". Given that I need to run this code multiple times (since I am modifying the numpy array and then needing to re-search for duplicates), the time is critical. I have also tried to use multiprocessing against this step but the np.unique seems to be blocking (i.e. even when I try to run multiple versions of this, I end up constrained to one thread running at 6% CPU capacity while the other threads sit idle). | Step 1: bit packing Since your matrix only contains binary values, you can aggressively pack the bits into uint64 values so to perform a much more efficient sort then. 
Here is a Numba implementation: import numpy as np import numba as nb @nb.njit('(uint8[:,::1],)', parallel=True) def pack_bits(mat): n, m = mat.shape res = np.zeros((n, (m+63)//64), np.uint64) for i in nb.prange(n): for bj in range(0, m, 64): val = np.uint64(0) if bj + 64 <= m: # Fast case for j in range(64): val += np.uint64(mat[i, bj+j]) << (63 - j) else: # Slow case (boundary) for j in range(m - bj): val += np.uint64(mat[i, bj+j]) << (63 - j) res[i, bj//64] = val return res @nb.njit('(uint64[:,::1], int_)', parallel=True) def unpack_bits(mat, m): n = mat.shape[0] assert mat.shape[1] == (m+63)//64 res = np.zeros((n, m), np.uint64) for i in nb.prange(n): for bj in range(0, m, 64): val = np.uint64(mat[i, bj//64]) if bj + 64 <= m: # Fast case for j in range(64): res[i, bj+j] = np.uint8((val >> (63 - j)) & 1) else: # Slow case (boundary) for j in range(m - bj): res[i, bj+j] = np.uint8((val >> (63 - j)) & 1) return res The np.unique function can be called on the much smaller packed array like in the initial code (except the resulting sorted array is a packed one and need to be unpacked). Since you do not need the indices, it is better not to compute it. Thus, return_index=True can be removed. Additionally, only the required values can be unpacked (unpacking is a bit more expensive than packing because writing a big matrix is more expensive than reading an existing one). if __name__ == '__main__': threshold = 2 n, m = 1800000, 108 mat = create_mat(n, m, 8) print(f'Time: {datetime.datetime.now()}') packed_mat = pack_bits(mat) duplicate_packed_rows, duplicate_counts = np.unique(packed_mat, axis=0, return_counts=True) duplicate_indices = [int(x) for x in np.argwhere(duplicate_counts >= threshold)] print(f'Time: {datetime.datetime.now()}') print(f'Duplicate rows: {len(duplicate_rows)} Sample inds: {duplicate_indices[0:5]} Sample counts: {duplicate_counts[0:5]}') print(f'Sample rows:') print(unpack_bits(duplicate_packed_rows[0:5], m)) Step 2: np.unique optimizations The np.unique call is sub-optimal as it performs multiple expensive internal sorting steps. Not all of them are needed in your specific case and some step can be optimized. A more efficient implementation consists in sorting the last column during a first step, then sorting the previous column, and so on until the first column is sorted similar to what a Radix sort does. Note that the last column can be sorted using a non-stable algorithm (generally faster) but the others need a stable one. This method is still sub-optimal as argsort calls are slow and the current implementation does not use multiple threads yet. Unfortunately, Numpy does not proving any efficient way to sort rows of a 2D array yet. While it is possible to reimplement this in Numba, this is cumbersome, a bit tricky to do and bug prone. Not to mention Numba introduce some overheads compared to a native C/C++ code. Once sorted, the unique/duplicate rows can be tracked and counted. 
Here is an implementation: def sort_lines(mat): n, m = mat.shape for i in range(m): kind = 'stable' if i > 0 else None mat = mat[np.argsort(mat[:,m-1-i], kind=kind)] return mat @nb.njit('(uint64[:,::1],)', parallel=True) def find_duplicates(sorted_mat): n, m = sorted_mat.shape assert m >= 0 isUnique = np.zeros(n, np.bool_) uniqueCount = 1 if n > 0: isUnique[0] = True for i in nb.prange(1, n): isUniqueVal = False for j in range(m): isUniqueVal |= sorted_mat[i, j] != sorted_mat[i-1, j] isUnique[i] = isUniqueVal uniqueCount += isUniqueVal uniqueValues = np.empty((uniqueCount, m), np.uint64) duplicateCounts = np.zeros(len(uniqueValues), np.uint64) cursor = 0 for i in range(n): cursor += isUnique[i] for j in range(m): uniqueValues[cursor-1, j] = sorted_mat[i, j] duplicateCounts[cursor-1] += 1 return uniqueValues, duplicateCounts The previous np.unique call can be replaced by find_duplicates(sort_lines(packed_mat)). Step 3: GPU-based np.unique While implementing a fast algorithm to sort row is not easy on CPU with Numba and Numpy, one can simply use CuPy to do that on the GPU assuming a Nvidia GPU is available and CUDA is installed (as well as CuPy). This solution has the benefit of being simple and significantly more efficient. Here is an example: import cupy as cp def cupy_sort_lines(mat): cupy_mat = cp.array(mat) return cupy_mat[cp.lexsort(cupy_mat.T[::-1,:])].get() The previous sort_lines call can be replaced by cupy_sort_lines. Results Here are the timings on my machine with a 6-core i5-9600KF CPU and a Nvidia 1660 Super GPU: Initial version: 15.541 s Optimized packing: 0.982 s Optimized np.unique: 0.634 s GPU-based sorting: 0.143 s (require a Nvidia GPU) Thus, the CPU-based optimized version is about 25 times faster and the GPU-based one is 109 times faster. Note that the sort take a significant time in all versions. Also, please note that the unpacking is not included in the benchmark (as seen in the provided code). It takes a negligible time as long as only few rows are unpacked and not all the full array (which takes roughtly ~200 ms on my machine). This last operation can be further optimized at the expense of a significantly more complex implementation. | 8 | 8 |
72,831,952 | 2022-7-1 | https://stackoverflow.com/questions/72831952/how-do-i-integrate-custom-exception-handling-with-the-fastapi-exception-handling | Python version 3.9, FastAPI version 0.78.0 I have a custom function that I use for application exception handling. When requests run into internal logic problems, i.e I want to send an HTTP response of 400 for some reason, I call a utility function. @staticmethod def raise_error(error: str, code: int) -> None: logger.error(error) raise HTTPException(status_code=code, detail=error) Not a fan of this approach. So I look at from fastapi import FastAPI, HTTPException, status from fastapi.respones import JSONResponse class ExceptionCustom(HTTPException): pass def exception_404_handler(request: Request, exc: HTTPException): return JSONResponse(status_code=status.HTTP_404_NOT_FOUND, content={"message": "404"}) app.add_exception_handler(ExceptionCustom, exception_404_handler) The problem I run into with the above approach is the inability to pass in the message as an argument. Any thoughts on the whole topic? | Your custom exception can have any custom attributes that you want. Let's say you write it this way: class ExceptionCustom(HTTPException): pass in your custom handler, you can do something like def exception_404_handler(request: Request, exc: HTTPException): return JSONResponse(status_code=status.HTTP_404_NOT_FOUND, content={"message": exc.detail}) Then, all you need to do is to raise the exception this way: raise ExceptionCustom(status_code=404, detail='error message') Note that you are creating a handler for this specific ExceptionCustom. If all you need is the message, you can write something more generic: class MyHTTPException(HTTPException): pass def my_http_exception_handler(request: Request, exc: HTTPException): return JSONResponse(status_code=exc.status_code, content={"message": exc.detail}) app.add_exception_handler(MyHTTPException, my_http_exception_handler) This way you can raise any exception, with any status code and any message and have the message in your JSON response. There's a detailed explanation on FastAPI docs | 8 | 11 |
72,817,748 | 2022-6-30 | https://stackoverflow.com/questions/72817748/scipy-has-no-attribute-signal | I have a file which imports a function from another file, as shown below. file1.py: # import scipy.signal import file2 file2.foo() file2.py: import scipy def foo(): scipy.signal.butter(2, 0.01, 'lowpass', analog=False) When I run file1.py I get the following error: File "file2.py", line 5, in foo scipy.signal.butter(2, 0.01, 'lowpass', analog=False) AttributeError: module 'scipy' has no attribute 'signal' However, when I uncomment line 1 from file1.py (import scipy.signal) the error disappears. Why is this happening? | With scipy, you need to import the submodule directly with either import scipy.signal or from scipy import signal. Many submodules won't work if you just import scipy. You can read more about this in the SciPy API documentation. | 4 | 7 |
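Applied to the files in the question, only the import in file2.py needs to change:

# file2.py
from scipy import signal     # or: import scipy.signal


def foo():
    return signal.butter(2, 0.01, 'lowpass', analog=False)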
72,827,704 | 2022-7-1 | https://stackoverflow.com/questions/72827704/how-to-customize-ttk-checkbutton-colors | I am looking for a way to change the background color (and the active color) of the tickbox of the ttk.Checkbutton Optimally, the background color of the box should match the background's color? What is the command to modify this in the style.configure() method? | It is possible to change the colors of the tickbox using the options indicatorbackground and indicatorforeground. style.configure("TCheckbutton", indicatorbackground="black", indicatorforeground="white", background="black", foreground="white") To change the active colors, you need to use style.map() instead of style.configure(). style.map("TCheckbutton", background=[("active", "darkgrey")]) Note that with some themes (e.g. OSX and Windows default themes), the above commands do not produce any effect because the tickbox is created with an image. If this is the case, you can change the theme to use, for instance, "alt" or "clam", with style.theme_use("clam"). If you want to keep using a theme that does not allow to change the tickbox color, you can use your own tickbox images instead: # create custom tickbox element img_unticked_box = tk.PhotoImage(file="/path/to/unticked_box_image") img_ticked_box = tk.PhotoImage(file="/path/to/ticked_box_image") style.element_create("tickbox", "image", img_unticked_box, ("selected", img_ticked_box)) # replace the checkbutton indicator with the custom tickbox in the Checkbutton's layout style.layout( "TCheckbutton", [('Checkbutton.padding', {'sticky': 'nswe', 'children': [('Checkbutton.tickbox', {'side': 'left', 'sticky': ''}), ('Checkbutton.focus', {'side': 'left', 'sticky': 'w', 'children': [('Checkbutton.label', {'sticky': 'nswe'})]})]})] ) I obtained the layout of the Checkbutton with style.layout("TCheckbutton") and then replaced 'Checkbutton.indicator' by 'Checkbutton.tickbox'. | 3 | 6 |
72,824,468 | 2022-7-1 | https://stackoverflow.com/questions/72824468/pip-installing-environment-yml-as-if-its-a-requirements-txt | I have an environment.yml file, but don't want to use Conda: name: foo channels: - defaults dependencies: - matplotlib=2.2.2 Is it possible to have pip install the dependencies inside an environment.yml file as if it's a requirements.txt file? I tried pip install -r environment.yml and it doesn't work with pip==22.1.2. | No, pip does not support this format. The format it expects for a requirements file is documented here. You'll have to convert the environment.yml file to a requirements.txt format either manually or via a script that automates this process. However, keep in mind that not all packages on Conda will be available on PyPI. | 19 | 7 |
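One way to script the conversion mentioned above is sketched below. It assumes PyYAML is installed, that every conda spec uses the simple name=version form, and that each package exists under the same name on PyPI, which, as noted, is not guaranteed; conda-only entries such as python itself would still need to be removed by hand:

import yaml    # PyYAML, an extra dependency assumed for this sketch

with open("environment.yml") as f:
    env = yaml.safe_load(f)

requirements = []
for dep in env.get("dependencies", []):
    if isinstance(dep, dict):                  # a nested "pip:" section is already pip-style
        requirements.extend(dep.get("pip", []))
    else:                                      # conda spec such as "matplotlib=2.2.2"
        name, _, version = dep.partition("=")
        requirements.append(f"{name}=={version.lstrip('=')}" if version else name)

with open("requirements.txt", "w") as f:
    f.write("\n".join(requirements) + "\n")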
72,819,980 | 2022-6-30 | https://stackoverflow.com/questions/72819980/python-selenium-beta-chrome-driver-uses-wrong-binary-path | I'm currently switching from chrome 102 to 104 Beta (since 103 has an unfixed error for my use case with a python - selenium script) I installed the Chrome Beta 104 + the 104 chrome driver. When I start up the script it recognizes that it's a 104 driver but the driver itself searches for the chrome.exe application within the old chrome path: Where it searches right now: C:\Program Files\Google\Chrome\Application\chrome.exe Where it should search: C:\Program Files\Google\Chrome Beta\Application\chrome.exe Is there some simple way of changing the binary path where the beta chrome driver searches for the exe? Something simple I could put within my python script would be lovely. | In that case, you should specify where Selenium has to look to find your Chrome executable with binary_location inside the Options class. Try this one out, assuming your chromedriver.exe and Python file are in the same folder; otherwise you'll have to specify the path of the chromedriver.exe as well. from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service options = Options() options.binary_location = r'C:\Program Files\Google\Chrome Beta\Application\chrome.exe' browser = webdriver.Chrome(options = options, service = Service("chromedriver.exe")) browser.get(your_url) | 3 | 6 |
72,813,077 | 2022-6-30 | https://stackoverflow.com/questions/72813077/can-i-insert-a-table-name-by-using-a-query-parameter | I have an SQL Alchemy engine where I try to insert parameters via sqlalchemy.sql.text to protect against SQL injection. The following code works, where I code variables for the condition and conditions values. from sqlalchemy import create_engine from sqlalchemy.sql import text db_engine = create_engine(...) db_engine.execute( text( 'SELECT * FROM table_name WHERE :condition_1 = :condition_1_value'), condition_1="name", condition_1_value="John" ) ).fetchall() However, when I try to code the variable name for table_name, it returns an error. from sqlalchemy import create_engine from sqlalchemy.sql import text db_engine = create_engine(...) db_engine.execute( text( 'SELECT * FROM :table_name WHERE :condition_1 = :condition_1_value'), table_name="table_1", condition_1="name", condition_1_value="John" ) ).fetchall() Any ideas why this does not work? EDIT: I know that it has something to do with the table_name not being a string, but I am not sure how to do it in another way. | Any ideas why this does not work? Query parameters are used to supply the values of things (usually column values), not the names of things (tables, columns, etc.). Every database I've seen works that way. So, despite the ubiquitous advice that dynamic SQL is a "Bad Thing", there are certain cases where it is simply necessary. This is one of them. table_name = "table_1" # NOTE: Do not use untrusted input here! stmt = text(f'SELECT * FROM "{table_name}" …') Also, check the results you get from trying to parameterize a column name. You may not be getting what you expect. stmt = text("SELECT * FROM table_name WHERE :condition_1 = :condition_1_value") db_engine.execute(stmt, dict(condition_1="name", condition_1_value="John")) will not produce the equivalent of SELECT * FROM table_name WHERE name = 'John' It will render the equivalent of SELECT * FROM table_name WHERE 'name' = 'John' and will not throw an error, but it will also return no rows because 'name' = 'John' will never be true. | 8 | 9 |
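Since the table name has to be interpolated rather than bound, a common mitigation is to check it against a fixed allowlist before building the statement. A minimal sketch following the answer's execute() style (the table names and the hard-coded "name" column here are placeholders):

```python
from sqlalchemy.sql import text

ALLOWED_TABLES = {"table_1", "table_2"}  # hypothetical names known at development time


def select_by_name(db_engine, table_name, condition_1_value):
    if table_name not in ALLOWED_TABLES:
        raise ValueError(f"unexpected table name: {table_name!r}")
    # the table name is interpolated (validated above); the value is still a bound parameter
    stmt = text(f'SELECT * FROM "{table_name}" WHERE name = :condition_1_value')
    return db_engine.execute(stmt, dict(condition_1_value=condition_1_value)).fetchall()
```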
72,805,719 | 2022-6-29 | https://stackoverflow.com/questions/72805719/bytesio-downloading-file-object-from-s3-but-bytestream-is-empty | See update at bottom - question slightly changed I'm trying to download a file from s3 to a file-like object using boto3's .download_fileobj method, however when I try to inspect the downloaded bytestream, it's empty. I'm not sure what I'm doing wrong however: client = boto3.client('s3') data = io.BytesIO() client.download_fileobj(Bucket='mybucket', Key='myfile.wav', Fileobj=data) print(data.read()) Which yields an empty bytestring: b'' UPDATE : kINDA SOLVED. So it turns out that adding data.seek(0) after the download_fileobj line solves the problem. In light of this, I am now looking for an answer that explains what this snippet does and why it fixes the problem. | Thank you for this, I was running into the same problem. :o) The reason this works is that file buffer objects work with an internal pointer to the current spot to read from or write to. This is important when you pass the read() method a number of bytes to read, or to continually write to the next section of the file. As the client.download_fileobj() writes to the byte stream, or any operation writes to any stream, the pointer is positioned to the end of the last write. So you need to tell the file buffer object that you want to read from the beginning of what was just written. | 8 | 8 |
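The same pointer behaviour can be seen with a plain BytesIO, with no S3 involved, which is a quick way to convince yourself why the seek(0) is needed:

```python
import io

buf = io.BytesIO()
buf.write(b"hello from s3")   # after writing, the position sits at the end of the data
print(buf.read())             # b'' - reading from the current position (the end) returns nothing
buf.seek(0)                   # rewind the position to the start of the buffer
print(buf.read())             # b'hello from s3'
```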
72,816,567 | 2022-6-30 | https://stackoverflow.com/questions/72816567/how-to-select-rows-in-pandas-dataframe-based-on-between-from-another-column | I have a dataframe organised like so: x y e A 0 0.0 1.0 0.01 1 0.1 0.9 0.03 2 0.2 1.3 0.02 ... B 0 0.0 0.5 0.02 1 0.1 0.6 0.02 2 0.2 0.9 0.04 ... etc. I would like to select rows of a A/B/etc. that fall between certain values in x. This, for example, works: p,q=0,1 indices=df.loc[("A"),"x"].between(p,q) df.loc[("A"),"y"][indices] Out: [1.0,0.9] However, this takes two lines of code, and uses chain indexing. However, what is to me the obvious way of one-lining this doesn't work: p,q=0,1 df.loc[("A",df[("A"),"x"].between(p,q)),"y"] Out: [1.0,0.9] How can I avoid chain indexing here? (Also, if anyone wants to explain how to make the "x" column into the indices and thereby avoid the '0,1,2' indices, feel free!) [Edited to clarify desired output] | You can merge your 2 lines of code by using a lambda function. >>> df.loc['A'].loc[lambda A: A['x'].between(p, q), 'y'] 1 0.9 2 1.3 Name: y, dtype: float64 The output of your code: indices=df.loc[("A"),"x"].between(p,q) output=df.loc[("A"),"y"][indices] print(output) # Output 1 0.9 2 1.3 Name: y, dtype: float64 | 4 | 2 |
72,813,425 | 2022-6-30 | https://stackoverflow.com/questions/72813425/order-of-execution-for-multiple-contextmanagers-in-python | I couldn't find the answer for this question maybe someone could help me please? Is the order of execution defined in case of using two contexts like that? with open('a.txt', 'w') as f1, open('b.txt', 'w') as f2: <some operation> Am I guaranteed that the first context (here opening 'a.txt') will be executed before second (here opening 'b.txt')? | According to the language reference: with A() as a, B() as b: SUITE is semantically equivalent to: with A() as a: with B() as b: SUITE So yes, since A() will execute before the body is executed. | 4 | 4 |
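A small, self-contained way to observe the ordering (using contextlib instead of real files so it simply prints the enter/exit sequence):

```python
from contextlib import contextmanager


@contextmanager
def managed(name):
    print(f"enter {name}")
    try:
        yield name
    finally:
        print(f"exit {name}")


with managed("a") as f1, managed("b") as f2:
    print("body")
# prints: enter a, enter b, body, exit b, exit a
```

Contexts are entered left to right and exited in reverse order, matching the nested form from the language reference.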
72,809,395 | 2022-6-30 | https://stackoverflow.com/questions/72809395/pandas-how-can-i-move-certain-columns-into-rows | Suppose I have the df below. I would like to combine the price columns and value columns so that all prices are in one column and all volumes are in another column. I would also like a third column that identified the price level. For example, unit1, unit2 and unit3. import numpy as np import pandas as pd df = pd.DataFrame( { 'uid': ['U100', 'U200', 'E100', 'E200', 'E300', 'A100', 'A200', 'A300', 'A400', 'A500'], 'location': ['US', 'US', 'EU', 'EU', 'EU', 'Asia', 'Asia', 'Asia', 'Asia', 'Asia'], 'unit1_price': [10, 20, 15, 10, 10, 10, 20, 20, 25, 25], 'unit1_vol': [100, 150, 100, 200, 150, 150, 100, 200, 200, 200], 'unit2_price': [10, 25, 30, 20, 10, 10, 10, 10, 20, 20], 'unit2_vol': [200, 200, 150, 300, 300, 200, 150, 225, 225, 250], 'unit3_price': [0, 0, 0, 20, 20, 20, 20, 20, 20, 20], 'unit3_vol': [0, 0, 0, 500, 500, 500, 500, 500, 500, 500] } ) df This is what the final df should look like: I tried using melt and I think almost have the right answer. pd.melt(df, id_vars=['uid', 'location'], value_vars=['unit1_price', 'unit1_vol', 'unit2_price', 'unit2_vol', 'unit3_price', 'unit3_vol']) This is what the partial df looks like with melt: The problem with the above is that volume and price are in the same column but I want them to be in 2 separate columns. Did I use the right function? | You can form two dataframe using pd.melt first and combine it back to become one dataframe. df1 = df.melt(id_vars=['uid', 'location'], value_vars=['unit1_price', 'unit2_price', 'unit3_price'],var_name='unit',value_name='price') df2 = df.melt(id_vars=['uid', 'location'], value_vars=['unit1_vol', 'unit2_vol', 'unit3_vol'],var_name='unit', value_name="volume") ddf = pd.concat([df1,df2['volume']],axis=1).sort_values(by=['uid','unit'],ignore_index=True) ddf['unit']=ddf['unit'].str.split('_',expand=True)[0] | 12 | 3 |
72,799,623 | 2022-6-29 | https://stackoverflow.com/questions/72799623/xticks-different-interval | How do i set xticks to 'a different interval' For instance: plt.plot(1/(np.arange(0.1,3,0.1))) returns: If I would like the x axis to be on a scale from 0 to 3, how can i do that? I've tried plt.xticks([0,1,2]) but that returns: | You want to learn about plt.xlim and adjacent functions. This causes the X axis to have limits (minimum, maximum) that you specify. Otherwise Matplotlib decides for you based on the values you try to plot. y = 1 / np.arange(0.1,3,0.1) plt.plot(y) plt.xlim(0, 3) # minimum 0, maximum 3 plt.show() Your plot uses only Y values, so the X values are automatically chosen to be 1, 2, 3, ... to pair up with each Y value you provide. If you desire to determine the X too, you can do that: x = np.arange(0.1,3,0.1) y = 1/x plt.plot(x, y) plt.xticks([0,1,2,3]) # ticks at those positions, if you don't like the automatic ones plt.show() | 4 | 1 |
72,794,939 | 2022-6-29 | https://stackoverflow.com/questions/72794939/how-to-quickly-identify-if-a-rule-in-snakemake-needs-an-input-function | I'm following the snakemake tutorial on their documentation page and really got stuck on the concept of input functions https://snakemake.readthedocs.io/en/stable/tutorial/advanced.html#step-3-input-functions Basically they define a config.yaml as follows: samples: A: data/samples/A.fastq B: data/samples/B.fastq and the Snakefile as follows without any input function: configfile: "config.yaml" rule all: input: "plots/quals.svg" rule bwa_map: input: "data/genome.fa", "data/samples/{sample}.fastq" output: "mapped_reads/{sample}.bam" threads: 12 shell: "bwa mem -t {threads} {input} | samtools view -Sb - > {output}" rule samtools_sort: input: "mapped_reads/{sample}.bam" output: "sorted_reads/{sample}.bam" shell: "samtools sort -T sorted_reads/{wildcards.sample} -O bam {input} > {output}" rule samtools_index: input: "sorted_reads/{sample}.bam" output: "sorted_reads/{sample}.bam.bai" shell: "samtools index {input}" rule bcftools_call: input: fa = "data/genome.fa", bam = expand("sorted_reads/{sample}.bam",sample=config['samples']), bai = expand("sorted_reads/{sample}.bam.bai",sample=config['samples']) output: "calls/all.vcf" shell: "bcftools mpileup -f {input.fa} {input.bam} | " "bcftools call -mv - > {output}" rule plot_quals: input: "calls/all.vcf" output: "plots/quals.svg" script: "scripts/plot-quals.py" In the tutorial they mention that this expansion happens in the initialization step: bam = expand("sorted_reads/{sample}.bam",sample=config['samples']), bai = expand("sorted_reads/{sample}.bam.bai",sample=config['samples']) and that the FASTQ paths cannot be determined for rule bwa_map in this phase. However the code works if we run as is, why is that ? Then they recommend using an input function to defer bwa_map to the next phase (DAG phase) as follows: def get_bwa_map_input_fastqs(wildcards): return config["samples"][wildcards.sample] rule bwa_map: input: "data/genome.fa", get_bwa_map_input_fastqs output: "mapped_reads/{sample}.bam" threads: 8 shell: "bwa mem -t {threads} {input} | samtools view -Sb - > {output}" I'm really confused when an input function makes sense and when it does not ? | SultanOrazbayev has a good answer already. Here's another typical example. Often, the input and output files share the same pattern (wildcards). For example, if you want to sort a file you may do: input: {name}.txt -> output: {name}.sorted.txt. Sometimes however the input files are not linked to the output by a simple pattern. An example from bioinformatics is a rule that align reads to a genome: rule align: input: reads= '{name}.fastq', genome= 'human_genome.fa', output: bam= '{name}.bam', shell: ... here the name of the genome file is unrelated to the name of the input reads file and the name of the output bam file. The rule above works because the reference genome is a concrete filename without wildcards. But: What if the choice of reference genome depends on the input fastq file? For same input reads you may need the mouse genome and for others the human genome. An input function comes handy: def get_genome(wildcards): if wildcards.name in ['bob', 'alice']: return 'human_genome.fa', if wildcards.name in ['mickey', 'jerry'], return 'mouse_genome.fa', rule align: input: reads= '{name}.fastq', genome= get_genome, output: bam= '{name}.bam', shell: ... now the reference genome is mouse or human depending on the input reads. | 3 | 4 |
72,803,062 | 2022-6-29 | https://stackoverflow.com/questions/72803062/converting-a-dictionary-of-lists-to-a-pandas-dataframe-using-predefined-headers | I have a dictionary that looks like the following: date_pair_dict = { "15-02-2022 15-02-2022": ["key 1 val 1", "key 1 val 2", "key 1 val 3"], "15-02-2022 16-02-2022": ["key 2 val 1", "key 2 val 2", "key 2 val 3"], "16-02-2022 16-02-2022": ["key 3 val 1", "key 3 val 2", "key 3 val 3"], "16-02-2022 17-02-2022": ["key 4 val 1", "key 4 val 2", "key 4 val 3"] } And a list of headers: headers = ["date pair header", "header val 1", "header val 2", "header val 3"] I would like to create a pandas.DataFrame and write this to Excel, where the format would be the following expected output: date pair header header val 1 header val 2 header val 3 15-02-2022 15-02-2022 key 1 val 1 key 1 val 2 key 1 val 3 15-02-2022 16-02-2022 key 2 val 1 key 2 val 2 key 2 val 3 16-02-2022 16-02-2022 key 3 val 1 key 3 val 2 key 3 val 3 16-02-2022 17-02-2022 key 4 val 1 key 4 val 2 key 4 val 3 Right now, I'm using this (arguably very sad) method: import pandas date_pair_dict = { "15-02-2022 15-02-2022": ["key 1 val 1", "key 1 val 2", "key 1 val 3"], "15-02-2022 16-02-2022": ["key 2 val 1", "key 2 val 2", "key 2 val 3"], "16-02-2022 16-02-2022": ["key 3 val 1", "key 3 val 2", "key 3 val 3"], "16-02-2022 17-02-2022": ["key 4 val 1", "key 4 val 2", "key 4 val 3"] } headers = ["date pair header", "header val 1", "header val 2", "header val 3"] list_of_keys, list_of_val_1, list_of_val_2, list_of_val_3 = [], [], [], [] for key in date_pair_dict.keys(): list_of_keys.append(key) val_1, val_2, val_3 = date_pair_dict.get(key) list_of_val_1.append(val_1) list_of_val_2.append(val_2) list_of_val_3.append(val_3) dataframe = pandas.DataFrame( { headers[0]: list_of_keys, headers[1]: list_of_val_1, headers[2]: list_of_val_2, headers[3]: list_of_val_3, } ) Which is not scalable whatsoever. In reality, this date_pair_dict can have any number of keys, each corresponding to a list of any length. The length of these lists will however always remain the same, and will be known beforehand (I will always predefine the headers list). Additionally, I believe this runs the risk of me having a dataframe that does not share the same order as the original keys, due to me doing the following: for key in dictionary.keys(): .... The keys are date pairs, and need to remain in order when used as the first column of the dataframe. Is there a better way to do this, preferably using a dictionary comprehension? | Like you said you can use a comprehension on your dict key/value pairs: import pandas as pd date_pair_dict = { "15-02-2022 15-02-2022": ["key 1 val 1", "key 1 val 2", "key 1 val 3"], "15-02-2022 16-02-2022": ["key 2 val 1", "key 2 val 2", "key 2 val 3"], "16-02-2022 16-02-2022": ["key 3 val 1", "key 3 val 2", "key 3 val 3"], "16-02-2022 17-02-2022": ["key 4 val 1", "key 4 val 2", "key 4 val 3"] } headers = ["date pair header", "header val 1", "header val 2", "header val 3"] df = pd.DataFrame([[k] + v for k,v in date_pair_dict.items()], columns=headers) print(df) Output: date pair header header val 1 header val 2 header val 3 0 15-02-2022 15-02-2022 key 1 val 1 key 1 val 2 key 1 val 3 1 15-02-2022 16-02-2022 key 2 val 1 key 2 val 2 key 2 val 3 2 16-02-2022 16-02-2022 key 3 val 1 key 3 val 2 key 3 val 3 3 16-02-2022 17-02-2022 key 4 val 1 key 4 val 2 key 4 val 3 | 4 | 3 |
72,801,110 | 2022-6-29 | https://stackoverflow.com/questions/72801110/sorting-a-list-of-chromosomes-in-the-correct-order | A seemingly simple problem, but one that's proving a bit vexing. I have a list of chromosomes (there are 23 chromosome - chromosomes 1 to 21, then chromosome X and chromosome Y) like so: ['chr11','chr14','chr16','chr13','chr4','chr13','chr2','chr1','chr2','chr3','chr14','chrX',] I would like to sort this in the following order : ['chr1', 'chr2','chr2','chr3','chr4','chr11','chr13','chr13', 'chr14','chr14','chr16','chrX'] However, due to the lexicographical nature of python's sort it will sort chr1, chr10, chr11, chr12...chr2, etc. as I have chromosome X, sorting by their integer values also doesn't seem like an option. would I potentially have to specify a unique key by which to sort the list? Or is there some sort of obvious solution I'm missing. | You can use natsorted, what you want is natural sorting after all ;) l = ['chr11','chr14','chr16','chr13','chr4','chr13','chr2', 'chr1','chr2','chr3','chr14','chrX','chrY'] from natsort import natsorted out = natsorted(l) output: ['chr1', 'chr2', 'chr2', 'chr3', 'chr4', 'chr11', 'chr13', 'chr13', 'chr14', 'chr14', 'chr16', 'chrX', 'chrY'] | 5 | 7 |
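If adding a dependency is not an option, the same ordering can be reproduced with a small standard-library key function; a sketch relying only on re:

```python
import re


def natural_key(s):
    # split into digit and non-digit runs so "chr2" sorts before "chr11",
    # while "chrX"/"chrY" fall after the numbered chromosomes
    return [int(part) if part.isdigit() else part for part in re.split(r"(\d+)", s)]


l = ['chr11', 'chr14', 'chr16', 'chr13', 'chr4', 'chr13', 'chr2',
     'chr1', 'chr2', 'chr3', 'chr14', 'chrX', 'chrY']
print(sorted(l, key=natural_key))
```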
72,798,967 | 2022-6-29 | https://stackoverflow.com/questions/72798967/how-to-ignore-some-errors-with-sentry-not-to-send-them | I have a project based on django (3.2.10) and sentry-sdk (1.16.0) There is my sentry-init file: from os import getenv SENTRY_URL = getenv('SENTRY_URL') if SENTRY_URL: from sentry_sdk import init from sentry_sdk.integrations.django import DjangoIntegration from sentry_sdk.integrations.redis import RedisIntegration from sentry_sdk.integrations.celery import CeleryIntegration init( dsn=SENTRY_URL, integrations=[DjangoIntegration(), RedisIntegration(), CeleryIntegration()], traces_sample_rate=1.0, send_default_pii=True, debug=True, ) I have a CustomError inherited from Exception Every time I raise the CustomError sentry-sdk sends it to the dsn-url. I want to ignore some class of error or something like this. How can I do this? | You can pass a function that filters the errors to be sent: from os import getenv SENTRY_URL = getenv('SENTRY_URL') if SENTRY_URL: from sentry_sdk import init from sentry_sdk.integrations.django import DjangoIntegration from sentry_sdk.integrations.redis import RedisIntegration from sentry_sdk.integrations.celery import CeleryIntegration def before_send(event, hint): if 'exc_info' in hint: exc_type, exc_value, tb = hint['exc_info'] if isinstance(exc_value, CustomError): # Replace CustomError with your custom error return None return event init( dsn=SENTRY_URL, integrations=[DjangoIntegration(), RedisIntegration(), CeleryIntegration()], traces_sample_rate=1.0, send_default_pii=True, debug=True, before_send=before_send ) You can find more info in the documentation. | 4 | 3 |
72,796,680 | 2022-6-29 | https://stackoverflow.com/questions/72796680/numpy-valueerror-cannot-convert-float-nan-to-integer-python | I want to insert NaN at specific locations in A. However, there is an error. I attach the expected output. import numpy as np from numpy import NaN A = np.array([10, 20, 30, 40, 50, 60, 70]) C=[2,4] A=np.insert(A,C,NaN,axis=0) print("A =",[A]) The error is <module> A=np.insert(A,C,NaN,axis=0) File "<__array_function__ internals>", line 5, in insert File "C:\Users\USER\anaconda3\lib\site-packages\numpy\lib\function_base.py", line 4678, in insert new[tuple(slobj)] = values ValueError: cannot convert float NaN to integer The expected output is [array([10, 20, NaN, 30, 40, NaN, 50, 60, 70])] | Designate a type for your array of float32 (or float16, float64, etc. as appropriate) import numpy as np A = np.array([10, 20, 30, 40, 50, 60, 70], dtype=np.float32) C=[2,4] A=np.insert(A,C,np.NaN,axis=0) print("A =",[A]) A = [array([10., 20., nan, 30., 40., nan, 50., 60., 70.], dtype=float32)] | 4 | 4 |
72,796,364 | 2022-6-29 | https://stackoverflow.com/questions/72796364/how-to-convert-dictionary-to-pandas-dataframe-in-python-when-dictionary-value-is | I want to convert Python dictionary into DataFrame. Dictionary value is a List with different length. Example: import pandas as pd data = {'A': [1], 'B': [1,2], 'C': [1,2,3]} df = pd.DataFrame.from_dict(data) But, the above code doesn't work with the following error: ValueError: All arrays must be of the same length The output that I would like to get is as follows: name value 'A' [1] 'B' [1,2] 'C' [1,2,3] | You can use: df = pd.DataFrame({'name':data.keys(), 'value':data.values()}) print (df) name value 0 A [1] 1 B [1, 2] 2 C [1, 2, 3] df = pd.DataFrame(data.items(), columns=['name','value']) print (df) name value 0 A [1] 1 B [1, 2] 2 C [1, 2, 3] Also working Series: s = pd.Series(data) print (s) A [1] B [1, 2] C [1, 2, 3] dtype: object | 5 | 10 |
72,750,043 | 2022-6-24 | https://stackoverflow.com/questions/72750043/add-timedelta-to-a-date-column-above-weeks | How would I add 1 year to a column? I've tried using map and apply but I failed miserably. I also wonder why pl.date() accepts integers while it advertises that it only accepts str or pli.Expr. A small hack workaround is: col = pl.col('date').dt df = df.with_columns(pl.when(pl.col(column).is_not_null()) .then(pl.date(col.year() + 1, col.month(), col.day())) .otherwise(pl.date(col.year() + 1,col.month(), col.day())) .alias("date")) but this won't work for months or days. I can't just add a number or I'll get a: > thread 'thread '<unnamed>' panicked at 'invalid or out-of-range date<unnamed>', ' panicked at '/github/home/.cargo/registry/src/github.com-1ecc6299db9ec823/chrono-0.4.19/src/naive/date.rsinvalid or out-of-range date:', 173:/github/home/.cargo/registry/src/github.com-1ecc6299db9ec823/chrono-0.4.19/src/naive/date.rs51 :note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace Most likely because day and month cycle while year goes to infinity. I could also do this: df = df.with_columns( pl.when(col.month() == 1) .then(pl.date(col.year(), 2, col.day())) .when(col.month() == 2) .then(pl.date(col.year(), 3, col.day())) .when(col.month() == 3) .then(pl.date(col.year(), 4, col.day())) .when(col.month() == 4) .then(pl.date(col.year(), 5, col.day())) .when(col.month() == 5) .then(pl.date(col.year(), 6, col.day())) .when(col.month() == 6) .then(pl.date(col.year(), 7, col.day())) .when(col.month() == 7) .then(pl.date(col.year(), 8, col.day())) .when(col.month() == 8) .then(pl.date(col.year(), 9, col.day())) .when(col.month() == 9) .then(pl.date(col.year(), 10, col.day())) .when(col.month() == 10) .then(pl.date(col.year(), 11, col.day())) .when(col.month() == 11) .then(pl.date(col.year(), 12, col.day())) .otherwise(pl.date(col.year() + 1, 1, 1)) .alias("valid_from") ) | Polars allows to do addition and subtraction with python's timedelta objects. However above week units things get a bit more complicated as we have to take different days of the month and leap years into account. For this polars has offset_by under the dt namespace. (pl.DataFrame({ "dates": pl.datetime_range(pl.datetime(2000, 1, 1), pl.datetime(2026, 1, 1), "1y", eager=True) }).with_columns( pl.col("dates").dt.offset_by("1y").alias("dates_and_1_yr") )) shape: (27, 2) ┌─────────────────────┬─────────────────────┐ │ dates ┆ dates_and_1_yr │ │ --- ┆ --- │ │ datetime[μs] ┆ datetime[μs] │ ╞═════════════════════╪═════════════════════╡ │ 2000-01-01 00:00:00 ┆ 2001-01-01 00:00:00 │ │ 2001-01-01 00:00:00 ┆ 2002-01-01 00:00:00 │ │ 2002-01-01 00:00:00 ┆ 2003-01-01 00:00:00 │ │ 2003-01-01 00:00:00 ┆ 2004-01-01 00:00:00 │ │ 2004-01-01 00:00:00 ┆ 2005-01-01 00:00:00 │ │ … ┆ … │ │ 2022-01-01 00:00:00 ┆ 2023-01-01 00:00:00 │ │ 2023-01-01 00:00:00 ┆ 2024-01-01 00:00:00 │ │ 2024-01-01 00:00:00 ┆ 2025-01-01 00:00:00 │ │ 2025-01-01 00:00:00 ┆ 2026-01-01 00:00:00 │ │ 2026-01-01 00:00:00 ┆ 2027-01-01 00:00:00 │ └─────────────────────┴─────────────────────┘ | 5 | 6 |
72,720,235 | 2022-6-22 | https://stackoverflow.com/questions/72720235/requirements-txt-for-pytorch-for-both-cpu-and-gpu-platforms | I am trying to create a requirements.txt to use pytorch but would like it to work on both GPU and non-GPU platforms. I do something like on my Linux GPU system: --find-links https://download.pytorch.org/whl/cu113/torch_stable.html torch==1.10.2+cu113 torchvision==0.11.3+cu113 pytorch-lightning==1.5.10 This works fine and the packages are installed and I can use the GPU-enabled pytorch. I wonder how I can modify this for the mac and non GPU users to install the non cuda package for torch and torchvision? Do I need to maintain separate requirements.txt files? | February 2024 update Check https://pytorch.org/. You will see that "CUDA is not available on MacOS, please use default package". However, you can still get performance boosts (this will depend on your hardware) by installing the MPS accelerated version of pytorch by: # MPS acceleration is available on MacOS 12.3+ pip3 install torch torchvision torchaudio This command can be generated here: https://pytorch.org/ And in order to install different Torch versions on different platforms you can use conditionals in your requirements.txt like this # for CUDA 11.8 torch on Linux --index-url https://download.pytorch.org/whl/cu118; sys_platform == "linux" torch; sys_platform == "linux" torchvision; sys_platform == "linux" pytorch-lightning; sys_platform == "linux" # for MPS accelerated torch on Mac torch; sys_platform == "darwin" torchvision; sys_platform == "darwin" pytorch-lightning; sys_platform == "darwin" This will install CUDA enabled torch and torchvision on Linux but the MPS accelerated version of them on MacOS | 5 | 7 |
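Once the environment is installed, a quick sanity check of which backend actually ended up available can help; this only reads flags and changes nothing:

```python
import torch

print(torch.__version__)               # the local build tag, e.g. a +cu118 suffix on the Linux box
print(torch.cuda.is_available())       # True only when a CUDA build and an NVIDIA GPU are both present
if hasattr(torch.backends, "mps"):     # the MPS backend exists in torch >= 1.12
    print(torch.backends.mps.is_available())  # True on Apple-silicon Macs with the default build
```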
72,773,206 | 2022-6-27 | https://stackoverflow.com/questions/72773206/selenium-python-attributeerror-webdriver-object-has-no-attribute-find-el | I am trying to get Selenium working with Chrome, but I keep running into this error message (and others like it): AttributeError: 'WebDriver' object has no attribute 'find_element_by_name' The same problem occurs with find_element_by_id(), find_element_by_class(), etc. I also could not call send_keys(). I am just running the test code provided at ChromeDriver - WebDriver for Chrome - Getting started. import time from selenium import webdriver driver = webdriver.Chrome("C:/Program Files/Chrome Driver/chromedriver.exe") # Path to where I installed the web driver driver.get('http://www.google.com/'); time.sleep(5) # Let the user actually see something! search_box = driver.find_element_by_name('q') search_box.send_keys('ChromeDriver') search_box.submit() time.sleep(5) # Let the user actually see something! driver.quit() I am using Google Chrome version 103.0.5060.53 and downloaded ChromeDriver 103.0.5060.53 from Downloads. When running the code, Chrome opens and navigates to google.com, but it receives the following output: C:\Users\Admin\Programming Projects\Python Projects\Clock In\clock_in.py:21: DeprecationWarning: executable_path has been deprecated, please pass in a Service object driver = webdriver.Chrome("C:/Program Files/Chrome Driver/chromedriver.exe") # Optional argument, if not specified will search path. DevTools listening on ws://127.0.0.1:58397/devtools/browser/edee940d-61e0-4cc3-89e1-2aa08ab16432 [9556:21748:0627/083741.135:ERROR:device_event_log_impl.cc(214)] [08:37:41.131] USB: usb_service_win.cc:415 Could not read device interface GUIDs: The system cannot find the file specified. (0x2) [9556:21748:0627/083741.149:ERROR:device_event_log_impl.cc(214)] [08:37:41.148] USB: usb_device_handle_win.cc:1048 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F) [9556:21748:0627/083741.156:ERROR:device_event_log_impl.cc(214)] [08:37:41.155] USB: usb_device_handle_win.cc:1048 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F) [9556:21748:0627/083741.157:ERROR:device_event_log_impl.cc(214)] [08:37:41.156] USB: usb_device_handle_win.cc:1048 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F) [9556:21748:0627/083741.157:ERROR:device_event_log_impl.cc(214)] [08:37:41.156] USB: usb_device_handle_win.cc:1048 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F) Traceback (most recent call last): File "C:\[REDACTED]", line 27, in <module> search_box = driver.find_element_by_name('q') AttributeError: 'WebDriver' object has no attribute 'find_element_by_name' [21324:19948:0627/083937.892:ERROR:gpu_init.cc(486)] Passthrough is not supported, GL is disabled, ANGLE is Note: I replaced the file path for this post. I don't think that the DevTools listening section is related to the issue, but I thought I would include it, just in case. | Selenium just removed that method in version 4.3.0. 
See the CHANGES: https://github.com/SeleniumHQ/selenium/blob/a4995e2c096239b42c373f26498a6c9bb4f2b3e7/py/CHANGES Selenium 4.3.0 * Deprecated find_element_by_* and find_elements_by_* are now removed (#10712) * Deprecated Opera support has been removed (#10630) * Fully upgraded from python 2x to 3.7 syntax and features (#10647) * Added a devtools version fallback mechanism to look for an older version when mismatch occurs (#10749) * Better support for co-operative multi inheritance by utilising super() throughout * Improved type hints throughout You now need to use: driver.find_element("name", "q") In your example, it would become: search_box = driver.find_element("name", "q") search_box.send_keys('ChromeDriver') search_box.submit() For improved reliability, you should consider using WebDriverWait in combination with element_to_be_clickable. | 135 | 194 |
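A short sketch of that WebDriverWait pattern applied to the question's search box (it reuses the driver object from the question; the 10-second timeout is an arbitrary choice):

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# wait up to 10 seconds for the element to be present and clickable
search_box = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.NAME, "q"))
)
search_box.send_keys("ChromeDriver")
search_box.submit()
```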
72,774,135 | 2022-6-27 | https://stackoverflow.com/questions/72774135/how-to-type-hint-a-staticmethodabstractmethodproperty-using-mypy-in-python | Given this code: from abc import ABC, abstractmethod from typing import TYPE_CHECKING class Foo(ABC): @property @staticmethod @abstractmethod def name() -> str: pass class Bar(Foo): name = "bar" class Baz(Foo): name = "baz" instances = [Bar(), Baz()] print(instances[0].name) if TYPE_CHECKING: reveal_type(instances[0].name) Ran by the python interpreter, it prints (as expected): bar However, ran by mypy type checker, it prints (unexpectedly): main.py:23: note: Revealed type is "def () -> builtins.str" It looks like the type is wrongly inferred (see playground). Is it possible to fix that somehow? If possible, I would like to fix it "once and for all" (like in the Foo class) because I'm accessing .name in multiple places. | Python 3.11 disallowed wrapping of @property using class decorators such as @classmethod and @staticmethod (see GH#89519). This means the provided snippet shouldn't be considered as valid Python code, and latest versions of Mypy actually warns about it: main.py:6: error: Only instance methods can be decorated with @property [misc] main.py:23: note: Revealed type is "def () -> builtins.str" Found 1 error in 1 file (checked 1 source file) The easiest workaround is simply to use an instance method like so: class Foo(ABC): @property @abstractmethod def name(self) -> str: pass See also: GH#13035. | 4 | 0 |
72,779,926 | 2022-6-28 | https://stackoverflow.com/questions/72779926/gunicorn-cuda-cannot-re-initialize-cuda-in-forked-subprocess | I am creating an inference service with torch, gunicorn and flask that should use CUDA. To reduce resource requirements, I use the preload option of gunicorn, so the model is shared between the worker processes. However, this leads to an issue with CUDA. The following code snipped shows a minimal reproducing example: from flask import Flask, request import torch app = Flask('dummy') model = torch.rand(500) model = model.to('cuda:0') @app.route('/', methods=['POST']) def f(): data = request.get_json() x = torch.rand((data['number'], 500)) x = x.to('cuda:0') res = x * model return { "result": res.sum().item() } Starting the server with CUDA_VISIBLE_DEVICES=1 gunicorn -w 3 -b $HOST_IP:8080 --preload run_server:app lets the service start successfully. However, once doing the first request (curl -X POST -d '{"number": 1}'), the worker throws the following error: [2022-06-28 09:42:00,378] ERROR in app: Exception on / [POST] Traceback (most recent call last): File "/home/user/.local/lib/python3.6/site-packages/flask/app.py", line 2447, in wsgi_app response = self.full_dispatch_request() File "/home/user/.local/lib/python3.6/site-packages/flask/app.py", line 1952, in full_dispatch_request rv = self.handle_user_exception(e) File "/home/user/.local/lib/python3.6/site-packages/flask/app.py", line 1821, in handle_user_exception reraise(exc_type, exc_value, tb) File "/home/user/.local/lib/python3.6/site-packages/flask/_compat.py", line 39, in reraise raise value File "/home/user/.local/lib/python3.6/site-packages/flask/app.py", line 1950, in full_dispatch_request rv = self.dispatch_request() File "/home/user/.local/lib/python3.6/site-packages/flask/app.py", line 1936, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/home/user/project/run_server.py", line 14, in f x = x.to('cuda:0') File "/home/user/.local/lib/python3.6/site-packages/torch/cuda/__init__.py", line 195, in _lazy_init "Cannot re-initialize CUDA in forked subprocess. " + msg) RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method I load the model in the parent process and it's accessible to each forked worker process. The problem occurs when creating a CUDA-backed tensor in the worker process. This re-initializes the CUDA context in the worker process, which fails because it was already initialized in the parent process. If we set x = data['number'] and remove x = x.to('cuda:0'), the inference succeeds. Adding torch.multiprocessing.set_start_method('spawn') or multiprocessing.set_start_method('spawn') won't change anything, probably because gunicorn will definitely use fork when being started with the --preload option. A solution could be not using the --preload option, which leads to multiple copies of the model in memory/GPU. But this is what I am trying to avoid. Is there any possibility to overcome this issue without loading the model separately in each worker process? | Reason for the Error As correctly stated in the comments by @Newbie, the issue isn't the model itself, but the CUDA context. When new child processes are forked, the parent's memory is shared read-only with the child, but the CUDA context doesn't support this sharing, it must be copied to the child. Hence, it reports above-mentioned error. 
Spawn instead of Fork To resolve this issue, we have to change the start method for the child processes from fork to spawn with multiprocessing.set_start_method. The following simple example works fine: import torch import torch.multiprocessing as mp def f(y): y[0] = 1000 if __name__ == '__main__': x = torch.zeros(1).cuda() x.share_memory_() mp.set_start_method('spawn') p = mp.Process(target=f, args=(x,), daemon=True) p.start() p.join() print("x =", x.item()) When running this code, a second CUDA context is initialized (this can be observed via watch -n 1 nvidia-smi in a second window), and f is executed after the context was initialized completely. After this, x = 1000.0 is printed on the console, thus, we confirmed that the tensor x was successfully shared between the processes. However, Gunicorn internally uses os.fork to start the worker processes, so multiprocessing.set_start_method has no influence on Gunicorn's behavior. Consequently, initializing the CUDA context in the root process must be avoided. Solution for Gunicorn In order to share the model among the worker processes, we thus must load the model in one single process and share it with the workers. Luckily, sending a CUDA tensor via a torch.multiprocessing.Queue to another process doesn't copy the parameters on the GPU, so we can use those queues for this problem. import time import torch import torch.multiprocessing as mp def f(q): y = q.get() y[0] = 1000 def g(q): x = torch.zeros(1).cuda() x.share_memory_() q.put(x) q.put(x) while True: time.sleep(1) # this process must live as long as x is in use if __name__ == '__main__': queue = mp.Queue() pf = mp.Process(target=f, args=(queue,), daemon=True) pf.start() pg = mp.Process(target=g, args=(queue,), daemon=True) pg.start() pf.join() x = queue.get() print("x =", x.item()) # Prints x = 1000.0 For the Gunicorn server, we can use the same strategy: A model server process loads the model and serves it to each new worker process after its fork. In the post_fork hook the worker requests and receives the model from the model server. A Gunicorn configuration could look like this: import logging from client import request_model from app import app logging.basicConfig(level=logging.INFO) bind = "localhost:8080" workers = 1 zmq_url = "tcp://127.0.0.1:5555" def post_fork(server, worker): app.config['MODEL'], app.config['COUNTER'] = request_model(zmq_url) In the post_fork hook, we call request_model to get a model from the model server and store the model in the configuration of the Flask application. The method request_model is defined in my example in the file client.py and defined as follows: import logging import os from torch.multiprocessing.reductions import ForkingPickler import zmq def request_model(zmq_url: str): logging.info("Connecting") context = zmq.Context() with context.socket(zmq.REQ) as socket: socket.connect(zmq_url) logging.info("Sending request") socket.send(ForkingPickler.dumps(os.getpid())) logging.info("Waiting for a response") model = ForkingPickler.loads(socket.recv()) logging.info("Got response from object server") return model We make use of ZeroMQ for inter-process communication here because it allows us to reference servers by name/address and to outsource the server code into its own application. multiprocessing.Queue and multiprocessing.Process apparently don't work well with Gunicorn. 
multiprocessing.Queue uses the ForkingPickler internally to serialize the objects, and the module torch.multiprocessing alters it in a way that Torch data structures can be serialized appropriately and reliably. So, we use this class to serialize our model to send it to the worker processes. The model is loaded and served in an application that is completely separate from Gunicorn and defined in server.py: from argparse import ArgumentParser import logging import torch from torch.multiprocessing.reductions import ForkingPickler import zmq def load_model(): model = torch.nn.Linear(10000, 50000) model.cuda() model.share_memory() counter = torch.zeros(1).cuda() counter.share_memory_() return model, counter def share_object(obj, url): context = zmq.Context() socket = context.socket(zmq.REP) socket.bind(url) while True: logging.info("Waiting for requests on %s", url) message = socket.recv() logging.info("Got a message from %d", ForkingPickler.loads(message)) socket.send(ForkingPickler.dumps(obj)) if __name__ == '__main__': parser = ArgumentParser(description="Serve model") parser.add_argument("--listen-address", default="tcp://127.0.0.1:5555") args = parser.parse_args() logging.basicConfig(level=logging.INFO) logging.info("Loading model") model = load_model() share_object(model, args.listen_address) For this test, we use a model of about 2GB in size to see an effect on the GPU memory allocation in nvidia-smi and a small tensor to verify that the data is actually shared among the processes. Our sample flask application runs the model with a random input, counts the number of requests and returns both results: from flask import Flask import torch app = Flask(__name__) @app.route("/", methods=["POST"]) def infer(): model: torch.nn.Linear = app.config['MODEL'] counter: torch.Tensor = app.config['COUNTER'] counter[0] += 1 # not thread-safe input_features = torch.rand(model.in_features).cuda() return { "result": model(input_features).sum().item(), "counter": counter.item() } Test The example can be run as follows: $ python server.py & INFO:root:Waiting for requests on tcp://127.0.0.1:5555 $ gunicorn -c config.py app:app [2023-02-01 16:45:34 +0800] [24113] [INFO] Starting gunicorn 20.1.0 [2023-02-01 16:45:34 +0800] [24113] [INFO] Listening at: http://127.0.0.1:8080 (24113) [2023-02-01 16:45:34 +0800] [24113] [INFO] Using worker: sync [2023-02-01 16:45:34 +0800] [24186] [INFO] Booting worker with pid: 24186 INFO:root:Connecting INFO:root:Sending request INFO:root:Waiting for a response INFO:root:Got response from object server Using nvidia-smi, we can observe that now, two processes are using the GPU, and one of them allocates 2GB more VRAM than the other. Querying the flask application also works as expected: $ curl -X POST localhost:8080 {"counter":1.0,"result":-23.956459045410156} $ curl -X POST localhost:8080 {"counter":2.0,"result":-8.161510467529297} $ curl -X POST localhost:8080 {"counter":3.0,"result":-37.823692321777344} Let's introduce some chaos and terminate our only Gunicorn worker: $ kill 24186 [2023-02-01 18:02:09 +0800] [24186] [INFO] Worker exiting (pid: 24186) [2023-02-01 18:02:09 +0800] [4196] [INFO] Booting worker with pid: 4196 INFO:root:Connecting INFO:root:Sending request INFO:root:Waiting for a response INFO:root:Got response from object server It's restarting properly and ready to answer our requests. Benefit Initially, the amount of required VRAM for our service was (SizeOf(Model) + SizeOf(CUDA context)) * Num(Workers). 
By sharing the weights of the model, we can reduce this by SizeOf(Model) * (Num(Workers) - 1) to SizeOf(Model) + SizeOf(CUDA context) * Num(Workers). Caveats The reliability of this approach relies on the single model server process. If that process terminates, not only will newly started workers get stuck, but the models in the existing workers will become unavailable and all workers crash at once. The shared tensors/models are only available as long as the server process is running. Even if the model server and Gunicorn workers are restarted, a short outage is certainly unavoidable. In a production environment, you thus should make sure this server process is kept alive. Additionally, sharing data among different processes can have side effects. When sharing changeable data, proper locks must be used to avoid race conditions. | 22 | 14 |
72,710,857 | 2022-6-22 | https://stackoverflow.com/questions/72710857/figure-show-works-only-for-figures-managed-by-pyplot | There's bug reported about using matplotlib.pyplot for matplotlib 3.5.1, so I am trying to use matplotlib.figure.Figure to draw figure and it work fine. How can I view the graph in matplotlib for the Figure when I cannot call plt.show? Calling fig.show will give the following exception: Traceback (most recent call last): File "<module1>", line 23, in <module> File "C:\Software\Python\lib\site-packages\matplotlib\figure.py", line 2414, in show raise AttributeError( AttributeError: Figure.show works only for figures managed by pyplot, normally created by pyplot.figure() Demo code to show this issue: import numpy as np import matplotlib.pyplot as plt from matplotlib.figure import Figure x = np.linspace(0, 10, 500) y = np.sin(x**2)+np.cos(x) # ------------------------------------------------------------------------------------ fig, ax = plt.subplots() ax.plot(x, y, label ='Line 1') ax.plot(x, y - 0.6, label ='Line 2') plt.show() # It work, but I cannot use it for the scaling bug in matplotlib 3.5.1 # ------------------------------------------------------------------------------------ fig = Figure(figsize=(5, 4), dpi=100) ax = fig.add_subplot() ax.plot(x, y, label ='Line 1') ax.plot(x, y - 0.6, label ='Line 2') fig.show() # Get exception here | It's not clear if your final goal is: simply to use fig.show() or to use fig.show() specifically with a raw Figure() object 1. If you simply want to use fig.show() Then the first code block with plt.subplots() will work just fine by replacing plt.show() with fig.show(): fig, ax = plt.subplots(figsize=(5, 4)) # if you use plt.subplots() ax.plot(x, y, label='Line 1') ax.plot(x, y - 0.6, label='Line 2') # plt.show() fig.show() # fig.show() will work just fine 2. If you specifically want to use a raw Figure() object Then the problem is that it lacks a figure manager: If the figure was not created using pyplot.figure, it will lack a FigureManagerBase, and this method will raise an AttributeError. That means you'd need to create the figure manager by hand, but it's not clear why you'd want to do this since you'd just be reproducing the plt.subplots() or plt.figure() methods. Note that using fig.show() might give a backend warning (not error): UserWarning: Matplotlib is currently using module://matplotlib_inline.backend_inline, which is a non-GUI backend, so cannot show the figure. This is a separate issue and is explained in more detail in the Figure.show docs: Warning: This does not manage an GUI event loop. Consequently, the figure may only be shown briefly or not shown at all if you or your environment are not managing an event loop. Use cases for Figure.show include running this from a GUI application (where there is persistently an event loop running) or from a shell, like IPython, that install an input hook to allow the interactive shell to accept input while the figure is also being shown and interactive. Some, but not all, GUI toolkits will register an input hook on import. See Command prompt integration for more details. If you're in a shell without input hook integration or executing a python script, you should use pyplot.show with block=True instead, which takes care of starting and running the event loop for you. | 5 | 2 |
72,782,100 | 2022-6-28 | https://stackoverflow.com/questions/72782100/for-loop-in-c-vs-for-loop-in-python | I was writing a method that would calculate the value of e^x. The way I implemented this in python was as follows. import math def exp(x): return sum([ x**n/math.factorial(n) for n in range(0, 100) ]) This would return the value of e^x very well. But when I tried to implement the same method in c#, it didn't output the same value as it did in python. The following was the implementation in c#. static double exp(int x) { double FinalAnswer = 0; for (int j = 0; j <= 100; j++) { FinalAnswer += (Math.Pow(x, j))/Factorial(j); } return FinalAnswer; } The output for this code was an infinity symbol at first. To resolve this I just reduced the number of times the loop ran. The output of the code in c# where the loop only ran 10 times was pretty close to the output in python where the loop ran 100 times. My question is that what is going on between the two loops in different programming languages. At first I thought that the expression that I was using in my method to calculate e^x was converging quickly. But how does a loop that runs 10 times produce an output that matches the output of a loop that runs 100 times. Also, When I increased the for loop in c# to 20 and 30, the values of e^x for x > 3 were way off. Could someone explain what is going on here? | What you're likely running into here is integer overflow with the C# version of the Factorial function (at least your implementation of it, or wherever its coming from). In C#, an int is a numerical type stored in 32 bits of memory, which means it's bounded by -2^31 <= n <= 2^31 - 1 which is around +/- 2.1 billion. You could try using a long type, which is a 64 bit numerical type, however for even larger upper bounds in your for loop, like getting close to 100, you're going to overflow long as well. When you run the Factorial function in C#, it starts off normally for the first little while, however if you keep going, you'll see that it all of a sudden jumps into negative numbers, and if you keep going even further than that, it'll get to 0 and stop changing. You're seeing the output of infinity due to division by 0, and C# has a way of handling that with doubles; that being to just return double.PositiveInfinity. The reason why this doesn't happen in python is that it uses a variable number of bits to store its numerical values. Added note: What you might also want to try is using a Factorial function that works with the double type instead of int or long, however by doing this, you'll lose precision on what the exact value is, but you get more range as the magnitude of the number you can store is larger Further Note: As mentioned in the comments, C# has a type called BigInteger which is designed to handle huge numbers like the values you would expect from large inputs to a Factorial function. You can find a reference to the BigInteger docs here What you can do is calculate each component of the factorial function separately with the power you're using. Here's what I mean: public decimal Exp(decimal power, int accuracy = 100) { decimal runningTotal = 1; decimal finalValue = 1; for (int i = 1; i <= accuracy; i++) { runningTotal *= power/i; finalValue += runningTotal; } return finalValue; } | 14 | 20 |
72,729,999 | 2022-6-23 | https://stackoverflow.com/questions/72729999/python-is-it-a-good-practice-to-rely-on-import-to-execute-code | In Python, is it a good practice to rely on import to execute code, like in the example below? The code in mod.py is supposed to load some config, and needs to be executed once only. It can use more complex logic, but its purpose is to establish values of some parameters, later used as configuration by main.py. # --- mod.py --- param1 = 'abc' param2 = 'def' # ... # --- main.py --- import mod p1 = mod.param1 p2 = mod.param2 # (then calls functions from other components, which use p1, p2, ... as arguments) | Defining things in an additional module is perfectly fine - variables, classes, functions etc. When the module is imported, as long as you don't use from ... import * your namespace does not get cluttered and you can extract standalone and/or repeated fragments to have cleaner code. It's pretty much an intended use for modules. What is not so good is having code with side effects that gets executed on import. This here gives a nice example of why it's not a good idea: Say "no" to import side-effects in Python | 6 | 9 |
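To make the distinction concrete, a small sketch contrasting the styles (the extra module names and the load_config helper are made up for illustration):

```python
# mod.py - definitions only: importing this is harmless and idiomatic
PARAM1 = 'abc'
PARAM2 = 'def'

# mod_noisy.py - work happens merely because someone imported it (the pattern to avoid)
print('loading configuration...')   # side effect on import: logging, file I/O, network calls, ...

# mod_lazy.py - heavier set-up wrapped in a function so the caller decides when it runs
def load_config():
    # a hypothetical place for file reads, environment lookups, etc.
    return {'param1': 'abc', 'param2': 'def'}
```

Plain constant definitions, as in the question's mod.py, fall on the harmless side; it is the noisy variant that the linked article argues against.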
72,712,342 | 2022-6-22 | https://stackoverflow.com/questions/72712342/how-to-use-pyupdater | I Have a main.py file for my Tkinter app. I export it to a standalone.exe with Pyinstaller. I want that when I start the .exe to update the .exe if a new version was deployed in the directory where I export my program. Apparently we can do that with PyUpdater but I didn't find how on StackOverflow. | If you're trying to push updates/patches to a frozen python program, read this explanation in the PyUpdater documentation. Instead of attempting make your app auto-update (i.e. push updates to frozen apps) a much more straightforward approach would be to use Inno Setup. | 5 | 0 |
72,777,873 | 2022-6-27 | https://stackoverflow.com/questions/72777873/how-to-add-multiple-embedded-images-to-an-email-in-python | This question is really a continuation of this answer https://stackoverflow.com/a/49098251/19308674. I'm trying to add multiple embedded images (not just one) to the email content. I want to do it in a way that I loop through a list of images, in addition, there will be different text next to each image. Something like this for example as you can see in Weather Next 10 days I want to loop through images from a folder and next to each image there will be some different text as in the example. from email.message import EmailMessage from email.utils import make_msgid import mimetypes msg = EmailMessage() # generic email headers msg['Subject'] = 'Hello there' msg['From'] = 'ABCD <[email protected]>' msg['To'] = 'PQRS <[email protected]>' # set the plain text body msg.set_content('This is a plain text body.') # now create a Content-ID for the image image_cid = make_msgid(domain='example.com') # if `domain` argument isn't provided, it will # use your computer's name # set an alternative html body msg.add_alternative("""\ <html> <body> <p>This is an HTML body.<br> It also has an image. </p> <img src="cid:{image_cid}"> </body> </html> """.format(image_cid=image_cid[1:-1]), subtype='html') # image_cid looks like <[email protected]> # to use it as the img src, we don't need `<` or `>` # so we use [1:-1] to strip them off # now open the image and attach it to the email with open('path/to/image.jpg', 'rb') as img: # know the Content-Type of the image maintype, subtype = mimetypes.guess_type(img.name)[0].split('/') # attach it msg.get_payload()[1].add_related(img.read(), maintype=maintype, subtype=subtype, cid=image_cid) # the message is ready now # you can write it to a file # or send it using smtplib | If I'm able to guess what you are trying to ask, the solution is simply to generate a unique cid for each image. from email.message import EmailMessage from email.utils import make_msgid # import mimetypes msg = EmailMessage() msg["Subject"] = "Hello there" msg["From"] = "ABCD <[email protected]>" msg["To"] = "PQRS <[email protected]>" # create a Content-ID for each image image_cid = [make_msgid(domain="example.com")[1:-1], make_msgid(domain="example.com")[1:-1], make_msgid(domain="example.com")[1:-1]] msg.set_content("""\ <html> <body> <p>This is an HTML body.<br> It also has three images. </p> <img src="cid:{image_cid[0]}"><br/> <img src="cid:{image_cid[1]}"><br/> <img src="cid:{image_cid[2]}"> </body> </html> """.format(image_cid=image_cid), subtype='html') for idx, imgtup in enumerate([ ("path/to/first.jpg", "jpeg"), ("file/name/of/second.png", "png"), ("path/to/third.gif", "gif")]): imgfile, imgtype = imgtup with open(imgfile, "rb") as img: msg.add_related( img.read(), maintype="image", subtype=imgtype, cid=f"<{image_cid[idx]}>") # The message is ready now. # You can write it to a file # or send it using smtplib Kudos for using the modern EmailMessage API; we still see way too many questions which blindly copy/paste the old API from Python <= 3.5 with MIMEMultipart etc etc. I took out the mimetypes image format quessing logic in favor of spelling out the type of each image in the code. If you need Python to guess, you know how to do that, but for a small static list of images, it seems to make more sense to just specify each, and avoid the overhead as well as the unlikely but still not impossible problem of having the heuristics guess wrong. 
I'm guessing your images will all use the same format, and so you could actually simply hardcode subtype="png" or whatever. It should hopefully be obvious how to add more per-image information into the loop over image tuples, though if your needs go beyond the trivial, you'll probably want to encapsulate the image and its various attributes into a simple class. Your message apparently makes no sense for a recipient who cannot access the HTML part, so I took out the bogus text/plain part you had. You were effectively sending a different message entirely to recipients whose preference is to view plain text over HTML; if that was genuinely your intent, please stop it. If you are unable to provide the same information in the plain text version as in the HTML version, at least don't make it look to those recipients like you had nothing of importance to say in the first place. Tangentially, please don't fake email addresses of domains you don't own. You will end up tipping off the spammers and have them trying to send unsolicited messages to an innocent third party. Always use IANA-reserved domains like example.com, example.org etc which are guaranteed to never exist in reality. I edited your question to fix this. | 4 | 7 |
72,781,458 | 2022-6-28 | https://stackoverflow.com/questions/72781458/how-do-i-wait-for-ray-on-actor-class | I am developing Actor class and ray.wait() to collect the results. Below is the code and console outputs which is collecting the result for only 2 Actors when there are 3 Actors. import time import ray @ray.remote class Tester: def __init__(self, param): self.param = param def run(self): return self.param params = [0,1,2] testers = [] for p in params: tester = Tester.remote(p) testers.append(tester) runs = [] for i, tester in enumerate(testers): runs.append(tester.run.remote()) while len(runs): done_id, result_ids = ray.wait(runs) #runs size is not decreasing result = ray.get(done_id[0]) print('result:{}'.format(result)) time.sleep(1) result:2 (pid=819202) (pid=819200) (pid=819198) result:1 result:0 result:0 result:0 result:0 result:0 ... ... ... The console is printing out forever because the runs variable's size is not reduced. When I call ray.wait(runs) and get the done_id, runs's element with the done_id should be removed, but it is not removed. I want the console output to be like below. result:2 (pid=819202) (pid=819200) (pid=819198) result:1 result:0 | The script you provided is using ray.wait incorrectly. The following code does what you want: import time import ray @ray.remote class Tester: def __init__(self, param): self.param = param def run(self): return self.param params = [0, 1, 2] # I use list comprehensions instead of for loops for terseness. testers = [Tester.remote(p) for p in params] not_done_ids = [tester.run.remote() for tester in testers] # len() is not required to check that the list is empty. while not_done_ids: # Replace not_done_ids with the list of object references that aren't # ready. Store the list of object references that are ready in done_ids. # timeout=1 means sleep at most 1 second, do not sleep if there are # new object references that are ready. done_ids, not_done_ids = ray.wait(not_done_ids, timeout=1) # ray.get can take an iterable of object references. done_return_values = ray.get(done_ids) # Process each result. for result in done_return_values: print(f'result: {result}') I added the following fixes: ray.wait returns two lists, a list of objects that are ready, and a list of objects that may or may not be ready. You should iterate over the first list to get all object references that are ready. Your while loop goes forever until the list is empty. I simply replaced the runs list with not_done_ids so that once all object references are ready, the while loop breaks. ray.wait supports sleeping, with timeout. I removed your sleep and added timeout=1, which enables the program to run more efficiently (there is no sleep if another object is ready!). | 4 | 2 |
72,753,582 | 2022-6-25 | https://stackoverflow.com/questions/72753582/virtualenv-cannot-find-newly-installed-python-version | I'm running Ubuntu 20.04 and I want to start a project using Python 3.10. I used an install guide for Python 3.10 (this one), installed it using the deadsnakes PPA, and that was fine: $ python3.10 Python 3.10.5 (main, Jun 11 2022, 16:53:24) [GCC 9.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> I even switched my default python to 3.10 for good measure, using this, and that also works. $ python Python 3.10.5 (main, Jun 11 2022, 16:53:24) [GCC 9.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> Yet I cannot build a virtual environment: $ virtualenv myenv -p python3.10 RuntimeError: failed to find interpreter for Builtin discover of python_spec='python3.10' And if I try to rely on the default, it gives me python3.8. I had been using python3.8 before alright, but I don't know where that setting is coming from. Pyenv is installed, I don't know if that's interfering nor not. $ virtualenv myenv created virtual environment CPython3.8.10.final.0-64 in 110ms creator CPython3Posix(dest=/home/jokea/FlorA/fl-scraper/myenv, clear=False, global=False) seeder FromAppData(download=False, pip=latest, setuptools=latest, wheel=latest, pkg_resources=latest, via=copy, app_data_dir=/home/jokea/.local/share/virtualenv/seed-app-data/v1.0.1.debian.1) activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator I just want to make a virtualenv with Python3.10. What could I be missing? | I was running into the same issue with a fresh python3.7 install. I managed to fix it by installing distutils. sudo apt-get install python3.7-distutils | 5 | 7 |
72,766,397 | 2022-6-27 | https://stackoverflow.com/questions/72766397/abbreviation-similarity-between-strings | I have a use case in my project where I need to compare a key-string with a lot many strings for similarity. If this value is greater than a certain threshold, I consider those strings "similar" to my key and based on that list, I do some further calculations / processing. I have been exploring fuzzy matching string similarity stuff, which use edit distance based algorithms like "levenshtein, jaro and jaro-winkler" similarities. Although they work fine, I want to have a higher similarity score if one string is "abbreviation" of another. Is there any algorithm/ implementation I can use for this. Note: language: python3 packages explored: fuzzywuzzy, jaro-winkler Example: using jaro_winkler similarity: >>> jaro.jaro_winkler_metric("wtw", "willis tower watson") 0.7473684210526316 >>> jaro.jaro_winkler_metric("wtw", "willistowerwatson") 0.7529411764705883 using levenshtein similarity: >>> fuzz.ratio("wtw", "willis tower watson") 27 >>> fuzz.ratio("wtw", "willistowerwatson") 30 >>> fuzz.partial_ratio("wtw", "willistowerwatson") 67 >>> fuzz.QRatio("wtw", "willistowerwatson") 30 In these kind of cases, I want score to be higher (>90%) if possible. I'm ok with few false positives as well, as they won't cause too much issue with my further calculations. But if we match s1 and s2 such that s1 is fully contained in s2 (or vice versa), their similarity score should be much higher. Edit: Further Examples for my Use-Case For me, spaces are redundant. That means, wtw is considered abbreviation for "willistowerwatson" and "willis tower watson" alike. Also, stove is a valid abbreviation for "STack OVErflow" or "STandardOVErview" A simple algo would be to start with 1st char of smaller string and see if it is present in the larger one. Then check for 2nd char and so on until the condition satisfies that 1st string is fully contained in 2nd string. This is a 100% match for me. Further examples like wtwx to "willistowerwatson" could give a score of, say 80% (this can be based on some edit distance logic). Even if I can find a package which gives either True or False for abbreviation similarity would also be helpful. | You can use a recursive algorithm, similar to sequence alignment. Just don't give penalty for shifts (as they are expected in abbreviations) but give one for mismatch in first characters. This one should work, for example: def abbreviation(abr,word,penalty=1): if len(abr)==0: return 0 elif len(word)==0: return penalty*len(abr)*-1 elif abr[0] == word[0]: if len(abr)>1: return 1 + max(abbreviation(abr[1:],word[1:]), abbreviation(abr[2:],word[1:])-penalty) else: return 1 + abbreviation(abr[1:],word[1:]) else: return abbreviation(abr,word[1:]) def compute_match(abbr,word,penalty=1): score = abbreviation(abbr.lower(), word.lower(), penalty) if abbr[0].lower() != word[0].lower(): score-=penalty score = score/len(abbr) return score print(compute_match("wtw", "willis tower watson")) print(compute_match("wtwo", "willis tower watson")) print(compute_match("stove", "Stackoverflow")) print(compute_match("tov", "Stackoverflow")) print(compute_match("wtwx", "willis tower watson")) The output is: 1.0 1.0 1.0 0.6666666666666666 0.5 Indicating that wtw and wtwo are perfectly valid abbreviations for willistowerwatson, that stove is a valid abbreviation of Stackoverflow but not tov, which has the wrong first character. 
And wtwx is only a partially valid abbreviation for willistowerwatson because it ends with a character that does not occur in the full name. | 7 | 0 |
72,719,556 | 2022-6-22 | https://stackoverflow.com/questions/72719556/how-to-add-virtualenv-to-pythonnet | I am not able to load a virtual environment I have using virtual-env in the same directory as the C# file. Here is my code var eng = IronPython.Hosting.Python.CreateEngine(); var scope = eng.CreateScope(); // Load Virtual Env ICollection<string> searchPaths = eng.GetSearchPaths(); searchPaths.Add(@"/Users/Desktop/CSharpProjects/demo1/.venv/lib"); searchPaths.Add(@"/Users/Desktop/CSharpProjects/demo1/.venv/lib/site-packages"); searchPaths.Add(AppDomain.CurrentDomain.BaseDirectory); eng.SetSearchPaths(searchPaths); string file = @"script.py"; eng.ExecuteFile(file, scope); Unhandled exception. IronPython.Runtime.Exceptions.ImportException: No module named 'numpy' Python code is which I can execute on the terminal of the virtualenv created. import numpy as np def name(a, b=1): return np.add(a,b) UPDATE: Seems like IronPython3 is quite hopeless, I will accept an implementation in Pythonnet! Here is my current code on Pythonnet and I am using NuGet - Pythonnet prerelease 3.0.0-preview2022-06-27 The following works fine as it uses the system@s python 3.7, however I would like it to use the virtualenv located in C:\envs\venv2. How can I modify the below code to use the virtual environment located in C:\envs\venv2? My class1.cs is: using Python.Runtime; using System; namespace ConsoleApp1 { public class PythonOperation { PyModule scope; public void Initialize() { Runtime.PythonDLL = @"C:\\Python37\python37.dll"; string pathToVirtualEnv = @"C:\envs\venv2"; string pathToPython = @"C:\Python37\"; Environment.SetEnvironmentVariable("PATH", pathToPython, EnvironmentVariableTarget.Process); Environment.SetEnvironmentVariable("PYTHONHOME", pathToVirtualEnv, EnvironmentVariableTarget.Process); Environment.SetEnvironmentVariable("PYTHONPATH", $"{pathToVirtualEnv}\\Lib\\site-packages;{pathToVirtualEnv}\\Lib", EnvironmentVariableTarget.Process); PythonEngine.PythonHome = pathToVirtualEnv; PythonEngine.PythonPath = Environment.GetEnvironmentVariable("PYTHONPATH", EnvironmentVariableTarget.Process); PythonEngine.Initialize(); scope = Py.CreateScope(); PythonEngine.BeginAllowThreads(); } public void Execute() { using (Py.GIL()) { }}}} Error: Fatal Python error: initfsencoding: unable to load the file system codec ModuleNotFoundError: No module named 'encodings' | You may need to try this way of setting up the PythonEngine.PythonPath: string pathToVirtualEnv = /path/to/venv/; var path = Environment.GetEnvironmentVariable("PATH").TrimEnd(';'); path = string.IsNullOrEmpty(path) ? pathToVirtualEnv : path + ";" + pathToVirtualEnv; Environment.SetEnvironmentVariable("PATH", path, EnvironmentVariableTarget.Process); Environment.SetEnvironmentVariable("PATH", pathToVirtualEnv, EnvironmentVariableTarget.Process); Environment.SetEnvironmentVariable("PYTHONHOME", pathToVirtualEnv, EnvironmentVariableTarget.Process); Environment.SetEnvironmentVariable("PYTHONPATH", $"{pathToVirtualEnv}\\Lib\\site-packages;{pathToVirtualEnv}\\Lib", EnvironmentVariableTarget.Process); PythonEngine.PythonHome = pathToVirtualEnv; PythonEngine.PythonPath = PythonEngine.PythonPath + ";" + Environment.GetEnvironmentVariable("PYTHONPATH", EnvironmentVariableTarget.Process); As you can see above, you can reference PythonPath while assigning a new value. | 6 | 2 |
72,756,419 | 2022-6-25 | https://stackoverflow.com/questions/72756419/mypy-incompatible-type-for-virtual-class-inheritance | Demo code #!/usr/bin/env python3 from abc import ABCMeta, abstractmethod class Base(metaclass = ABCMeta): @classmethod def __subclasshook__(cls, subclass): return ( hasattr(subclass, 'x') ) @property @abstractmethod def x(self) -> float: raise NotImplementedError class Concrete: x: float = 1.0 class Application: def __init__(self, obj: Base) -> None: print(obj.x) ob = Concrete() app = Application(ob) print(issubclass(Concrete, Base)) print(isinstance(Concrete, Base)) print(type(ob)) print(Concrete.__mro__) python test_typing.py returns: 1.0 True False <class '__main__.Concrete'> (<class '__main__.Concrete'>, <class 'object'>) and mypy test_typing.py returns: test_typing.py:30: error: Argument 1 to "Application" has incompatible type "Concrete"; expected "Base" Found 1 error in 1 file (checked 1 source But if i change the line class Concrete: to class Concrete(Base):, i get for python test_typing.py this: 1.0 True False <class '__main__.Concrete'> (<class '__main__.Concrete'>, <class '__main__.Base'>, <class 'object'>) and for mypy test_typing.py this: Success: no issues found in 1 source file If i add to my code this: reveal_type(Concrete) reveal_type(Base) i get in both cases the same results for it from mypy test_typing.py: test_typing.py:37: note: Revealed type is "def () -> vmc.test_typing.Concrete" test_typing.py:38: note: Revealed type is "def () -> vmc.test_typing.Base" Conclusion Seems obvious, that MyPi have some problems with virtual base classes but non-virtual inheritance seems working as expected. Question How works MyPy's type estimation in these cases? Is there an workaround? 2nd Demo code Using Protocol pattern: #!/usr/bin/env python3 from abc import abstractmethod from typing import Protocol, runtime_checkable @runtime_checkable class Base(Protocol): @property def x(self) -> float: raise NotImplementedError @abstractmethod def __init__(self, x: float) -> None: raise NotImplementedError """ @classmethod def test(self) -> None: pass """ class Concrete: x: float = 1.0 class Application: def __init__(self, obj: Base) -> None: pass ob = Concrete() app = Application(ob) Pros Working with mypy: Success: no issues found in 1 source file Working with isinstance(Concrete, Base) : True Cons Not working with issubclass(Concrete, Base): TypeError: Protocols with non-method members don't support issubclass() Not checking the __init__ method signatures: __init__(self, x: float) -> None vs. __init__(self) -> None (Why returns inspect.signature() the strings (self, *args, **kwargs) and (self, /, *args, **kwargs) here? 
With class Base: instead of class Base(Protocol): i get (self, x: float) -> None and (self, /, *args, **kwargs)) ignoring the difference between @abstractmethod and @classmethod (treats ANY method as abstract) 3rd Demo code This time just an more complex example of the 1st code: #!/usr/bin/env python3 from abc import ABCMeta, abstractmethod from inspect import signature class Base(metaclass = ABCMeta): @classmethod def __subclasshook__(cls, subclass): return ( hasattr(subclass, 'x') and (signature(subclass.__init__) == signature(cls.__init__)) ) @property @abstractmethod def x(self) -> float: raise NotImplementedError @abstractmethod def __init__(self, x: float) -> None: raise NotImplementedError @classmethod def test(self) -> None: pass class Concrete: x: float = 1.0 def __init__(self, x: float) -> None: pass class Application: def __init__(self, obj: Base) -> None: pass ob = Concrete(1.0) app = Application(ob) Pros Working with issubclass(Concrete, Base): True Working with isinstance(Concrete, Base): False Method signature check also for __init__. Cons Not working with MyPy: test_typing.py:42: error: Argument 1 to "Application" has incompatible type "Concrete"; expected "Base" Found 1 error in 1 file (checked 1 source file) 4th Demo code In some circumstances the following code might be an possible solution. #!/usr/bin/env python3 from typing import Protocol, runtime_checkable from dataclasses import dataclass @runtime_checkable class Rotation(Protocol): @property def x(self) -> float: raise NotImplementedError @property def y(self) -> float: raise NotImplementedError @property def z(self) -> float: raise NotImplementedError @property def w(self) -> float: raise NotImplementedError @dataclass class Quaternion: x: float = 0.0 y: float = 0.0 z: float = 0.0 w: float = 1.0 def conjugate(self) -> 'Quaternion': return type(self)( x = -self.x, y = -self.y, z = -self.z, w = self.w ) class Application: def __init__(self, rot: Rotation) -> None: print(rot) q = Quaternion(0.7, 0.0, 0.7, 0.0) app = Application(q.conjugate()) Pros: Auto-generated __init__ method because of @dataclass usage. here: (self, x: float = 0.0, y: float = 0.0, z: float = 0.0, w: float = 1.0) -> None Works with isinstance(): True Works with mypy: Success: no issues found in 1 source file Cons: You need to hope, that the next developer uses @dataclass along with implementing your interface.. Not usable for __init__ methods, that are not only taken class attributes. Tipp: If an forced __init__ method is not required and only want to take care of the attributes, then just omit @dataclass. 
5th Demo code Updated the 4th code to provide more safety, but without implicit __init__ method: #!/usr/bin/env python3 from abc import abstractmethod from typing import Protocol, runtime_checkable @runtime_checkable class Rotation(Protocol): @property @abstractmethod def x(self) -> float: raise NotImplementedError @property @abstractmethod def y(self) -> float: raise NotImplementedError @property @abstractmethod def z(self) -> float: raise NotImplementedError @property @abstractmethod def w(self) -> float: raise NotImplementedError class Quaternion: _x: float = 0.0 _y: float = 0.0 _z: float = 0.0 _w: float = 1.0 @property def x(self) -> float: return self._x @property def y(self) -> float: return self._y @property def z(self) -> float: return self._z @property def w(self) -> float: return self._w def __init__(self, x: float, y: float, z: float, w: float) -> None: self._x = float(x) self._y = float(y) self._z = float(z) self._w = float(w) def conjugate(self) -> 'Quaternion': return type(self)( x = -self.x, y = -self.y, z = -self.z, w = self.w ) def __str__(self) -> str: return ", ".join( ( str(self._x), str(self._y), str(self._z), str(self._w) ) ) def __repr__(self) -> str: cls = self.__class__ module = cls.__module__ return f"{module + '.' if module != '__main__' else ''}{cls.__qualname__}({str(self)})" class Application: def __init__(self, rot: Rotation) -> None: print(rot) q = Quaternion(0.7, 0.0, 0.7, 0.0) app = Application(q.conjugate()) Current conclusion The Protocol way is unstable. But the Metaclass way is not checkable, because it's not working with MyPy (because it's not static). Updated question Are there any alternative solutions to achieve some type of Interfaces (without class Concrete(Base)) AND make it type-safe (checkable)? | Result After running some tests and more research i am sure, that the actual problem is the behaviour of Protocol to silently overwrite the defined __init__ method. Conclusion Seems logical, since Protocols are not intended to be initiated. But sometimes it's required to define an __init__ method, because in my opinion __init__ methods are also part of the interface of classes and it's objects. Solution I found an existing issue about this problem, that seems to confirm my point of view: https://github.com/python/cpython/issues/88970 Fortunately it's already fixed: https://github.com/python/cpython/commit/5f2abae61ec69264b835dcabe2cdabe57b9a990e But unfortunately, this fix will only be part of Python 3.11 and above. Currenty is Python 3.10.5 available. WARNING: Like mentioned in the issue, some static type checkers might behave different in this case. MyPy just ignores the missing __init__ method (tested it, confirmed) BUT Pyright seems to detect and report the missing __init__ method (not tested by me). | 6 | 0 |
72,753,255 | 2022-6-25 | https://stackoverflow.com/questions/72753255/how-to-detect-the-amount-of-almost-repetition-in-a-text-file | I am a programming teacher, and I would like to write a script that detects the amount of repetition in a C/C++/Python file. I guess I can treat any file as pure text. The script's output would be the number of similar sequences that repeat. Eventually, I am only interested in a DRY's metric (how much the code satisfied the DRY principle). Naively I tried to do a simple autocorrelation but it would be hard to find the proper threshold. u = open("find.c").read() v = [ord(x) for x in u] y = np.correlate(v, v, mode="same") y = y[: int(len(y) / 2)] x = range(len(y)) z = np.polyval(np.polyfit(x, y, 3), x) f = (y - z)[: -5] plt.plot(f) plt.show(); So I am looking at different strategies... I also tried to compare the similarities between each line, each group of 2 lines, each group of 3 lines ... import difflib import numpy as np lines = open("b.txt").readlines() lines = [line.strip() for line in lines] n = 3 d = [] for i in range(len(lines)): a = lines[i:i+n] for j in range(len(lines)): b = lines[j:j+n] if i == j: continue # skip same line group_size = np.sum([len(x) for x in a]) if group_size < 5: continue # skip short lines ratio = 0 for u, v in zip(a, b): r = difflib.SequenceMatcher(None, u, v).ratio() ratio += r if r > 0.7 else 0 d.append(ratio) dry = sum(d) / len(lines) In the following, we can identify some repetition at a glance: w = int(len(d) / 100) e = np.convolve(d, np.ones(w), "valid") / w * 10 plt.plot(range(len(d)), d, range(len(e)), e) plt.show() Why not using: d = np.exp(np.array(d)) Thus, difflib module looks promising, the SequenceMatcher does some magic (Levenshtein?), but I would need some magic constants as well (0.7)... However, this code is > O(n^2) and runs very slowly for long files. What is funny is that the amount of repetition is quite easily identified with attentive eyes (sorry to this student for having taken his code as a good bad example): I am sure there is a more clever solution out there. Any hint? | I would build a system based on compressibility, because that is essentially what things being repeated means. Modern compression algorithms are already looking for how to reduce repetition, so let's piggy back on that work. Things that are similar will compress well under any reasonable compression algorithm, eg LZ. Under the hood a compression algo is a text with references to itself, which you might be able to pull out. Write a program that feeds lines [0:n] into the compression algorithm, compare it to the output length with [0:n+1]. When you see the incremental length of the compressed output increases by a lot less than the incremental input, you note down that you potentially have a DRY candidate at that location, plus if you can figure out the format, you can see what previous text it was deemed similar to. If you can figure out the compression format, you don't need to rely on the "size doesn't grow as much" heuristic, you can just pull out the references directly. If needed, you can find similar structures with different names by pre-processing the input, for instance by normalizing the names. However I foresee this getting a bit messy, so it's a v2 feature. Pre-processing can also be used to normalize the formatting. | 9 | 4 |
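To make the compressibility idea from the answer above concrete, here is a rough sketch of my own (it uses zlib/DEFLATE as the LZ-style compressor and an arbitrary threshold of 0.3; both are assumptions, not part of the original answer):

```python
import zlib

def repetition_report(path, level=9, threshold=0.3):
    """Grow the text line by line and watch how much each new line adds to the
    compressed size. Lines that add much less than their raw length are flagged
    as potential DRY violations; the final metric is the fraction of raw bytes
    that compression "absorbed" (higher means more repetition)."""
    with open(path, encoding="utf-8", errors="replace") as f:
        lines = f.readlines()
    candidates = []
    seen = b""
    prev = len(zlib.compress(seen, level))
    total_added, total_raw = 0, 0
    for lineno, line in enumerate(lines, start=1):
        seen += line.encode("utf-8", errors="replace")
        cur = len(zlib.compress(seen, level))
        added, raw = cur - prev, max(len(line.encode()), 1)
        total_added += added
        total_raw += raw
        if raw > 10 and added / raw < threshold:
            candidates.append(lineno)
        prev = cur
    dry_metric = 1 - total_added / max(total_raw, 1)
    return dry_metric, candidates

# Example: score, suspicious_lines = repetition_report("find.c")
```

Recompressing the prefix on every line is quadratic-ish, but for a single student submission it is far cheaper than the pairwise difflib loop in the question.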
72,790,002 | 2022-6-28 | https://stackoverflow.com/questions/72790002/improve-performance-of-combinations | Hey guys I have a script that compares each possible user and checks how similar their text is: dictionary = { t.id: ( t.text, t.set, t.compare_string ) for t in dataframe.itertuples() } highly_similar = [] for a, b in itertools.combinations(dictionary.items(), 2): if a[1][2] == b[1][2] and not a[1][1].isdisjoint(b[1][1]): similarity_score = fuzz.ratio(a[1][0], b[1][0]) if (similarity_score >= 95 and len(a[1][0]) >= 10) or similarity_score == 100: highly_similar.append([a[0], b[0], a[1][0], b[1][0], similarity_score]) This script takes around 15 minutes to run, the dataframe contains 120k users, so comparing each possible combination takes quite a bit of time, if I just write pass on the for loop it takes 2 minutes to loop through all values. I tried using filter() and map() for the if statements and fuzzy score but the performance was worse. I tried improving the script as much as I could but I don't know how I can improve this further. Would really appreciate some help! | It is slightly complicated to reason about the data since you have not attached it, but we can see multiple places that might provide an improvement: First, let's rewrite the code in a way which is easier to reason about than using the indices: dictionary = { t.id: ( t.text, t.set, t.compare_string ) for t in dataframe.itertuples() } highly_similar = [] for a, b in itertools.combinations(dictionary.items(), 2): a_id, (a_text, a_set, a_compre_string) = a b_id, (b_text, b_set, b_compre_string) = b if (a_compre_string == b_compre_string and not a_set.isdisjoint(b_set)): similarity_score = fuzz.ratio(a_text, b_text) if (similarity_score >= 95 and len(a_text) >= 10) or similarity_score == 100): highly_similar.append( [a_id, b_id, a_text, b_text, similarity_score]) You seem to only care about pairs having the same compare_string values. Therefore, and assuming this is not something that all pairs share, we can key by whatever that value is to cover much less pairs. To put some numbers into it, let's say you have 120K inputs, and 1K values for each value of val[1][2] - then instead of covering 120K * 120K = 14 * 10^9 combinations, you would have 120 bins of size 1K (where in each bin we'd need to check all pairs) = 120 * 1K * 1K = 120 * 10^6 which is about 1000 times faster. And it would be even faster if each bin has less than 1K elements. import collections # Create a dictionary from compare_string to all items # with the same compare_string items_by_compare_string = collections.defaultdict(list) for item in dictionary.items(): compare_string = item[1][2] items_by_compare_string[compare_string].append(items) # Iterate over each group of items that have the same # compare string for item_group in items_by_compare_string.values(): # Check pairs only within that group for a, b in itertools.combinations(item_group, 2): # No need to compare the compare_strings! if not a_set.isdisjoint(b_set): similarity_score = fuzz.ratio(a_text, b_text) if (similarity_score >= 95 and len(a_text) >= 10) or similarity_score == 100): highly_similar.append( [a_id, b_id, a_text, b_text, similarity_score]) But, what if we want more speed? 
Let's look at the remaining operations:
- We have a check to find if two sets share at least one item. This seems like an obvious candidate for optimization if we have any knowledge about these sets (to allow us to determine which pairs are even relevant to compare). Without additional knowledge, and just looking at every two pairs and trying to speed this up, I doubt we can do much - this is probably highly optimized using internal details of Python sets, so I don't think it's likely we can optimize it further.
- We have a fuzz.ratio computation, which is some external function that I'm going to assume is heavy. If you are using this from the FuzzyWuzzy package, make sure to install python-Levenshtein to get the speedups detailed here.
- We have some comparisons which we are unlikely to be able to speed up. We might be able to cache the length of a_text by nesting the two loops, but that's negligible.
- We have appends to a list, which runs on average ("amortized") constant time per operation, so we can't really speed that up.
Therefore, I don't think we can reasonably suggest any more speedups without additional knowledge. If we know something about the sets that can help optimize which pairs are relevant we might be able to speed things up further, but I think this is about it.
EDIT: As pointed out in other answers, you can obviously run the code in multi-threading. I assumed you were looking for an algorithmic change that would possibly reduce the number of operations significantly, instead of just splitting these over more CPUs. | 5 | 4 |
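For completeness, a self-contained sketch of the bucketing approach described in the answer above, assuming the same dictionary layout as the question (id mapped to a (text, set, compare_string) tuple) and FuzzyWuzzy's fuzz.ratio:

```python
import itertools
from collections import defaultdict
from fuzzywuzzy import fuzz  # install python-Levenshtein for the C speedups

def find_highly_similar(dictionary):
    """Only compare pairs that share the same compare_string value."""
    buckets = defaultdict(list)
    for item_id, (text, id_set, compare_string) in dictionary.items():
        buckets[compare_string].append((item_id, text, id_set))

    highly_similar = []
    for group in buckets.values():
        for (a_id, a_text, a_set), (b_id, b_text, b_set) in itertools.combinations(group, 2):
            if a_set.isdisjoint(b_set):
                continue
            score = fuzz.ratio(a_text, b_text)
            if (score >= 95 and len(a_text) >= 10) or score == 100:
                highly_similar.append([a_id, b_id, a_text, b_text, score])
    return highly_similar
```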
72,783,608 | 2022-6-28 | https://stackoverflow.com/questions/72783608/creating-tensorflow-dataset-for-mulitple-time-series | I have a multiple time series data that looks something like this: df = pd.DataFrame({'Time': np.tile(np.arange(5), 2), 'Object': np.concatenate([[i] * 5 for i in [1, 2]]), 'Feature1': np.random.randint(10, size=10), 'Feature2': np.random.randint(10, size=10)}) Time Object Feature1 Feature2 0 0 1 3 3 1 1 1 9 2 2 2 1 6 6 3 3 1 4 0 4 4 1 7 7 5 0 2 4 8 6 1 2 3 7 7 2 2 1 1 8 3 2 7 5 9 4 2 1 7 where each object (1 and 2) has its own data (about 2000 objects in real data). I would like to feed this data chunkwise into RNN/LSTM using tf.data.Dataset.window in a way that different objects data don't come in one window like in this example: dataset = tf.data.Dataset.from_tensor_slices(df) for w in dataset.window(3, shift=1, drop_remainder=True): print(list(w.as_numpy_iterator())) Output: [array([0, 1, 3, 3]), array([1, 1, 9, 2]), array([2, 1, 6, 6])] [array([1, 1, 9, 2]), array([2, 1, 6, 6]), array([3, 1, 4, 0])] [array([2, 1, 6, 6]), array([3, 1, 4, 0]), array([4, 1, 7, 7])] [array([3, 1, 4, 0]), array([4, 1, 7, 7]), array([0, 2, 4, 8])] # Mixed data from both objects [array([4, 1, 7, 7]), array([0, 2, 4, 8]), array([1, 2, 3, 7])] # Mixed data from both objects [array([0, 2, 4, 8]), array([1, 2, 3, 7]), array([2, 2, 1, 1])] [array([1, 2, 3, 7]), array([2, 2, 1, 1]), array([3, 2, 7, 5])] [array([2, 2, 1, 1]), array([3, 2, 7, 5]), array([4, 2, 1, 7])] Expected output: [array([0, 1, 3, 3]), array([1, 1, 9, 2]), array([2, 1, 6, 6])] [array([1, 1, 9, 2]), array([2, 1, 6, 6]), array([3, 1, 4, 0])] [array([2, 1, 6, 6]), array([3, 1, 4, 0]), array([4, 1, 7, 7])] [array([0, 2, 4, 8]), array([1, 2, 3, 7]), array([2, 2, 1, 1])] [array([1, 2, 3, 7]), array([2, 2, 1, 1]), array([3, 2, 7, 5])] [array([2, 2, 1, 1]), array([3, 2, 7, 5]), array([4, 2, 1, 7])] Maybe there is another way to do it. The main requirement that my model should see that non-mixed data chunks come from different objects (maybe via embedding). | Hmm, maybe just create two separate dataframes and then concatenate after windowing. 
That way, you will not have any overlapping: import tensorflow as tf import pandas as pd import numpy as np df = pd.DataFrame({'Time': np.tile(np.arange(5), 2), 'Object': np.concatenate([[i] * 5 for i in [1, 2]]), 'Feature1': np.random.randint(10, size=10), 'Feature2': np.random.randint(10, size=10)}) df1 = df[df['Object'] == 1] df2 = df[df['Object'] == 2] dataset = tf.data.Dataset.from_tensor_slices(df1).window(3, shift=1, drop_remainder=True).concatenate(tf.data.Dataset.from_tensor_slices(df2).window(3, shift=1, drop_remainder=True)) for w in dataset: print(list(w.as_numpy_iterator())) [array([0, 1, 3, 3]), array([1, 1, 9, 2]), array([2, 1, 6, 6])] [array([1, 1, 9, 2]), array([2, 1, 6, 6]), array([3, 1, 4, 0])] [array([2, 1, 6, 6]), array([3, 1, 4, 0]), array([4, 1, 7, 7])] [array([0, 2, 4, 8]), array([1, 2, 3, 7]), array([2, 2, 1, 1])] [array([1, 2, 3, 7]), array([2, 2, 1, 1]), array([3, 2, 7, 5])] [array([2, 2, 1, 1]), array([3, 2, 7, 5]), array([4, 2, 1, 7])] Update 1: Another approach would be to use tf.data.Dataset.filter like this: import tensorflow as tf import pandas as pd import numpy as np df = pd.DataFrame({'Time': np.tile(np.arange(5), 2), 'Object': np.concatenate([[i] * 5 for i in [1, 2]]), 'Feature1': np.random.randint(10, size=10), 'Feature2': np.random.randint(10, size=10)}) objects = df['Object'].unique() dataset = tf.data.Dataset.from_tensor_slices(df) new_dataset = None for o in objects: temp_dataset = dataset.filter(lambda x: tf.math.equal(x[1], tf.constant(o))).window(3, shift=1, drop_remainder=True) if new_dataset: new_dataset = new_dataset.concatenate(temp_dataset) else: new_dataset = temp_dataset for w in new_dataset: print(list(w.as_numpy_iterator())) Update 2: Yet another option would be to exclude / drop overlapping sequences. This way you can flexibly decide what to do with the overlaps: import tensorflow as tf import pandas as pd import numpy as np df = pd.DataFrame({'Time': np.tile(np.arange(5), 2), 'Object': np.concatenate([[i] * 5 for i in [1, 2]]), 'Feature1': np.random.randint(10, size=10), 'Feature2': np.random.randint(10, size=10)}) dataset = tf.data.Dataset.from_tensor_slices(df).window(3, shift=1, drop_remainder=True).flat_map(lambda x: x.batch(3)).filter(lambda y: tf.reduce_all(tf.unique(y[..., 1])[1] == 0)) for w in dataset: print(w) | 7 | 3 |
72,781,400 | 2022-6-28 | https://stackoverflow.com/questions/72781400/getting-black-python-code-formatter-to-align-comments | Yes, I'm, of the understanding that black gives very little leeway in getting it to act differently but I was wondering about the best way to handle something like this (my original code): @dataclass class Thing1: property1: int # The first property. property2: typing.List[int] # This is the second property # and the comment crosses multiple lines. Now, when I run that through black, it gives me: @dataclass class Thing1: property1: int # The first property. property2: typing.List[int] # This is the second property # and the comment crosses multiple lines. which is not really laid out as I'd like. Is there a way to get black to keep the comments lined up? I don't care which column it starts in per field, but it would be nice to return to the readable nature, with the comments within each field being lined up: @dataclass class Thing1: property1: int # The first property. property2: typing.List[int] # This is the second property # and the comment crosses multiple lines. If there's no way to do this, I'll probably either place multi-line comments before each field, or ensure all comments fit on the one line and add lengthier explanatory notes at the end of the dataclass definition: @dataclass class Thing1: property1: int # The first property. # This is the second property and the comment, while it can # have more characters, still crosses multiple lines. property2: typing.List[int] @dataclass class Thing2: property1: int # The first property. property2: typing.List[int] # This is the second property. # Note that the comments above now fit on the same line as the # field, and this comment here is meant to provide any needed # multi-line detail for ALL fields in this class. That at least is still somewhat local to the data fields. But I'd rather find a way to have it line up comments similar to what I originally had, if possible. Happy to entertain any suggestions. | You can wrap your block with # fmt: on/off, so Black doesn't touch it. # fmt: off @dataclass class Thing1: property1: int # The first property. property2: typing.List[int] # This is the second property # and the comment crosses multiple lines. # fmt: on I usually prefer to relocate the comments and stick with default Black formatting. | 6 | 7 |
72,793,916 | 2022-6-28 | https://stackoverflow.com/questions/72793916/how-can-i-import-watershed-function-from-scikit-image | Using Napari Image Analysis GUI to run the Allen Cell Segmenter (no response in Napari github or Allen Cell Forum, thought I'd try here) and getting the following error when I attempt to run the watershed for cutting function: ImportError: cannot import name 'watershed' from 'skimage.morphology' (C:\Users\Murryadmin\anaconda3\envs\napari-env\lib\site-packages\skimage\morphology_init_.py) c:\users\murryadmin\anaconda3\envs\napari-env\lib\site-packages\aicssegmentation\core\utils.py(449)watershed_wrapper() -> from skimage.morphology import watershed, dilation, ball Anyone have any potential fixes for this? Thanks | watershed was moved from skimage.morphology to skimage.segmentation in version 0.17. There was a pointer from morphology to the new function in segmentation in 0.17 and 0.18, but it was removed in 0.19. The Allen Cell Segmenter needs to be updated to match the more modern scikit-image version, so I would raise an issue in their GitHub repository if I were you. Downgrading to scikit-image 0.18 could fix the Allen Cell Segmenter itself, but unfortunately napari requires 0.19+. | 4 | 9 |
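If you just need the import in your own code (separate from fixing the Allen Cell Segmenter plugin itself), a minimal sketch that works on scikit-image 0.19+ looks like this (the coins example data and marker settings are just for a quick sanity check):

```python
from skimage import data
from skimage.filters import sobel
from skimage.segmentation import watershed  # lives here since 0.17; removed from morphology in 0.19

# Quick sanity check that the new import path works:
image = data.coins()
labels = watershed(sobel(image), markers=250, compactness=0.001)
print(labels.shape, labels.max())
```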
72,791,043 | 2022-6-28 | https://stackoverflow.com/questions/72791043/how-would-i-fix-the-issue-of-the-python-extension-loading-and-extension-activati | I keep on getting these messages on the bottom right corner of my screen when opening up VS code. Any idea on how to get rid of it? I can still write code and run the code fine but I don't understand why this is happening. I've tried by deleting the python extension and anything related to python in the video extensions tab, but no luck. | Please update the python extension to the latest version or install the pre-release version directly. ( may be more useful to you ) this will basically solve your problem. If the error continues, follow these steps: Uninstall Python extension (if you have pylance uninstall it first). Uninstall any other extension that is failing. Close all instances of VS Code. Go to, %USERPROFILE%/.vscode/extensions (on windows) or ~/.vscode/extensions on Linux/Mac. Delete any folder with the name starting with ms-python.python* Start VS Code, and install Python extension (also pylance if you uninstalled it in step 1). | 5 | 17 |
72,793,974 | 2022-6-28 | https://stackoverflow.com/questions/72793974/groupby-column-and-create-lists-for-other-columns-preserving-order | I have a PySpark dataframe which looks like this: Id timestamp col1 col2 abc 789 0 1 def 456 1 0 abc 123 1 0 def 321 0 1 I want to group by or partition by ID column and then the lists for col1 and col2 should be created based on the order of timestamp. Id timestamp col1 col2 abc [123,789] [1,0] [0,1] def [321,456] [0,1] [1,0] My approach: from pyspark.sql import functions as F from pyspark.sql import Window as W window_spec = W.partitionBy("id").orderBy('timestamp') ranged_spec = window_spec.rowsBetween(W.unboundedPreceding, W.unboundedFollowing) df1 = df.withColumn("col1", F.collect_list("reco").over(window_spec))\ .withColumn("col2", F.collect_list("score").over(window_spec))\ df1.show() But this is not returning list of col1 and col2. | I don't think the order can be reliably preserved using groupBy aggregations. So window functions seems to be the way to go. Setup: from pyspark.sql import functions as F, Window as W df = spark.createDataFrame( [('abc', 789, 0, 1), ('def', 456, 1, 0), ('abc', 123, 1, 0), ('def', 321, 0, 1)], ['Id', 'timestamp', 'col1', 'col2']) Script: w1 = W.partitionBy('Id').orderBy('timestamp') w2 = W.partitionBy('Id').orderBy(F.desc('timestamp')) df = df.select( 'Id', *[F.collect_list(c).over(w1).alias(c) for c in df.columns if c != 'Id'] ) df = (df .withColumn('_rn', F.row_number().over(w2)) .filter('_rn=1') .drop('_rn') ) Result: df.show() # +---+----------+------+------+ # | Id| timestamp| col1| col2| # +---+----------+------+------+ # |abc|[123, 789]|[1, 0]|[0, 1]| # |def|[321, 456]|[0, 1]|[1, 0]| # +---+----------+------+------+ You were also very close to what you needed. I've played around and this seems to be working too: window_spec = W.partitionBy("Id").orderBy('timestamp') ranged_spec = window_spec.rowsBetween(W.unboundedPreceding, W.unboundedFollowing) df1 = (df .withColumn("timestamp", F.collect_list("timestamp").over(ranged_spec)) .withColumn("col1", F.collect_list("col1").over(ranged_spec)) .withColumn("col2", F.collect_list("col2").over(ranged_spec)) ).drop_duplicates() df1.show() | 4 | 5 |
72,781,500 | 2022-6-28 | https://stackoverflow.com/questions/72781500/trouble-getting-accurate-binary-image-opencv | Using the threshold functions in open CV on an image to get a binary image, with Otsu's thresholding I get a image that has white spots due to different lighting conditions in parts of the image or with adaptive threshold to fix the lighting conditions, it fails to accurately represent the pencil-filled bubbles that Otsu actually can represent. How can I get both the filled bubbles represented and a fixed lighting conditions without patches? Here's the original image Here is my code #binary image conversion thresh2 = cv2.adaptiveThreshold(papergray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 21, 13) thresh = cv2.threshold(papergray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1] cv2.imshow("Binary", thresh) #Otsu's cv2.imshow("Adpative",thresh2) | Problems with your approach: The methods you have tried out: Otsu threshold is decided based on all the pixel values in the entire image (global technique). If you look at the bottom-left of your image, there is a gray shade which can have an adverse effect in deciding the threshold value. Adaptive threshold: here is a recent answer on why it isn't helpful. In short, it acts like an edge detector for smaller kernel sizes What you can try: OpenCV's ximgproc module has specialized binary image generation methods. One such method is the popular Niblack threshold technique. This is a local threshold technique that depends on statistical measures. It divides the image into blocks (sub-images) of size predefined by the user. A threshold is set based on the mean minus k times standard deviation of pixel values for each block. The k is decided by the user. Code: img =cv2.imread('omr_sheet.jpg') blur = cv2.GaussianBlur(img, (3, 3), 0) gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY) niblack = cv2.ximgproc.niBlackThreshold(gray, 255, cv2.THRESH_BINARY, 41, -0.1, binarizationMethod=cv2.ximgproc.BINARIZATION_NICK) Result: Links: To know more about cv2.ximgproc.niBlackThreshold There are other binarization techniques available that you may want to explore. It also contains links to research papers that explain each of these techniques on detail. Edit: Adaptive threshold actually works if you know what you are working with. You can decide the kernel size beforehand. See Prashant's answer. | 3 | 4 |
72,791,487 | 2022-6-28 | https://stackoverflow.com/questions/72791487/error-when-installing-microsoft-playwright | I'm getting this error: At line:1 char:1 + playwright install + ~~~~~~~~~~ + CategoryInfo : ObjectNotFound: (playwright:String) [],CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException I'm installing it using pip, for use in python | You need playwright added to your PATH. However, a better way to do this (if python is added to your path) without adding it to your PATH is by running: python -m playwright install This runs the playwright module as a script. Use python -h for more information on these flags, and python -m playwright for more information on the flags supported specifically by playwright | 3 | 17 |
72,712,965 | 2022-6-22 | https://stackoverflow.com/questions/72712965/does-the-src-folder-in-pypi-packaging-have-a-special-meaning-or-is-it-only-a-co | I'm learning how to package Python projects for PyPI according to the tutorial (https://packaging.python.org/en/latest/tutorials/packaging-projects/). For the example project, they use the folder structure: packaging_tutorial/ βββ LICENSE βββ pyproject.toml βββ README.md βββ src/ β βββ example_package_YOUR_USERNAME_HERE/ β βββ __init__.py β βββ example.py βββ tests/ I am just wondering why the src/ folder is needed? Does it serve a particular purpose? Could one instead include the package directly in the top folder? E.g. would packaging_tutorial/ βββ LICENSE βββ pyproject.toml βββ README.md βββ example_package_YOUR_USERNAME_HERE/ β βββ __init__.py β βββ example.py βββ tests/ have any disadvantages or cause complications? | There is an interesting blog post about this topic; basically, using src prevents that when running tests from within the project directory, the package source folder gets imported instead of the installed package (and tests should always run against installed packages, so that the situation is the same as for a user). Consider the following example project where the name of the package under development is mypkg. It contains an __init__.py file and another DATA.txt non-code resource: . βββ mypkg β βββ DATA.txt β βββ __init__.py βββ pyproject.toml βββ setup.cfg βββ test βββ test_data.py Here, mypkg/__init__.py accesses the DATA.txt resource and loads its content: from importlib.resources import read_text data = read_text('mypkg', 'DATA.txt').strip() # The content is 'foo'. The script test/test_data.py checks that mypkg.data actually contains 'foo': import mypkg def test(): assert mypkg.data == 'foo' Now, running coverage run -m pytest from within the base directory gives the impression that everything is alright with the project: $ coverage run -m pytest [...] test/test_data.py . [100%] ========================== 1 passed in 0.01s ========================== However, there's a subtle issue. Running coverage run -m pytest invokes pytest via python -m pytest, i.e. using the -m switch. This has a "side effect", as mentioned in the docs: [...] As with the -c option, the current directory will be added to the start of sys.path. [...] This means that when importing mypkg in test/test_data.py, it didn't import the installed version but it imported the package from the source tree in mypkg instead. Now, let's further assume that we forgot to include the DATA.txt resource in our project specification (after all, there is no MANIFEST.in). So this file is actually not included in the installed version of mypkg (installation e.g. via python -m pip install .). This is revealed by running pytest directly: $ pytest [...] ======================= short test summary info ======================= ERROR test/test_data.py - FileNotFoundError: [Errno 2] No such file ... !!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!! ========================== 1 error in 0.13s =========================== Hence, when using coverage the test passed despite the installation of mypkg being broken. The test didn't capture this as it was run against the source tree rather than the installed version. If we had used a src directory to contain the mypkg package, then adding the current working directory via -m would have caused no problems, as there is no package mypkg in the current working directory anymore. 
But in the end, using src is not a requirement but more of a convention/best practice. For example, requests doesn't use src and it still manages to be a popular and successful project. | 20 | 18 |
72,790,215 | 2022-6-28 | https://stackoverflow.com/questions/72790215/sqlalchemy-2-x-with-specific-columns-makes-scalars-return-non-orm-objects | This question is probably me not understanding architecture of (new) sqlalchemy, typically I use code like this: query = select(models.Organization).where( models.Organization.organization_id == organization_id ) result = await self.session.execute(query) return result.scalars().all() Works fine, I get a list of models (if any). With a query with specific columns only: query = ( select( models.Payment.organization_id, models.Payment.id, models.Payment.payment_type, ) .where( models.Payment.is_cleared.is_(True), ) .limit(10) ) result = await self.session.execute(query) return result.scalars().all() I am getting first row, first column only. Same it seems to: https://docs.sqlalchemy.org/en/14/core/connections.html?highlight=scalar#sqlalchemy.engine.Result.scalar My understanding so far was that in new sqlalchemy we should always call scalars() on the query, as described here: https://docs.sqlalchemy.org/en/14/changelog/migration_20.html#migration-orm-usage But with specific columns, it seems we cannot use scalars() at all. What is even more confusing is that result.scalars() returns sqlalchemy.engine.result.ScalarResult that has fetchmany(), fechall() among other methods that I am unable to iterate in any meaningful way. My question is, what do I not understand? | My understanding so far was that in new sqlalchemy we should always call scalars() on the query That is mostly true, but only for queries that return whole ORM objects. Just a regular .execute() query = select(Payment) results = sess.execute(query).all() print(results) # [(Payment(id=1),), (Payment(id=2),)] print(type(results[0])) # <class 'sqlalchemy.engine.row.Row'> returns a list of Row objects, each containing a single ORM object. Users found that awkward since they needed to unpack the ORM object from the Row object. So .scalars() is now recommended results = sess.scalars(query).all() print(results) # [Payment(id=1), Payment(id=2)] print(type(results[0])) # <class '__main__.Payment'> However, for queries that return individual attributes (columns) we don't want to use .scalars() because that will just give us one column from each row, normally the first column query = select( Payment.id, Payment.organization_id, Payment.payment_type, ) results = sess.scalars(query).all() print(results) # [1, 2] Instead, we want to use a regular .execute() so we can see all the columns results = sess.execute(query).all() print(results) # [(1, 123, None), (2, 234, None)] Notes: .scalars() is doing the same thing in both cases: return a list containing a single (scalar) value from each row (default is index=0). sess.scalars() is the preferred construct. It is simply shorthand for sess.execute().scalars(). | 13 | 32 |
72,789,188 | 2022-6-28 | https://stackoverflow.com/questions/72789188/can-i-use-yt-dlp-to-extract-only-one-video-info-from-a-playlist | Here's my code using Python (simplified version): import yt_dlp YDL_OPTIONS = { 'format': 'bestaudio*', 'noplaylist': True, } with yt_dlp.YoutubeDL(YDL_OPTIONS) as ydl: info = ydl.extract_info(url, download=False) The problem comes up when the url directs to a playlist (e.g. https://www.youtube.com/playlist?list=PLlrATfBNZ98dudnM48yfGUldqGD0S4FFb) Here's the output: [youtube:tab] PLlrATfBNZ98dudnM48yfGUldqGD0S4FFb: Downloading webpage [youtube:tab] PLlrATfBNZ98dudnM48yfGUldqGD0S4FFb: Downloading API JSON with unavailable videos [download] Downloading playlist: C++ [youtube:tab] playlist C++: Downloading 100 videos [download] Downloading video 1 of 100 [youtube] 18c3MTX0PK0: Downloading webpage [youtube] 18c3MTX0PK0: Downloading android player API JSON [download] Downloading video 2 of 100 [youtube] 1OsGXuNA5cc: Downloading webpage [youtube] 1OsGXuNA5cc: Downloading android player API JSON [download] Downloading video 3 of 100 [youtube] 1E_kBSka_ec: Downloading webpage [youtube] 1E_kBSka_ec: Downloading android player API JSON ... As you can see, the 'noplaylist' option didn't work in this case. Is there an option or function making ydl only extract one video info, for instance the first one, in the entire playlist? | noplaylist: Download single video instead of a playlist if in doubt. Source: github.com/yt-dlp/yt-dlp What you shared with us isn't a playlist of a single video so, according to me, there isn't any doubt here. You can request yt_dlp to treat only the first video of the playlist by adding 'playlist_items': '1' to YDL_OPTIONS. Then the info concerning the video are in info['entries'][0]. | 5 | 3 |
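Putting that together with the question's options (the playlist URL below is the one from the question; the rest follows yt-dlp's documented options, with 'entries' access as described in the answer above):

```python
import yt_dlp

YDL_OPTIONS = {
    'format': 'bestaudio*',
    'noplaylist': True,        # still useful for watch?v=...&list=... URLs
    'playlist_items': '1',     # only extract the first entry of a real playlist
}

url = 'https://www.youtube.com/playlist?list=PLlrATfBNZ98dudnM48yfGUldqGD0S4FFb'
with yt_dlp.YoutubeDL(YDL_OPTIONS) as ydl:
    info = ydl.extract_info(url, download=False)

# For a playlist URL the result is still a playlist dict; the single
# extracted video is the first (and only) element of info['entries'].
video = next(iter(info['entries'])) if 'entries' in info else info
print(video.get('title'))
```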
72,788,512 | 2022-6-28 | https://stackoverflow.com/questions/72788512/how-to-remove-index-column-when-getting-latex-string-from-pandas-dataframe | Originally, you could simply pass an argument in the to_latex method of the Pandas DataFrame object. Now you get a warning message about signature changes. Example: >>> import pandas as pd >>> import numpy as np >>> data = {f'Column {i + 1}': np.random.randint(0, 10, size=(10, )) for i in range(5)} >>> df = pd.DataFrame(data) >>> df Column 1 Column 2 Column 3 Column 4 Column 5 0 1 8 3 3 5 1 8 6 7 7 3 2 2 1 6 1 1 3 9 7 9 5 5 4 5 4 7 8 9 5 9 5 3 6 2 6 6 9 9 6 8 7 8 7 2 6 5 8 4 9 4 6 2 9 2 6 5 3 0 >>> lat_og = df.to_latex(index=False) <ipython-input-7-986346043a05>:1: FutureWarning: In future versions `DataFrame.to_latex` is expected to utilise the base implementation of `Styler.to_latex` for formatting and rendering. The arguments signature may therefore change. It is recommended instead to use `DataFrame.style.to_latex` which also contains additional functionality. lat_og = df.to_latex(index=False) >>> print(lat_og) \begin{tabular}{rrrrr} \toprule Column 1 & Column 2 & Column 3 & Column 4 & Column 5 \\ \midrule 1 & 8 & 3 & 3 & 5 \\ 8 & 6 & 7 & 7 & 3 \\ 2 & 1 & 6 & 1 & 1 \\ 9 & 7 & 9 & 5 & 5 \\ 5 & 4 & 7 & 8 & 9 \\ 9 & 5 & 3 & 6 & 2 \\ 6 & 9 & 9 & 6 & 8 \\ 8 & 7 & 2 & 6 & 5 \\ 4 & 9 & 4 & 6 & 2 \\ 2 & 6 & 5 & 3 & 0 \\ \bottomrule \end{tabular} You get the desired output with no index column, but I don't want to have to keep using this if it will change, or if I have to continuously import warnings to fix it. The warning message recommends we use the style attribute. How can I use the style attribute to ignore the index column? I read the documentation of the to_latex method associated with the style attribute, but it doesn't have the simple argument as above. Example: >>> lat_new = df.style.to_latex(hrules=True) >>> print(lat_new) \begin{tabular}{lrrrrr} \toprule & Column 1 & Column 2 & Column 3 & Column 4 & Column 5 \\ \midrule 0 & 1 & 8 & 3 & 3 & 5 \\ 1 & 8 & 6 & 7 & 7 & 3 \\ 2 & 2 & 1 & 6 & 1 & 1 \\ 3 & 9 & 7 & 9 & 5 & 5 \\ 4 & 5 & 4 & 7 & 8 & 9 \\ 5 & 9 & 5 & 3 & 6 & 2 \\ 6 & 6 & 9 & 9 & 6 & 8 \\ 7 & 8 & 7 & 2 & 6 & 5 \\ 8 & 4 & 9 & 4 & 6 & 2 \\ 9 & 2 & 6 & 5 & 3 & 0 \\ \bottomrule \end{tabular} The index column is there in the LaTeX. What are some ways we can remove the index column without using the original method? Edit If you come across this issue, know that the pandas developers are planning to deprecate the DataFrame.to_latex() method. They are currently in the process of using the Styler.to_latex() method instead. The signatures as of now are not the same and require additional methods for hiding the index column or escaping latex syntax. See 41649 for more current updates on the process, and see 44411 for the start of the rabbit hole. They plan on having this fixed in pandas 2.0. | It is possible to use the hide() method of the style attribute of a Pandas dataframe. 
The following code will produce a LaTeX table without the values of the index: import pandas as pd import numpy as np data = {f'Column {i + 1}': np.random.randint(0, 10, size=(10, )) for i in range(5)} df = pd.DataFrame(data) lat_new = df.style.hide(axis="index").to_latex(hrules=True) print(lat_new) The result is the following: \begin{tabular}{rrrrr} \toprule Column 1 & Column 2 & Column 3 & Column 4 & Column 5 \\ \midrule 2 & 5 & 9 & 2 & 2 \\ 2 & 6 & 0 & 9 & 5 \\ 3 & 2 & 4 & 3 & 2 \\ 8 & 9 & 3 & 7 & 8 \\ 5 & 9 & 7 & 4 & 4 \\ 0 & 3 & 2 & 2 & 6 \\ 5 & 7 & 7 & 8 & 6 \\ 2 & 2 & 9 & 3 & 3 \\ 6 & 0 & 0 & 9 & 2 \\ 4 & 8 & 7 & 5 & 9 \\ \bottomrule \end{tabular} | 10 | 10 |
72,776,557 | 2022-6-27 | https://stackoverflow.com/questions/72776557/generate-a-list-of-all-possible-patterns-of-letters | I'm trying to find a way to generate all possible "patterns" of length N out of a list of K letters. I've looked at similar questions but they all seem to be asking about combinations, permutations, etc. which is not exactly what I'm after. For example, let K = 3 and N = 2. That is, I want all 2-letter "patterns" that can be made with the letters [A, B, C]. AA is one such pattern. AB is another. And those are the only two. BB and CC are the same as AA, it's just "a letter, and then the same letter." Similarly, BA, BC, AC, etc. are the same as AB, it's just "a letter, and then a different letter." So for this simple case, there are only two patterns, and in fact this illustrates why K must be less than or equal to N (adding additional letters to choose from doesn't change anything). If instead, K = 3, N = 3, then the five possible patterns would be AAA, AAB, ABA, ABB, and ABC. Every other permutation of three letters has a pattern that is identical to one of those five. If K = 2 and N = 3, then there are just four possible patterns: AAA, AAB, ABA, ABB. (ABC is no longer a valid choice because I only have two letters to choose from.) Of course, these examples are trivial to do by hand - I'm trying to create code that will generate all possible patterns for larger values of N and K. This may be more of a pure mathematical question but ultimately I need a Python function that will produce these so I thought I'd try here first to see if anyone knows or can think of an efficient way to do this. | One of the comments, from @JacobRR, was very close to what we need. Each "pattern" actually corresponds to partitioning of set {1, 2, ..., N} into K subsets. Each subset (even empty!) corresponds to the positions where letter l_k should be placed (l_1 = A, l_2 = B etc.). There's a demo here. https://thewebdev.info/2021/10/28/how-to-generate-set-partitions-in-python For example, in case K=3, N=3, the partitions would be {1,2,3}, β
, ∅
{1}, {2, 3}, ∅
{2}, {1, 3}, ∅
{3}, {1, 2}, ∅
{1}, {2}, {3} and for K=2, N=3, it's {1,2,3}, ∅
{1}, {2, 3} {2}, {1, 3} {3}, {1, 2} corresponding exactly to the given examples. This question is also relevant. https://codereview.stackexchange.com/questions/1526/finding-all-k-subset-partitions I also wrote my own naive implementation. import copy N = 3 K = 2 iter = min(N, K) def partitions(S, K): if K == 1: return [[S]] if len(S) == 0: return [[]] result = [] S_new = copy.copy(S) first = S_new.pop(0) if (K-1 <= len(S_new)): p1 = partitions(S_new, K-1) for p in p1: p.append([first]) result.append(p) if (K <= len(S_new)): p2 = partitions(S_new, K) for p in p2: for idx in range(len(p)): p_new = copy.deepcopy(p) p_new[idx].append(first) result.append(p_new) return result for idx in range(1, iter+1): print(partitions([i for i in range(0, N)], idx)) | 4 | 2 |
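To get the letter patterns themselves out of those partitions, a small helper along these lines can be layered on top of the answer's partitions function (my own addition; it labels blocks A, B, C, ... in order of their smallest position, matching the question's canonical form):

```python
import string

def partition_to_pattern(partition, n):
    """Convert one partition of positions {0, ..., n-1} into a letter pattern."""
    letters = [None] * n
    blocks = sorted((block for block in partition if block), key=min)
    for letter, block in zip(string.ascii_uppercase, blocks):
        for pos in block:
            letters[pos] = letter
    return ''.join(letters)

# e.g. the K=3, N=3 partitions listed above, written with 0-based positions:
print(partition_to_pattern([[0, 1, 2]], 3))        # AAA
print(partition_to_pattern([[0, 1], [2]], 3))      # AAB
print(partition_to_pattern([[0, 2], [1]], 3))      # ABA
print(partition_to_pattern([[0], [1, 2]], 3))      # ABB
print(partition_to_pattern([[0], [1], [2]], 3))    # ABC
```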
72,782,410 | 2022-6-28 | https://stackoverflow.com/questions/72782410/building-wheel-for-gevent-pyproject-toml-did-not-run-successfully | I got an error when install dependencies for my project! OS : WinDow 11 Python: 3.10.4 (64bit) Pip: 22.1.2 Building wheel for django-admin-sortable2 (setup.py) ... done Created wheel for django-admin-sortable2: filename=django_admin_sortable2-0.7.5-py3-none-any.whl size=69989 sha256=0a4ff29d0c9b0422611dde61c6c1665dd36b10f98413f4ed7b8532e29afdc03d Stored in directory: c:\users\kev\appdata\local\pip\cache\wheels\99\3e\95\384eeaa2d641ef0c9e8b46e701737b53ae6a973358887816e0 Building wheel for easy-thumbnails (setup.py) ... done Created wheel for easy-thumbnails: filename=easy_thumbnails-2.7-py2.py3-none-any.whl size=69700 sha256=ce66afcd2ca403acf9225b53eed60300c8d37c3bad53dcdf37ebc3a25550bdc6 Stored in directory: c:\users\kev\appdata\local\pip\cache\wheels\cb\33\00\f7fa4b381ae4cbaf99674fb7a4411339d38e616cfcc41632c5 Building wheel for gevent (pyproject.toml) ... error error: subprocess-exited-with-error Γ Building wheel for gevent (pyproject.toml) did not run successfully. β exit code: 1 β°β> [288 lines of output] running bdist_wheel running build running build_py creating build creating build\lib.win-amd64-cpython-310 creating build\lib.win-amd64-cpython-310\gevent copying src\gevent\ares.py -> build\lib.win-amd64-cpython-310\gevent copying src\gevent\backdoor.py -> build\lib.win-amd64-cpython-310\gevent copying src\gevent\baseserver.py -> build\lib.win-amd64-cpython-310\gevent copying src\gevent\builtins.py -> build\lib.win-amd64-cpython-310\gevent | tl;dr: use python3.8 or update requirement.txt versions. More info: The combination of (A) gevent==20.9, (B) windows 10, and (C) python3.10 does not have a prebuilt wheel. You can check this kind of stuff by going to pypi and looking what is offered for downloads (https://pypi.org/project/gevent/20.9.0/#files) I'm assuming that you will not be able to compile things from source yourself (it's a hassle), so you need to change (A), (B) or (C). (A). Changing this means relaxing or updating the version requirements. For example gevent==21.12 does have a wheel for windows and python3.10 (B). Changing this means not using windows, probably not an option (C). Changing this means using older python version. For example, python3.8 has a wheel for gevent==20.9. | 10 | 5 |
72,781,651 | 2022-6-28 | https://stackoverflow.com/questions/72781651/how-does-this-discouraging-python-one-liner-that-displays-the-mandlebrot-set-wor | Could you please explain how it works. In particular: where do a,c,n,s,z come from? And how to convert this to regular functions and their calls? x=10 y=5 abs( (lambda a: lambda z, c, n: a(a, z, c, n)) (lambda s, z, c, n: z if n == 0 else s(s, z*z+c, c, n-1)) (0, 0.02*x+0.05j*y, 10) ) It was even longer, but I figured out the rest of it. print('\n'.join([''.join(['*'if abs((lambda a: lambda z, c, n: a(a, z, c, n))(lambda s, z, c, n: z if n == 0 else s(s, z*z+c, c, n-1))(0, 0.02*x+0.05j*y, 40)) < 2 else ' ' for x in range(-80, 20)]) for y in range(-20, 20)])) This monster expression outputs a pseudo-graphic Mandelbrot fractal to the console. | This function (lambda a: lambda z, c, n: a(a, z, c, n)) can be rewritten as def apply_function_to_itself(funk): def inner(arg1, arg2, arg3): funk(funk, arg1, arg2, arg3) return inner This is a functional thing that calls a function with itself as the first argument. This is necessary for recursion in functional languages but in python there is usually a better way. This function (lambda s, z, c, n: z if n == 0 else s(s, z*z+c, c, n-1)) can be rewritten as this: def transform_n_times(funk, value, c, n): if n==0: return arg1 else: return funk(funk, value * value + constant, constant, n-1) We know from the previous function that funk === call_if_n_is_not_0, so this is a recursive function that will apply a operation n times before stopping. So basically combined we can rewrite it like this: def transform_n_times(value, constant, n): for i in range(n): value arg1 * arg1 + arg2 return value Much simpler. Understanding the entire program The entire code could be written like this: def apply_function_to_itself(funk): def inner(arg1, arg2, arg3): funk(funk, arg1, arg2, arg3) return inner def transform_n_times(funk, value, constant, n): if n==0: return arg1 else: return funk(funk, value * value + constant, constant, n-1) abs(apply_function_to_itself(call_if_n_is_not_0)(0, 0.02*x+0.05j*y, 10)) Or like this. We can get rid of the apply_function_to_itself entirely but keep the recursion like this: def transform_n_times(value, constant, n): if n==0: return arg1 else: return transform_n_times(funk, value * value + constant, constant, n-1) abs(transform_n_times(0, 0.02*x+0.05j*y, 10)) Or we can delete the recursion entirely and just use a loop: constant = 0.02*x+0.05j*y value = 0 n = 10 for i in range(n): a1 = value * value + constant abs(value) The basic equation behind the mandlebrot set is f(x) = x^2+c which matches this. | 4 | 4 |
72,741,543 | 2022-6-24 | https://stackoverflow.com/questions/72741543/how-to-reconnect-to-an-existing-jupyter-notebook-session-in-vs-code | I remember I was able to reconnect to an existing Jupyter Notebook session in VS Code before by changing the kernel for a notebook. Now the option to reconnect to an existing session is gone, see: How do I reconnect to an existing Jupyter Notebook session in VS Code? To be clear, the sessions were never shut down. In fact, I can still see them running in the Running tab of my browser version of Jupyter Notebook, although clicking on them results in a 404 error: The Jupyter kernel is running on a remote server. I use an SSH session to connect back to it when I work. Current versions: VS Code is v1.68.1 and the Jupyter extension on the remote machine is v2022.5.1001601848, if that helps. | If you would like to connect to an existing Jupyter server, you can do so by running the command Jupyter: Specify Jupyter Server for Connections and selecting the appropriate remote server from the list. If the required server is not on that list, you can enter its URI manually. | 11 | 3 |
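One way to recover the URL and token to paste into that command is to ask Jupyter which servers are still running. A minimal sketch, assuming the classic notebook server from the question is what is running and that the jupyter CLI is on the remote machine's PATH:

import subprocess

# Run this on the remote server (e.g. inside the SSH session); copy the printed
# http://...:8888/?token=... URI into "Jupyter: Specify Jupyter Server for Connections".
result = subprocess.run(["jupyter", "notebook", "list"], capture_output=True, text=True)
print(result.stdout)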
72,757,575 | 2022-6-25 | https://stackoverflow.com/questions/72757575/error-no-matching-distribution-found-and-error-could-not-find-a-version-that-s | I'm trying to install the requirements of a GitHub clone in a virtual environment created by py -m virtualenv objectremoval command, but I always encounter the "Could not find a version that satisfies the requirement" Error. After cloning the repo, I performed the following lines; D:\test1\Deep-Object-Removal>py -m virtualenv objectremoval created virtual environment CPython3.10.2.final.0-64 in 1474ms creator CPython3Windows(dest=D:\test1\Deep-Object-Removal\objectremoval, clear=False, no_vcs_ignore=False, global=False) seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=C:\Users\user\AppData\Local\pypa\virtualenv) added seed packages: pip==22.1.2, setuptools==62.6.0, wheel==0.37.1 activators BashActivator,BatchActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator D:\test1\Deep-Object-Removal>cd objectremoval\Scripts D:\test1\Deep-Object-Removal\objectremoval\Scripts>activate (objectremoval) D:\test1\Deep-Object-Removal\objectremoval\Scripts>cd .. (objectremoval) D:\test1\Deep-Object-Removal\objectremoval>cd .. (objectremoval) D:\test1\Deep-Object-Removal>pip install -r requirements.txt Defaulting to user installation because normal site-packages is not writeable ERROR: Could not find a version that satisfies the requirement opencv_python==3.3.0.10 (from versions: 3.4.0.14, 3.4.10.37, 3.4.11.39, 3.4.11.41, 3.4.11.43, 3.4.11.45, 3.4.13.47, 3.4.15.55, 3.4.16.57, 3.4.16.59, 3.4.17.61, 3.4.17.63, 3.4.18.65, 4.3.0.38, 4.4.0.40, 4.4.0.42, 4.4.0.44, 4.4.0.46, 4.5.1.48, 4.5.3.56, 4.5.4.58, 4.5.4.60, 4.5.5.62, 4.5.5.64, 4.6.0.66) ERROR: No matching distribution found for opencv_python==3.3.0.10 (objectremoval) D:\test1\Deep-Object-Removal>pip install opencv_python==3.3.0.10 Defaulting to user installation because normal site-packages is not writeable ERROR: Could not find a version that satisfies the requirement opencv_python==3.3.0.10 (from versions: 3.4.0.14, 3.4.10.37, 3.4.11.39, 3.4.11.41, 3.4.11.43, 3.4.11.45, 3.4.13.47, 3.4.15.55, 3.4.16.57, 3.4.16.59, 3.4.17.61, 3.4.17.63, 3.4.18.65, 4.3.0.38, 4.4.0.40, 4.4.0.42, 4.4.0.44, 4.4.0.46, 4.5.1.48, 4.5.3.56, 4.5.4.58, 4.5.4.60, 4.5.5.62, 4.5.5.64, 4.6.0.66) ERROR: No matching distribution found for opencv_python==3.3.0.10 (objectremoval) D:\test1\Deep-Object-Removal>pip3 install opencv_python==3.3.0.10 Defaulting to user installation because normal site-packages is not writeable ERROR: Could not find a version that satisfies the requirement opencv_python==3.3.0.10 (from versions: 3.4.0.14, 3.4.10.37, 3.4.11.39, 3.4.11.41, 3.4.11.43, 3.4.11.45, 3.4.13.47, 3.4.15.55, 3.4.16.57, 3.4.16.59, 3.4.17.61, 3.4.17.63, 3.4.18.65, 4.3.0.38, 4.4.0.40, 4.4.0.42, 4.4.0.44, 4.4.0.46, 4.5.1.48, 4.5.3.56, 4.5.4.58, 4.5.4.60, 4.5.5.62, 4.5.5.64, 4.6.0.66) ERROR: No matching distribution found for opencv_python==3.3.0.10 (objectremoval) D:\test1\Deep-Object-Removal>py -m pip install -r requirements.txt Defaulting to user installation because normal site-packages is not writeable ERROR: Could not find a version that satisfies the requirement opencv_python==3.3.0.10 (from versions: 3.4.0.14, 3.4.10.37, 3.4.11.39, 3.4.11.41, 3.4.11.43, 3.4.11.45, 3.4.13.47, 3.4.15.55, 3.4.16.57, 3.4.16.59, 3.4.17.61, 3.4.17.63, 3.4.18.65, 4.3.0.38, 4.4.0.40, 4.4.0.42, 4.4.0.44, 4.4.0.46, 4.5.1.48, 4.5.3.56, 4.5.4.58, 4.5.4.60, 4.5.5.62, 4.5.5.64, 4.6.0.66) ERROR: No 
matching distribution found for opencv_python==3.3.0.10 (objectremoval) D:\test1\Deep-Object-Removal>pip install tensorflow==1.10.1 Defaulting to user installation because normal site-packages is not writeable ERROR: Could not find a version that satisfies the requirement tensorflow==1.10.1 (from versions: 2.8.0rc1, 2.8.0, 2.8.1, 2.8.2, 2.9.0rc0, 2.9.0rc1, 2.9.0rc2, 2.9.0, 2.9.1) ERROR: No matching distribution found for tensorflow==1.10.1 (objectremoval) D:\test1\Deep-Object-Removal>pip install numpy==1.13.3 The Error was too long for numpy, you can find it with this hyperlink: https://justpaste.it/7vxkv Also, checking the local packages shows me all installed packages in the global environment. I think this is a relevant problem too. (objectremoval) D:\test1\Deep-Object-Removal>pip list --local Package Version ---------------------------- ------------------- absl-py 1.0.0 argon2-cffi 21.3.0 argon2-cffi-bindings 21.2.0 asttokens 2.0.5 astunparse 1.6.3 attrs 21.4.0 backcall 0.2.0 beautifulsoup4 4.11.1 black 22.1.0 bleach 5.0.0 cachetools 5.0.0 certifi 2021.10.8 cffi 1.15.0 . . . . Also where python and where pip commands does not show the virtual environment paths if I execute them in the main virtual environment folder. (objectremoval) D:\test1\Deep-Object-Removal>where python C:\Program Files\Python310\python.exe C:\Users\user\miniconda3\python.exe C:\Users\user\AppData\Local\Microsoft\WindowsApps\python.exe (objectremoval) D:\test1\Deep-Object-Removal>where pip C:\Program Files\Python310\Scripts\pip.exe C:\Users\user\AppData\Roaming\Python\Python310\Scripts\pip.exe C:\Users\user\miniconda3\Scripts\pip.exe However, it adds an extra path if I execute the where python and where pip commands in Scripts folder: (objectremoval) D:\test1\Deep-Object-Removal\objectremoval\Scripts>where pip D:\test1\Deep-Object-Removal\objectremoval\Scripts\pip.exe C:\Program Files\Python310\Scripts\pip.exe C:\Users\user\AppData\Roaming\Python\Python310\Scripts\pip.exe C:\Users\user\miniconda3\Scripts\pip.exe (objectremoval) D:\test1\Deep-Object-Removal\objectremoval\Scripts>where python D:\test1\Deep-Object-Removal\objectremoval\Scripts\python.exe C:\Program Files\Python310\python.exe C:\Users\user\miniconda3\python.exe C:\Users\user\AppData\Local\Microsoft\WindowsApps\python.exe Neither trying to install the packages in the main folder nor in the Scripts folder gave me desired results. My requirements.txt file includes only the following packages: opencv_python==3.3.0.10 tensorflow==1.10.1 numpy==1.13.3 Also, I have tried to upgrade pip, setuptools, and wheel versions. I have also tried to create a virtual environment with conda by conda create -n <venvname> command, and tried several different python versions, but these were also not helpful. Can you please help me to solve the problem? Thank you for your valuable time. Python: 3.10.2 OS: Windows10 x64 Pro Kind regards, | The chosen package called Deep-Object-Removal seems to be very outdated (last commit 4years ago) and not maintained any longer, i would suggest to search for any currently supported alternative. 
If you try to install this version of opencv_python in a clean Python venv (with Python 3.10) you get an error: pip install opencv_python==3.3.0.10 ERROR: Could not find a version that satisfies the requirement opencv_python==3.3.0.10 (from versions: 3.4.0.14, 3.4.10.37, 3.4.11.39, 3.4.11.41, 3.4.11.43, 3.4.11.45, 3.4.13.47, 3.4.15.55, 3.4.16.57, 3.4.16.59, 3.4.17.61, 3.4.17.63, 3.4.18.65, 4.3.0.38, 4.4.0.40, 4.4.0.42, 4.4.0.44, 4.4.0.46, 4.5.1.48, 4.5.3.56, 4.5.4.58, 4.5.4.60, 4.5.5.62, 4.5.5.64, 4.6.0.66) ERROR: No matching distribution found for opencv_python==3.3.0.10 If you have a look at the files on PyPI for this version of opencv_python, you will notice that this version of the package has been yanked. Additionally, there is no package for Python 3.10; the last supported wheel file seems to be for Python 3.6. You can try to adapt the requirements.txt with a newer version of opencv_python, or install Python 3.6, download the specific version by hand and install the wheel file (but this may lead to new errors). But again, I would recommend using another, currently supported package instead of Deep-Object-Removal. | 7 | 6 |
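To see concretely why a cp36-only wheel can never match this interpreter, here is a small sketch that prints the wheel tags the running Python accepts; it assumes the third-party packaging library is importable (pip bundles a copy, but a direct import may need pip install packaging).

from packaging.tags import sys_tags

# On a 64-bit CPython 3.10 Windows install the most-preferred tags look like
# cp310-cp310-win_amd64, so a cp36 wheel is never considered.
for tag in list(sys_tags())[:10]:
    print(tag)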
72,769,282 | 2022-6-27 | https://stackoverflow.com/questions/72769282/how-to-split-list-data-and-populate-to-a-dataframe | I have a list of items and want to clean the data with certain conditions, with the output being a DataFrame. Here's the list: [ "Onion per Pack|500 g|Rp18,100|Rp3,700 / 100 g|Add to cart", "Shallot per Pack|250 g|-|49%|Rp22,300|Rp11,300|Rp4,600 / 100 g|Add to cart", "Spring Onion per Pack|250 g|Rp7,000|Rp2,800 / 100 g|Add to cart", "Green Beans per Pack|250 g|Rp5,900|Rp2,400 / 100 g|Add to cart", ] into name unit discount price unit price Onion per Pack 500 g Rp18,100 Rp3,700 / 100 g Shallot per Pack 250 g 49% Rp22,300 Rp11,300 Spring Onion per Pack 250 g Rp7,000 Rp2,800 / 100 g Green Beans per Pack 250 g Rp5,900 Rp2,400 / 100 g Currently my code is: datas = pd.DataFrame() for i in item: long = len(i.split("|")) if long == 5: data = {"name": i.split("|")[0], "unit": i.split("|")[2], "discount": "", "price": i.split("|")[3], "unit price": i.split("|")[4]} dat = pd.DataFrame(data) datas.append(dat) else: data = {"name": i.split("|")[0], "unit": i.split("|")[2], "discount": i.split("|")[4], "price": i.split("|")[6], "unit price": i.split("|")[7]} dat = pd.DataFrame(data) datas.append(dat) Is there a more efficient way? A shorter way to achieve this? | I'd use a map; try: ids = { 5: [0, 2, -1, 3, 4], 8: [0, 2, 4, 6, 7] } rows = [] for i in item: i = i.split("|") long = len(i) rows.append({ "name": i[ids[long][0]], "unit": i[ids[long][1]], "discount": i[ids[long][2]] if ids[long][2] != -1 else "", "price": i[ids[long][3]], "unit price": i[ids[long][4]], }) datas = pd.DataFrame(rows) | 4 | 0
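A vectorized sketch of the same idea that avoids the Python-level loop; note the column positions below follow the sample rows and the desired table in the question rather than the indices used in the loop above, so treat them as an assumption to adjust against the real data.

import pandas as pd

s = pd.Series(items).str.split("|")        # "items" is the input list from the question
n = s.str.len()
datas = pd.DataFrame({
    "name":       s.str[0],
    "unit":       s.str[1],
    "discount":   s.str[3].where(n == 8, ""),
    "price":      s.str[2].where(n == 5, s.str[4]),
    "unit price": s.str[3].where(n == 5, s.str[5]),
})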
72,732,058 | 2022-6-23 | https://stackoverflow.com/questions/72732058/match-case-invalid-syntax-with-spyder | I am using Spyder 5.3.1 and Python 3.10.4 within a virtual environment, on Windows 10. I know that with Python 3.10 came the match statement. However, whenever I use the match keyword inside a script, the following error appears: Code Analysis Invalid syntax (pyflakes E) But I can run the script correctly without any problem. So what can be the issue here? Moreover, if I try directly in the IPython console, the match keyword is immediately recognized. | It's not so much an issue with Spyder as it is with the Pyflakes system it uses for linting code. match is a new (soft) keyword in Python 3.10. Pyflakes 2.4.0 only supports up to Python 3.8 currently. Pyflakes 2.5.0 is not out yet, but will cover up to Python 3.11, and should lint the new match keyword properly. | 5 | 3 |
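For reference, a minimal illustrative example of the kind of code that trips the linter; it runs fine on Python 3.10 even while the bundled Pyflakes 2.4.0 flags the match line, and the function itself is purely hypothetical.

def http_status(code: int) -> str:
    match code:                 # reported as "invalid syntax" by Pyflakes 2.4.0
        case 200:
            return "OK"
        case 404:
            return "Not Found"
        case _:
            return "Unknown"

print(http_status(404))         # prints: Not Found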
72,764,752 | 2022-6-26 | https://stackoverflow.com/questions/72764752/how-do-i-create-a-an-annualized-growth-rate-for-gdp-column-in-my-dataframe | I basically want to apply this formula, ((New/Old)^4 - 1) * 100, to a data frame I have and create a new column called "Annualized Growth Rate". I have been working off of the FRED GDP data set. I have a data set that looks something like this (not the real numbers) Index GDP 0 100 1 101 2 103 3 107 I want to add a column on the right that applies the formula I mentioned above so that the annualized growth rate numbers would start appearing in the index = 1 position Index GDP Annualized_Growth_Rate 0 100 NaN 1 101 4.060 2 103 8.159 3 107 16.462 How can I go about doing this? I was originally trying to do it using iterrows(), something like for index, row in df.iterrows(): df['Annualized Growth Rate'] = df['GDP'].loc[index] / df['GDP'].loc[index - 1]... but then index position -1 is out of range. I'm assuming there is an easy way to go about this that I just don't know. I also know you aren't really supposed to use something like iterrows. | You can use shift to access the previous row and vectorized operations for subtraction, division, power and multiplication: df['Annualized_Growth_Rate'] = df['GDP'].div(df['GDP'].shift()).pow(4).sub(1).mul(100) Output: Index GDP Annualized_Growth_Rate 0 0 100 NaN 1 1 101 4.060401 2 2 103 8.159184 3 3 107 16.462528 | 4 | 3
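An equivalent sketch using pct_change, which already computes New/Old - 1, in case that reads more naturally:

df['Annualized_Growth_Rate'] = ((1 + df['GDP'].pct_change()) ** 4 - 1) * 100

Both versions leave NaN in the first row, since there is no previous row to compare against.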
72,764,625 | 2022-6-26 | https://stackoverflow.com/questions/72764625/using-both-or-and-in-an-if-statement-python | def alarm_clock(day, vacation): if day == 0 or day == 6 and vacation != True: return "10.00" else: return "off" print(alarm_clock(0, True)) Why does this return "10.00"? In my mind it should return "off". Yes, day is equal to 0, but vacation is True, and the if statement's first line states that it should only be executed if vacation is not True. | In Python, and binds tighter than or. So your statement is equivalent to this: if day == 0 or (day == 6 and vacation != True): To get the result you want, you must group the conditions with parentheses yourself: if (day == 0 or day == 6) and vacation != True: | 6 | 22
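A quick sketch that makes the grouping visible with the question's values (day=0, vacation=True):

day, vacation = 0, True

print(day == 0 or (day == 6 and vacation != True))   # True  -> alarm_clock returns "10.00"
print((day == 0 or day == 6) and vacation != True)   # False -> alarm_clock would return "off"

As a side note, not vacation is the more idiomatic way to write vacation != True.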